SERIES ANALYSIS: Early Silver Dollars, Draped Bust 1795-1798 (Part 2) Published on December 12, 2019 Part 2 of Greg Reynolds's series analysis of the Draped Bust silver dollars that were issued from 1795 to 1798. These are also known as "Small Eagle reverse" dollars and are quite popular among collectors. This article appeared in print in the November 2019 issue of The Monthly Greysheet. From 1795 to 1798, silver dollars were minted with a Draped Bust obverse and a so-called 'Small Eagle' reverse. This eagle motif is not really small; it is just called a small eagle to distinguish it from the grand-style 'Heraldic Eagle' reverse that was introduced later. A list of design types of silver dollars appears in my article in last month's Greysheet. Of Draped Bust, Small Eagle silver dollars, ten major varieties are listed in the Greysheet, and these are widely accepted: - 1795 Off-Center Bust - 1795 Centered Bust - 1796 Small Date, Small Letters - 1796 Small Date, Large Letters - 1796 Large Date, Small Letters - 1797 9×7 Stars, Small Letters - 1797 9×7 Stars, Large Letters - 1797 10×6 Stars - 1798 15 Stars - 1798 13 Stars The names "Small Letters" and "Large Letters" refer to the size of the letters in the legend, 'UNITED STATES OF AMERICA,' on the reverse. The difference in size is readily noticeable, and the large letters are much closer to the nearby dentils. The notation '9×7' refers to obverse dies where nine stars are at the left and seven at the right, from the perspective of a collector viewing the coin. There are thus nine stars to the left of the letters of LIBERTY and seven to the right, for a total of sixteen, the number of states in the union at the time. On June 1, 1796, Tennessee became the sixteenth state. The notation '10×6' refers to an arrangement where ten stars are on the left and the other six are on the right, again from the viewer's perspective. 
More than one thousand 1795 Draped Bust silver dollars survive today. For the off-center variety, the bust of Miss Liberty was accidentally punched too far to the left, and her hair touches or just about touches the first star. A ribbon is close to the fifth star. There is a considerable amount of space between Miss Liberty's face and the stars at the right. On the centered bust variety, a garment on Miss Liberty's chest touches or just about touches the sixteenth star, the bottom star at the right. On the off-center bust variety, in contrast, there is noticeable space between the sixteenth star and Miss Liberty's chest. According to the September Greysheet, the centered bust and off-center 1795 Draped Bust dollars are worth around the same amounts in grades from G4 to AU50. My view, in contrast, is that, all other factors being more or less equal, the centered bust 1795 dollars are significantly scarcer and are worth more in almost all grades. Indeed, the centered bust 1795 dollars tend to bring more at auction, but the physical characteristics of each individual 1795 Draped Bust dollar affect its value more than whether it is a centered bust or an off-center bust variety. PCGS did not distinguish centered from off-center busts in its data for many years. Consequently, the PCGS population now reported for centered bust 1795 Draped Bust dollars includes a substantial number of off-center bust dollars that were graded before these two varieties were classified separately at PCGS. It makes sense to list a few public auction and Internet sale results to provide an idea of current values and availability. It is important to keep in mind that each coin is distinctive and that bidding competition varies in magnitude and depth. Auction results should not be thought of as conclusive indicators of wholesale or retail values; auction prices need to be analyzed along with additional information. 
In March 2018, Stack’s-Bowers auctioned a PCGS graded VG10 1795 Off-Center Bust dollar for $2,160. On June 7, 2019, at a Long Beach Expo, Heritage auctioned a PCGS graded Fine-12 1795 Off-Center Bust dollar for $3,360, more than the $2,620 result for a PCGS graded F15 Off-Center Bust at the FUN event in January 2019. In April 2019, Heritage auctioned a PCGS graded XF40 1795 Off-Center Bust silver dollar for $8,400. Heritage had earlier auctioned the exact same coin for $6,756.25 in January 2016. On March 14, 2019, Heritage auctioned a PCGS graded XF40 1795 Off-Center Bust for $6,600 and a PCGS graded XF45 1795 Off-Center Bust for $7,800. In Heritage sales last autumn, an NGC graded VF30 1795 Off-Center Bust brought more, $4,560, than a PCGS graded VF30 1795 Off-Center Bust, $4,200. On Sept. 5, 2019, at a Long Beach Expo, a PCGS graded VF-35 Off-Center Bust dollar realized $5,040. In August 2019, at an ANA Convention, a PCGS graded XF45 1795 Off-Center Bust was auctioned for $9,300, which seems to be a retail-level price. At the FUN event in January 2019, an NGC graded AU50 1795 Off-Center Bust dollar brought $9,000. Budget-minded collectors may like to know that an NGC graded Fair-02 1795 Centered Bust dollar was auctioned by Stack’s-Bowers for $1,057.50 back in August 2017. It is curious that, on July 12, 2018, Heritage sold two 1795 Centered Bust dollars for the same price, $7,200 each, yet one was PCGS graded XF40 and the other was PCGS graded XF45. In August 2018, Stack’s-Bowers auctioned a PCGS graded AU53 1795 Centered Bust dollar for $13,200. In uncirculated grades, 1795 dollars are the least rare of the Draped Bust, Small Eagle design type. For the whole design type, PCGS has assigned mint state grades (MS60 or higher) to only sixty-two coins, and NGC to sixty-nine coins. The combined total of one hundred and thirty-one probably amounts to fifty-five to seventy different coins. 
There is much demand for these, and collectors should consult experts before acquiring one. Prices may range from $60,000 to more than $1 million! Of all varieties of 1796 silver dollars, there are fewer than a dozen that are truly mint state. Of all three major varieties, 1796 dollars that are PCGS or NGC graded from VF20 to AU50 are somewhat available. Also, there are more than two hundred ungradable 1796 silver dollars around. On April 21, 2019, the firm of GreatCollections sold an NGC graded F15 1796 dollar for $2,707.88. Although not designated as such, this 1796 appears to be of the Small Date, Large Letters variety. In July 2019, at the Summer FUN Convention, Heritage auctioned a PCGS graded XF45 1796 Small Date, Small Letters dollar for $11,400. The exact same coin realized $13,800 in a Heritage event in December 2010. On July 14, 2019, the firm of David Lawrence sold a PCGS graded VF30 Small Date, Large Letters 1796 dollar for $4,700. On June 23, 2019, GreatCollections sold a PCGS graded XF40 Small Date, Large Letters dollar, with a CAC sticker, for $9,590.62. As all 1796 dollars with a Large Date on the obverse have "Small Letters" on the reverse, there is no need to mention the small letters when referring to the large date. On August 14, 2019, Stack’s-Bowers sold a PCGS graded VF30 coin for $4,080. Almost exactly one year earlier, Stack’s-Bowers auctioned an NGC graded XF45 1796 Large Date for $7,800. In grades below AU50, a 1797 9×7 Stars, Large Letters dollar or a 1797 10×6 would cost around the same as a 1796 silver dollar. In April 2019, Heritage auctioned a PCGS graded F15 1797 9×7, Large Letters silver dollar for $3,360. On August 25, 2019, GreatCollections sold a PCGS graded XF40 1797 9×7, Large Letters coin for $8,010. On August 15, 2019, Stack’s-Bowers auctioned a PCGS graded AU53 1797 9×7, Large Letters, for $12,000 and a PCGS graded AU55 coin that was struck from the same pair of dies for $31,200. 
The 1797 9×7 stars obverse, Small Letters reverse, major variety is worth a substantial premium over the 1797 9×7, Large Letters and over the 1797 10×6. This premium tends to be very substantial, percentage-wise, for coins that grade above XF40. I estimate that fewer than one hundred and thirty different 1797 9×7 Small Letters dollars have been assigned numerical grades by PCGS or NGC, and another forty-five to sixty-five coins have been or would be found to be non-gradable by both services. It is noteworthy that just nine have received stickers of approval from CAC. There is only one mint state 1797 9×7 Small Letters silver dollar certified by PCGS or NGC, and it is the only uncirculated representative of this major variety that I have ever seen or heard about. It was NGC graded MS64 and CAC approved before the auction of most of Eric Newman’s early U.S. silver coins in November 2013. This 1797 9×7 Small Letters dollar then realized $381,875. After crossing into a PCGS holder, this finest known 1797 9×7 Small Letters dollar realized considerably less, $264,000, in an auction in November 2017. This weak to moderate result is curious. While market levels have fallen since November 2013, a mild retail price now would not be much different from a mild retail price for this coin in November 2017, in the range of $320,000 to $400,000. Noticeably circulated 1797 9×7 Small Letters dollars can be found without too much difficulty. On June 22, 2018, Stack’s-Bowers auctioned a PCGS graded G6 1797 9×7 Small Letters dollar for $2,280. In February 2019, Heritage auctioned an NGC graded F15 coin of this major variety for $3,120. The total number of surviving 1798 Small Eagle dollars is much lower than the total of all 1797 Small Eagle dollars extant. Heraldic, “Large Eagle” bust silver dollars were also minted in 1798. 
There are two major varieties of 1798 Small Eagle dollars: those with thirteen stars (7 left, 6 right) on the obverse and those with fifteen stars (8 left, 7 right) on the obverse. I maintain that the fifteen stars variety is much scarcer overall. Price guides, however, tend to value these about the same in grades below AU50. In January 2018, Heritage sold a PCGS graded VF20 1798 13 Stars coin for $4,080. On July 14, 2019, the firm of David Lawrence sold a PCGS graded VF30 1798 13 Stars coin for $5,000. On February 28, 2019, Stack’s-Bowers auctioned two PCGS graded XF40 15 Stars variety 1798 Small Eagle dollars, in consecutive lots. The first brought $13,200 and the second, $8,400. To understand the value and quality of an individual coin, there is often a need to examine it in person. In sum, a set of all ten major varieties of Draped Bust, Small Eagle silver dollars is a practical objective. Not one is an extreme rarity. Collectors focusing upon coins that grade above AU53 should hire at least one expert for advice and other services. Analyzing high grade early U.S. coins is very difficult and requires years of experience. A collector who builds a set without a paid consultant should endeavor to personally examine a large number of bust dollars and to ask many pertinent questions. Almost invariably, early silver dollars have been cleaned or mistreated at one time or another. It is important to discuss the nature and degree of past cleanings, dippings, and natural retoning. For well circulated pieces, mildly awkward colors stemming from past cleanings are considered normal and are usually not that important. It is fortunate that thousands of Draped Bust, Small Eagle silver dollars survive. These attractive, 18th century U.S. coins may be studied and enjoyed by a substantial number of collectors and other coin enthusiasts. # # # ©2019 Greg Reynolds 
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9557871222496033, "language": "en", "url": "https://www.hipaaguide.net/purpose-of-hipaa/", "token_count": 561, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1591796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d31ff718-38d3-4616-93ca-3dfc658db1e1>" }
The Health Insurance Portability and Accountability Act, or HIPAA, is a vital piece of legislation that affects the U.S. healthcare market. But some wonder: what is the purpose of HIPAA? Healthcare professionals frequently complain about the constraints of HIPAA. Are the advantages of the law really worth the extra amount of work? HIPAA first came onto the scene in 1996. In its initial form, HIPAA helped employees who were between jobs continue to get their medical insurance coverage. The legislation likewise required healthcare institutions to implement measures to protect patient data from healthcare fraud, though it took many years to put the rules for doing so into writing. HIPAA additionally developed a number of new standards that were meant to enhance the efficiency of providing services in the healthcare field. It required healthcare institutions to adopt the standards to lessen the burden of paperwork. Code sets needed to be utilized together with patient identifiers, which aided the efficient transfer of healthcare data from one healthcare company or insurance company to another, streamlining eligibility verifications, billing, payments, and other healthcare procedures. HIPAA likewise discourages the tax deduction of interest on life insurance loans, enforces group health insurance requirements, and standardizes how much may be saved in a pre-tax medical savings account. HIPAA is a comprehensive law integrating the requirements of a number of other laws, such as the Public Health Service Act, the Employee Retirement Income Security Act, and, fairly recently, the Health Information Technology for Economic and Clinical Health (HITECH) Act. HIPAA is now best known for safeguarding the privacy of patients and making sure patient data is suitably secured, with the requirements put in place by the HIPAA Privacy Rule of 2000 and the HIPAA Security Rule of 2003. 
The requirement to notify individuals of a breach of their protected health information started with the Breach Notification Rule in 2009. The objective of the HIPAA Privacy Rule was to place limitations on the permitted uses and disclosures of PHI, stipulating when, with whom, and under what conditions medical information may be shared. One more crucial purpose of the HIPAA Privacy Rule was to give individuals access to their health information upon request. The objective of the HIPAA Security Rule is principally to make sure electronic protected health information (ePHI) is adequately secured, access to ePHI is controlled, and an auditable trail of PHI activity is kept. So, to sum up, what is the purpose of HIPAA? To boost efficiency in the healthcare field, to improve the portability of medical health insurance, to safeguard the privacy of patients and health plan members, and to ensure health data is kept safe and patients are informed of breaches of their personal health data.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9262122511863708, "language": "en", "url": "https://www.techworksasia.com/the-14th-five-year-plan-encapsulates-chinas-bold-technology-strategy/", "token_count": 913, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1787109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4d014e45-e43f-401a-91d1-75fce6bf81d0>" }
Cutting-edge semiconductor microchips, artificial intelligence (AI), robotics, biomedicine, and ultrafast internet are among the areas being singled out for special focus as China lays out its 14th Five-Year Plan. China’s National People’s Congress (NPC), or “two sessions” (“lianghui”), took place last month, with passage of the 14th Five-Year Plan (2021-25) headlining the agenda. Although wide-ranging in scope, the “14FYP” is where we look for clues to China’s evolving plans for its high-tech base. First, basic research in technology is to receive significant prioritisation, within a 10-year horizon. According to the People’s Daily Online, China will formulate and implement a 10-year action plan for basic research, which specifies for the first time the proportion of China’s basic research expenditure within the country’s total spending on research and development (R&D). Government funding for basic research is expected to account for over 8% of China’s total expenditure on R&D by 2025. R&D is due to account for a higher percentage of gross domestic product than in the previous five-year period. China spent 150 billion yuan (US$23 billion) on fundamental research last year, according to Ye Yujiang, an official at the Ministry of Science and Technology. Caixingglobal.com also revealed a drive for self-reliance in science and technology, establishing three major indicators. From 2021 to 2025, China’s research and development (R&D) spending will increase by more than 7% a year. The number of high-value patents per 10,000 people will increase from 6.3 in 2020 to 12 in 2025, while the added value of the digital-economy core industry as a proportion of GDP will rise from 7.8% in 2020 to 10% in 2025. Notably, the 14th Five-Year Plan will list seven strategic areas considered essential to “national security and overall development”. These include AI, quantum computing, integrated circuits (i.e. semiconductor chips), genetic and biotechnology research, neuroscience and aerospace. 
China plans to create national laboratories and bolster academic programs to incubate and buttress some of these technologies. Vaccines, deep-sea exploration and voice recognition are also targets for development. The expectation is that by 2035, China will have made significant breakthroughs in core technologies, while seeking to be among the most innovative nations globally. The word “innovation” occurs frequently in the draft of the 14th Five-Year Plan, “Insisting on Innovation-Driven Development”. China does of course have considerable motivation to wean its technology base off its reliance on imported technology, particularly in the field of semiconductors. This requires the development of an independent supply chain, but establishing a world-class domestic chip-making capability has become an urgent need as the US curbs progress at China’s leading foundry (contract chip maker), the Shanghai-based Semiconductor Manufacturing International Corp (SMIC). Leading the way in comms? In the meantime, an impressively forward-looking communications infrastructure is setting the pace in terms of global competitiveness. China likely had 690,000 5G base stations operating across the country by the end of 2020, compared with 50,000 in the US. At the GTI Summit, MWC Shanghai 2021, Liu Liehong, Vice Minister of the Ministry of Industry and Information Technology, said that China has deployed 718,000 5G base stations, accounting for about 70% of the world total. The 14FYP sets a goal of raising the percentage of 5G users in China to more than 50%, and of laying the groundwork for 6G networks. The 14FYP establishes formidable objectives, and innovation will also need to include some creative financing, presumably. Certainly, in a speech to the NPC, Chinese Premier Li Keqiang said that China will revise regulations and policies to support the flow of venture capital into startups, free up bank lending and extend tax incentives to encourage research and development. 
The 14FYP looks set to kickstart a fascinating and promising five years that could well provide foreign investors with some remarkable opportunities. Photo: Zhang Kaiyv, Unsplash
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9475435018539429, "language": "en", "url": "https://canceravenue.com/covid-19-great-recession/", "token_count": 855, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6b35bd6c-b5b9-4a95-a589-853f37b59af3>" }
COVID-19 Prediction / COVID-19 Great Recession Today the media is flooded with headlines about a COVID-19 great recession. In 2018, Bill Gates predicted a crisis related to viruses. In an interview in April 2018, Bill Gates said: “There is one area, though, where the world isn’t making much progress, and that’s pandemic preparedness. This should concern us all because if history has taught us anything, it’s that there will be another deadly global pandemic.” Here we are, two years later: a new virus emerged and a state of pandemic was declared. The COVID-19 pandemic is claiming thousands of lives, has brought economies to a standstill, and has put nations in chaos. Globally, the novel virus has caused socioeconomic deterioration. The most affected are poor countries, and they are the ones whose recovery will take many years. The United Nations Development Programme released a recent report emphasizing the lost incomes in poor countries, predicting a loss exceeding $220 billion. For instance, India’s losses started after the country went on lockdown for more than 3 weeks. Laborers and unregistered workers are the backbone of the Indian economy. Thousands of young workers in Delhi are on daily wages. After the lockdown, they left the capital with their families for the countryside. Global Impact / COVID-19 Great Recession In China, business slowed dramatically. The United Nations has estimated global economic losses of $2 trillion. In Toronto, the Royal Bank of Canada has forecast a recession after the hit of the novel virus. The impact will be seen in the plunge of oil prices. They also predicted economic growth of up to 0.8% in the first quarter of the year. All RBC forecasts are based on the assumption that the COVID-19 pandemic will end at the close of the first half of the year. Yet the recovery of the economy will be obstructed by persistently low prices. The bank also cut its key interest rate by 0.5%. 
Social & Economic Link The USA – According to Mark Zandi, chief economist at Moody’s Analytics, social distancing is equal to economic distancing. What does that mean? When people do not go to restaurants, malls, or workplaces, there is a lack of goods and a shortage of supplies, which means economic losses. Since the pandemic reached the US, there has been an unprecedented loss of jobs. The US Senate passed a disaster funding package. 1. The International Monetary Fund (IMF) The International Monetary Fund consists of 189 members, and 90 of them have asked for funding because of the pandemic. The IMF declared a world recession and predicted a far worse recession than in 2008. Kristalina Georgieva, IMF managing director, announced a global recession and called on countries with advanced economies to coordinate their efforts to support countries with developing markets. 2. The International Monetary Fund (IMF) A helping hand will be given to the economic and health sectors impacted by COVID-19. Georgieva said the IMF suggested that China and other official creditors temporarily (at least for one year) stop debt collections from underprivileged and the poorest countries. China agreed to engage with the proposal. The IMF will continue to work on a specific project with the Paris Club and the 20 major advanced economies in the world. The idea of this project is to discuss with the creditors (commercial and official) both a support plan and the possibility of debt reduction for the poorest affected countries. The IMF demanded the emergency fund be used to strengthen health systems, including frontline healthcare workers and personal protective equipment. 3. The International Monetary Fund (IMF) The IMF managing director, Georgieva, announced that the IMF is ready to hand out $1 trillion to countries in need. They have started allocating the funds to requesting countries. 
Georgieva also said that central banks and finance ministers have already taken measures to mitigate the financial effect of the pandemic on emerging markets. She also urged central banks to offer trade lines to developing economies.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9420741200447083, "language": "en", "url": "https://fundingsage.com/3-critical-founder-contributions-for-a-startup-money-commitment-and-effort/", "token_count": 735, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.060546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:80d5cd67-7710-49bc-9adf-46ecabd45305>" }
Founder contributions are critical to entrepreneurial startups. There are three major contributions that founders provide to startup businesses: money, commitment, and effort. These are the basic building blocks for creating a company. The difficulty arises in how to value these very different types of founder contributions. A basic premise of economics is the idea that money today is worth more than the same amount in the future due to its potential earning capacity: it can earn interest in the market. This principle is a cornerstone to understanding startup financing. In fact, it applies to the entire process, from initial organization through external funding or sale of the company. Typically, uncertainty is greatest at the early stages of a business’s formation. At this point, there are a significant number of unknowns: the exact nature of demand, whether regulatory approval can be secured, and even the eventual cost of designing, producing and marketing the product or service. Uncertainty creates risk, and risk demands a premium. Therefore, an investment in the early stage is worth more than an investment later. As time passes, the uncertainty begins to lessen, for good or bad. Ideas are translated into concrete plans. Products and services are developed. A sense of the market and the ability to monetize the concept becomes clearer. This clarity reduces risk. Therefore, valuations tend to increase over time, which effectively makes comparable investments in future rounds of funding worth less than before. In general, the greater the reduction in uncertainty, provided the information is positive, the higher the valuations will be. A major question to consider is whether this principle should apply whether the money is actually transferred or simply committed by the founders or owners. The availability of, and “commitment” to, funds can be a fundamental prerequisite for proceeding with a project. 
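The time-value-of-money principle described above can be illustrated with simple compound-interest arithmetic. The following sketch is purely hypothetical: the 8% rate and the dollar amounts are assumptions for illustration, not figures from this article.

```python
# Hypothetical illustration of the time value of money: the same dollar
# amount contributed earlier is worth more, because it has more time to
# compound at the assumed opportunity-cost rate of return.

def future_value(amount, annual_rate, years):
    """Value of `amount` after compounding at `annual_rate` for `years`."""
    return amount * (1 + annual_rate) ** years

rate = 0.08            # assumed annual rate of return (an assumption)
contribution = 10_000  # assumed founder contribution in dollars

# Compare a contribution made at company formation (year 0) with the same
# amount committed two years later, both measured at the five-year mark.
early = future_value(contribution, rate, 5)  # compounds for 5 years
late = future_value(contribution, rate, 3)   # compounds for only 3 years

print(f"Early contribution grows to ${early:,.2f}")
print(f"Late contribution grows to  ${late:,.2f}")
```

Under these assumptions, the early contribution ends up worth roughly $2,100 more than the identical later one, which is why equal dollar amounts contributed at different times should not automatically receive equal equity.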
Founders need the confidence that the funds projected as necessary to create the product or service will be there. Likewise, “effort” in the form of sweat-equity, if recognized as an in-kind contribution, is worth more when expended at an earlier stage. There is a significant risk that, if the project is canceled, those efforts will have no return to the contributor. This creates an opportunity cost: the value of what that time and effort could have produced but didn’t, because you made this investment. All of these relationships must be formally documented to reduce the possible risks. - The physical contribution of money, the “irrevocable commitment” of funds, and the effort of sweat-equity can be treated as Capital Contributions and reflected as such in the Cap Table. - Formal agreements must specify the magnitude and timing of the committed contribution. They should delineate how the commitments are to be made and altered. The time value of money concept (outlined above) indicates that any revision of a funding commitment at a later date may be worth a different amount than it would have been initially. This creates a potential need to revalue the company before such an event is transacted. - Similar documentation can be created to specify the value, timing, and results of imputed sweat-equity. It should be noted that sweat-equity creates tax consequences for the contributor, since it is treated as if they had been paid for that effort. The way a company treats the founder contributions of money, commitment and effort is a critical concept in the formation of a company. It not only identifies the initial company valuation, it also establishes the fundamental relationship between the owners. These factors affect the way in which potential future investors will view the company.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9353368878364563, "language": "en", "url": "https://gardenforindoor.com/chinese-money-plant-dropping-leaves/", "token_count": 2765, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.30859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:660d6625-351c-42d3-9b6c-9ea1948aeec6>" }
If you have managed to acquire a Chinese money plant (Pilea peperomioides), you’re lucky. The Chinese money plant has rounded, glossy, deep green leaves that are very pleasant looking. Unfortunately, dropping leaves is a common problem we’ll encounter with this plant. Your Chinese money plant may be dropping its leaves due to poor soil drainage, overwatering, or underwatering. In addition, lack of nutrients in the soil, lack of exposure to bright light, pest infestation, frost, and aging plant leaves are possible causes of this issue. This plant is not as widely available as other house plants we know. Although distributed all over the world, Chinese money plants are a rare purchase. That’s why you’ve got to make sure yours is well taken care of so that it stays with you for a long time. This house plant is a fragile one. Let’s dive into the details of why your Chinese money plant is dropping leaves. What Causes Leaf Dropping in Your Chinese Money Plant? Given the number of reasons to consider, you have to carefully observe the plant’s behavior to correctly assess its problem. Dropping leaves in a Chinese money plant can be caused by one factor or a combination of many. Check out the following considerations to see which one is possibly the cause. Poor Soil Drainage If your plant’s soil isn’t well drained, water will accumulate at the bottom of the pot. Excess water inhibits the presence of oxygen in the soil. Imagine the feeling of drowning; that’s what happens to the plants. As a result, roots will die and rot. When roots get damaged, the transport of essential minerals to other plant parts will stop. Eventually, the entire plant will experience wilting and shedding of leaves. Poor soil drainage is a result of a poor soil mix. If the soil is compact and not aerated, the tendency is that water won’t flow out freely. Another reason can be the lack of drainage holes in your container. 
Overwatering or Underwatering Chinese money plants are succulents. They have the means to store extra water for themselves and survive dry places, even for an extended period of time. They have fleshy stems and leaves where they keep all the extra moisture for future use. Since they already hold water, adding more is not a wise move. The result of overwatering is the same as that of poor soil drainage: the roots will rot under the soil and, aboveground, the plant will experience yellowing of leaves. Eventually, they will drop one by one. Underwatering, on the other hand, can dehydrate the plants. Lack of water causes cells to shrink, and plants will lose their turgid look. If not prevented, the plant will soon wither. Lack of Nutrients Available in the Soil Nutrient deficiency causes plants to be unhealthy. Like humans, any malnourished plant will have a weak structure and form. One indication of a deficient nutrient supply is stunted or slow growth of the entire plant. Less vibrant color is another visible effect, as well as yellowing, curling, and dropping of leaves. Lack of Exposure to Bright Light Chinese money plants love bright light, but exposure should be indirect. There must be a sufficient source of generous light for them to grow well and stay healthy. If the room is dim, plants will get leggy: they will grow taller than usual with a lanky appearance because of weak cell walls in their stems and leaves. The entire plant’s foliage will be thinner and paler in color. Legginess leads to a premature drop of leaves and flowers. Pest infestation in Chinese money plants is a very rare case. Nevertheless, it’s one possibility that we cannot set aside. Mealybugs and scale are the pests most commonly found on the underside of its leaves. They can work very subtly; if you aren’t regularly checking on your plants, they might cause huge damage. They suck on the leaves and create holes in them. Wounded leaves then start to turn yellow and become limp. 
Leaves that hang loose eventually drop. A pest infestation can also cause brown or black spots on Pilea leaves.

Frost and Cold Temperatures

Chinese money plants grow best within a temperature range of 16 to 24°C (60 to 75°F). They are more tolerant of higher than lower temperatures, though, so extreme drops in temperature during winter can be harmful. Chinese money plants are not frost tolerant. Left unprotected during winter, the leaves can get frosted, and frosted leaves die and drop.

Aging Plant Leaves

Dropping of leaves can also be the result of a natural process. In this case, there's no need to worry, because it's pretty normal for plants to shed old leaves. This is their chance to make way for young leaves to flourish. Aging is a normal phase of decline in plants.

Leaf Shine Products

Though we all want our plants to have vibrant leaves, using leaf shine products can actually do more harm than good. When you coat the leaves with leaf shine, you cover the stomata. The stomata are important because they serve as the passage for gas exchange. Once the stomata get clogged, the leaves find it difficult to take in carbon dioxide. The limited supply of CO2 causes a decline in photosynthetic activity. With less food manufactured, the plant starves, and after some time it starts dropping its leaves.

Too Much Air Circulation

Too much air circulation can ruin the plant's leaves as well. If your plant sits in a spot with this condition, it can be the reason its leaves are dropping. Exposure to open windows and air conditioning are examples; a strong draft can detach a leaf at the petiole.

How Do You Prevent Chinese Money Plants from Dropping Their Leaves?

Now that you know the causes of dropping leaves, let's talk about how you can prevent this from happening to your Chinese money plant.
Use Well-drained Soil and Pots with Drainage Holes

Your soil mix will help your plant thrive. Always start with a good potting mix. You can add sand, perlite, or vermiculite to give it a porous structure. These little spaces allow water to pass through with ease and create good aeration, making oxygen more available to the plant's roots.

After mixing your soil, check how fast it drains by pouring water into it. If the water takes only a few seconds to flow out of the drainage holes, it's good. If it takes several minutes to drain, the mix has to be amended. Also, drainage holes should be proportional to the size of your pots, neither too large nor too small.

Water Only When the Soil is Dry

It's normal to water plants daily, as we usually do, but in the case of Chinese money plants that's not applicable. Watering once a week is enough during hot seasons. In cold seasons like winter, watering should be less frequent; once every two weeks or even once a month will do, depending on the conditions.

To know whether it's time to water, dip your finger in the soil at least 2 inches deep and see if you can feel moisture. If yes, don't water yet; there's no need to force the plant to drink when it's not thirsty. Wait for the soil to dry off completely before watering again. You don't have to worry about it wilting, because the plant has a good water reserve. Just check consistently to determine the right timing.

Make Sure to Always Drain the Water

When watering, make sure the water passes through the soil and out of the drainage holes. You don't want any excess water standing in the container, because it causes root rot. After all the water has drained, put the plant back in its location.

Don't forget to remove the saucer under your pot when you water; otherwise, the liquid that flowed out of the drainage holes will just sit there. If possible, use terracotta pots, because they dry out quickly.
The pot's size should not be too large for the plant; a too-large pot holds more water than the plant needs.

Add Liquid Fertilizer

Like any other plant, Chinese money plants need sufficient nutrients to grow. If you want your plant to live longer, you have to feed its soil regularly. You can add fertilizer at an interval of two months, although it really depends on the condition of the plant. Use a soluble fertilizer at half the recommended concentration; Chinese money plants don't need much fertilizer anyway. Dissolve it in water and pour it into the plant's soil. Be careful not to over-fertilize, as this can be harmful and can cause sudden death.

Expose to Bright but Indirect Light

Find a place with enough light. A windowsill is a perfect spot, especially one facing east. However, it's important to keep direct sunlight off your plant to prevent heat stress and burned leaves. If you have a glass window, that's the best option; just don't put the plant too near the glass, because it heats up. Leave enough space for air circulation.

During seasons when bright light is less available, artificial light is the next best alternative. You can turn on whatever light source you have in your room and let it make up for the missed sunlight. You may also remove shades such as curtains to maximize whatever light there is.

Manually Remove Pests

The best way to get rid of bugs is to remove them from the leaves by hand. There is no need to use synthetic pesticides, especially for houseplants; you might just end up inhaling the toxic chemicals inside your house. Whenever you see one or two bugs, remove them immediately. Don't wait for them to multiply. If a leaf gets severely infested, pluck it off as well. You may spray a low-concentration solution of liquid detergent on leaves infested with scale.
Be quick to repot new buds when propagating new plants. This will save the young plants from infestation.

Watch Out for Extreme Changes in Temperature

The climate is very hard to predict nowadays, especially with global warming. For that reason, you have to be vigilant in watching for unexpected rises or falls in temperature. When these happen, make quick adjustments. If it gets too hot, find a cooler location. If it gets too cold, put shades around your plants or increase the light. Always prepare for the winter season, and protect your Chinese money plants by covering them.

Remove Aged Leaves

Once you notice leaves starting to age, take the liberty of cutting them off the plant. That way you keep only the vibrant and healthy leaves in the foliage. It creates space for young leaves to grow and maintains the plant's fresh, good appearance. Be careful not to damage the other leaves; tools like pruning shears or regular scissors will help.

Choose a Steady Location

A place with less movement is ideal. Chinese money plants are very fragile, so keep them in a location with few disturbances. Avoid putting one near an air conditioner, fan, or heater. If it's too windy, use other sturdy plants as a windbreak.

Use Water to Clean the Leaves

If the plant gets dusty, just spray the top leaves with water. You may also wipe them with a clean damp cloth.

Is the Chinese Money Plant a Good Houseplant Investment?

Definitely, yes! Generally, maintaining a Chinese money plant is not that hard. It may seem fragile on the outside, but with proper care it is a tough survivor. Additionally, propagation is easy: one single plant produces many buds. So invest time in propagating it rather than focusing on just one plant. If you perfect the practices, you may even turn your Chinese money plants into a profitable business. Who would say no to this chic plant, anyway?
Customary forms of resource management, such as taboos, have received considerable attention as a potential basis for conservation initiatives in the Indo-Pacific. Yet little is known about how socioeconomic factors influence the ability of communities to use customary management practices and whether socioeconomic transformations within communities will weaken conservation initiatives with a customary foundation. We used a comparative approach to examine how socioeconomic factors may influence whether communities use customary fisheries management in Papua New Guinea. We examined levels of material wealth (modernization), dependence on marine resources, population, and distance to market in 15 coastal communities. We compared these socioeconomic conditions in 5 communities that used a customary method of closing their fishing ground with 10 communities that did not use this type of management. There were apparent threshold levels of dependence on marine resources, modernization, distance to markets (<16.5 km), and population (>600 people) beyond which communities did not use customary fisheries closures. Nevertheless, economic inequality, rather than mean modernization levels, seemed to influence the use of closures. Our results suggest that customary management institutions are not resilient to factors such as population growth and economic modernization. If customary management is to be used as a basis for modern conservation initiatives, cross-scale institutional arrangements such as networks and bridging organizations may be required to help filter the impacts of socioeconomic transformations. Copyright © 2007 by John Wiley & Sons, Inc.

Citation: Cinner, J. E., Sutton, S. G., & Bond, T. G. (2007). Socioeconomic thresholds that affect use of customary fisheries management tools. Conservation Biology, 21(6), 1603-1611.

Keywords:
- Coral reefs
- Customary resource management
- Papua New Guinea
- Social thresholds
- Common property
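The threshold pattern reported in the abstract can be sketched as a toy rule. The two numeric cutoffs (16.5 km to market, 600 people) come from the text; the function name and the sample communities are illustrative assumptions, not data from the study:

```python
# Toy illustration of the threshold logic reported above: communities closer
# than 16.5 km to a market, or with more than 600 people, did not use
# customary fisheries closures. Cutoffs are from the abstract; the rest is
# hypothetical.

DISTANCE_CUTOFF_KM = 16.5
POPULATION_CUTOFF = 600

def may_use_closures(distance_to_market_km: float, population: int) -> bool:
    """True if a community falls inside both reported thresholds."""
    return (distance_to_market_km >= DISTANCE_CUTOFF_KM
            and population <= POPULATION_CUTOFF)

communities = [
    {"name": "A", "distance_km": 25.0, "population": 450},  # remote, small
    {"name": "B", "distance_km": 10.0, "population": 450},  # near market
    {"name": "C", "distance_km": 30.0, "population": 900},  # large population
]

for c in communities:
    print(c["name"], may_use_closures(c["distance_km"], c["population"]))
```

Note that this is a classification sketch only; the study's point is that such thresholds existed empirically, not that they form a decision rule.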
Concerns about poor student performance have led schools to diverge from traditional teacher compensation and base a portion of pay on student outcomes. In the US, the number of school districts adopting such performance-based financial incentives has increased by more than 40% since 2004. Evidence on individual incentives in developed countries is mixed, with some positive and some negligible impacts. There is less evidence for developing countries, but several studies indicate that incentives can be highly effective and far cheaper to implement. Innovative incentive mechanisms such as incentives based on relative student performance show promise. Incentives can effectively improve student performance if they are designed well. In developing countries, paying teachers for student performance has been shown to be highly effective at low cost. Incentives based on the collective performance of small groups of teachers strike a balance between loss of effectiveness from free-riding teachers and gains in effectiveness from teachers cooperating with each other. Innovative incentive mechanisms based on loss rather than gain or on relative student performance show promise for high effectiveness but are yet to be rigorously evaluated. Overall, evidence on individual incentives in developed countries is mixed, with some positive and some negative impacts. In countries with high teacher salaries, incentives need to be large to elicit a response, which could make them too expensive for general use. Incentives based on the collective performance of large groups of teachers have been shown to have little impact on achievement and in some cases even generate negative impacts. There is no evidence that incentives tied to specific exams result in improvements in other measures of academic performance, suggesting a lack of general improvements in knowledge.
How Protected Agriculture Will Make a Lucrative Lifestyle for Farmers

Simply put, Protected Agriculture is the modification of the natural environment to achieve optimal crop growth. It uses various techniques to improve crop growth and obtain high-quality final produce by controlling the surrounding environment.

Protected Agriculture is one of the best solutions to the seasonality problem. In the North Indian region we can clearly observe this seasonality, so open-field cultivation is difficult during the summer and winter periods. Protected Agriculture is highly popular among farmers in Maharashtra, Uttarakhand, Karnataka, and Jammu and Kashmir.

In India, the crops most commonly cultivated in protected houses are high-value crops such as cabbage, capsicum, cauliflower, knol-khol, broccoli, onion, tomato, brinjal, chili, and Brussels sprouts.

There are several reasons why farmers practicing protected cultivation have a more lucrative life than other farmers.

1. Off-season production

Under Protected Agriculture we control almost all the environmental factors that affect plant growth, so seasonality is not a barrier to crop production during the off-season when cultivating inside a protected house. While open-field farmers cultivate only during the scheduled season, farmers who practice Protected Agriculture can cultivate throughout the year. They can therefore market their products year-round and earn profits during off-seasons as well.

2. Higher profits from high-quality produce

Since we control all environmental factors, crops grown in protected houses are not subjected to harsh weather conditions or pest and disease attacks. We also provide the optimum conditions (temperature, relative humidity, soil moisture, soil pH, etc.) required for good plant growth.
Due to all these reasons, the final harvested products are of high quality, both internally and externally, and can be marketed at a higher price than products from conventional agriculture.

3. Low cost of pest and disease control

In Protected Agriculture we cultivate crops inside protected structures, which keeps pests and diseases out as much as possible. In conventional farming, crops are affected by various pest and disease attacks, which lead to yield and quality losses when the pest population and disease distribution become uncontrollable. Farmers therefore have to keep the pest population below the Economic Injury Level (EIL) using pest and disease control methods, and this comes at a high cost. In Protected Agriculture we don't need such control methods, saving the farmer a large amount of money.

4. Export possibility

Due to the high quality and healthiness of the products, produce from protected cultivation can be sold in high-value markets (supermarkets) both locally and internationally. By exporting agricultural produce from protected cultivation, these farmers earn much higher profits than conventional farmers.

With the vision of high-quality, healthier food for the future, Protected Agriculture has become a strong trend in Indian agriculture. There is high potential to engage in Protected Agriculture and earn higher profits. Then your life will become more lucrative and more stable.
If the central bank attempts to "peg" the nominal interest rate at 5% by buying bonds, then every time the nominal rate exceeds 5%, the most likely effect of the bank's policy will be:

a. Low nominal interest rates and a reduction in the rate of inflation.
b. Rapid expansion of bank reserves, rapid growth in the money supply, and inflation.
c. Low nominal interest rates, a high rate of private investment, and rapid growth accompanied by price stability.
d. A decline in the reserves of banks, a slow growth rate of the money supply, and deflation.

Buys bonds => pumps money into the market => money goes to banks => increased excess reserves => rapid growth of the money supply => money lent at lower rates => inflation

C looks all right to me.

Nominal interest = real interest + inflation impact. B ----> indicates high inflation raising nominal interest rates. I think price stabilization with an increase in the money supply is key. What's the official word?

Because if interest rates are pegged at 5%, an excess above this limit will work through the formula below:

MV = PY
(Money Supply x Velocity of Money = Price Level x Real Output, i.e. nominal GDP)

P, and hence the right side of the equation, will increase, and as a consequence (to maintain equilibrium) the left side will also increase.

Dreary… your blessing please!

map1, you are saying that if the central bank tries to lower interest rates, it will create inflation?

in the end yes

So why do we all get excited when the Fed lowers interest rates?

Because it is cheaper to borrow, but hey, what comes up must come down, and vice versa.

Correct answer is B. I just wanted to make sure you all get the point. Yes, lowering interest rates will have the bad side-effect of raising inflation. That's why it is not a good idea to try to lower interest rates if there is already high inflation.
Detailed answer follows: Purchasing bonds will put cash back in banks, which will result in a rapid expansion of excess reserves that the banks can then loan out at declining interest rates until the excess reserves are cleared. This increase in loanable funds will result in higher consumption and investment demand, and will shortly be followed by higher prices.

Shortly followed by higher prices because more money in the markets dilutes the real value of money, and more money is chasing the same goods, and if that's not inflation, I don't know what it is.

Dreary, map1, look at the way I reached the conclusion. Maybe it is not the perfect one, but I think it is very straightforward.

Yes, according to the quantity theory of money, it is, strangedays.

thanks guys… this is useful stuff!

> look at the way I reached the conclusion.

True, that should also hold, but make sure you assume that velocity is not affected by the action. If it is, this increase in money supply could be offset by a decrease in velocity.

Dreary Wrote:
-------------------------------------------------------
> > look at the way I reached the conclusion.
>
> true, that should also hold, but make sure you
> assume that velocity is not affected by the
> action. If it is, this increase in money supply could
> be offset by a decrease in velocity.

Yes, thanks Dreary!
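The MV = PY reasoning in the thread can be checked with toy numbers. All figures below are made up for illustration; the point is only that, with velocity and real output held fixed, money growth maps one-for-one into price growth:

```python
# Quantity theory of money: M * V = P * Y.
# Holding velocity V and real output Y constant, a 10% increase in the
# money supply M must show up as a 10% increase in the price level P.
# All numbers are illustrative.

V = 2.0      # velocity of money (assumed constant)
Y = 500.0    # real output (assumed constant)

M_before = 1000.0
M_after = 1100.0   # central bank buys bonds, expanding M by 10%

P_before = M_before * V / Y   # 4.0
P_after = M_after * V / Y     # 4.4

inflation = P_after / P_before - 1
print(f"Price level rises from {P_before} to {P_after}: "
      f"inflation = {inflation:.1%}")
```

This is exactly Dreary's caveat in reverse: if velocity fell at the same time, the rise in P would be smaller than the rise in M.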
Definition of base year: the starting point for the construction of an index number series. The base period or base year refers to the year in which an index number series begins to be calculated. This will invariably have a starting value of 100.

For example, in constructing the Consumer Price Index, the government may use a base year of 2000. Therefore a CPI index may look like this:

2000 = 100
2001 = 103.5
2002 = 109.0

This simple index series shows an inflation rate of 3.5% in the first year. In 2002, we can see prices have risen 9.0% since 2000.

[Chart: Sterling Exchange Rate Index, with base year 2007 = 100]

Examples of index numbers

Index of renting: in this case, all renting indexes start with a base year of Jan 2011. This enables us to make comparisons from that point.
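The index construction above can be reproduced in a few lines. The underlying price levels here are hypothetical; they are chosen so the resulting series matches the CPI example (100, 103.5, 109):

```python
# Build a simple index number series from price levels, base year 2000 = 100.
prices = {2000: 120.0, 2001: 124.2, 2002: 130.8}   # hypothetical price levels

base = prices[2000]
index = {year: round(p / base * 100, 1) for year, p in prices.items()}
print(index)  # {2000: 100.0, 2001: 103.5, 2002: 109.0}

# Inflation in the first year: (103.5 - 100) / 100 = 3.5%.
# Cumulative rise by 2002 relative to the base year: 9.0%.
```

Rebasing to a different year is the same calculation with a different `base`, which is why the choice of base year changes the index values but not the underlying growth rates.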
Boeing identifies readily-available renewable green diesel as potential cost-competitive sustainable jet fuel Tue 14 Jan 2014 – In what could be a significant breakthrough, Boeing has identified green diesel – a renewable ground transportation fuel – as a new source of sustainable aviation biofuel. Analysis by Boeing researchers has found that green diesel – not to be confused with biodiesel, which is a separate product and chemically different – has similar chemical properties to today’s aviation biofuel. The company says the fuel emits at least 50 per cent less carbon dioxide than fossil fuel over its life-cycle and could be blended directly with traditional jet fuel. Boeing says it is now working with the US FAA and other stakeholders to gain approval for aircraft to fly on green diesel. “Green diesel approval would be a major breakthrough in the availability of competitively priced, sustainable aviation fuel,” commented Dr James Kinder, a Technical Fellow in Boeing Commercial Airplanes’ Propulsion Systems Division. “We are collaborating with our industry partners and the aviation community to move this innovative solution forward and reduce the industry’s reliance on fossil fuel.” According to Boeing, significant green diesel production capacity already exists in the United States, Europe and Singapore that could supply as much as 1% – around 600 million gallons – of global commercial jet fuel demand. At a wholesale cost of about $3 a gallon with US government incentives, this would make it competitive with petroleum jet fuel, currently trading at around $2.97 a gallon. Biofuels approved for aviation must meet or exceed stringent jet fuel performance requirements. Green diesel, which can be used in any diesel engine, is made from oils and fats, similar feedstocks to those used in processes that were approved in 2011 by fuel certification body ASTM International for commercial airline use in blends of up to 50%. 
Other conversion technologies are currently undergoing approval by ASTM. Boeing, the FAA, engine manufacturers, green diesel producers and others are now compiling a detailed research report that will be submitted in the fuel approvals process. Boeing says it and the 27 airlines in the Sustainable Aviation Fuel Users Group are committed to developing biofuel that is produced sustainably and without adverse impacts on greenhouse gas emissions, local food security, soil, water and air. “Boeing wants to establish new pathways for sustainable jet fuel, and this green diesel initiative is a groundbreaking step in that long journey,” said Julie Felgar, Managing Director of Boeing Commercial Airplanes Environmental Strategy and Integration. “To support our customers, industry and communities, Boeing will continue to look for opportunities to reduce aviation’s environmental footprint.”
Policies to Foster Human Capital

This paper considers the sources of skill formation in a modern economy and emphasizes the importance of both cognitive and noncognitive skills in producing economic and social success, and the importance of both formal academic institutions and families and firms as sources of learning. Skill formation is a dynamic process with strong synergistic components. Skill begets skill. Early investment promotes later investment. Noncognitive skills and motivation are important determinants of success, and these can be improved more successfully and at later ages than basic cognitive skills. Methods currently used to evaluate educational interventions ignore these noncognitive skills and therefore substantially understate the benefits of early intervention programs and mentoring and teenage motivation programs. At current levels of investment, American society underinvests in the very young and overinvests in mature adults with low skills.

Published as:
Heckman, James J., 2000. "Policies to foster human capital," Research in Economics, Elsevier, vol. 54(1), pages 3-56, March.
James Heckman, 2011. "Policies to foster human capital," Educational Studies, Higher School of Economics, issue 3, pages 73-137.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9603761434555054, "language": "en", "url": "https://www.nwomcities.com/what-can-i-use-eos-for-crypto/", "token_count": 1650, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.177734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9efbc6d8-19a9-44eb-9952-f7eeed078934>" }
What Can I Use Eos For Crypto – A cryptocurrency, as defined by Wikipedia, is "a digital currency designed to work as a medium of exchange for the transfer of digital assets." It was created as an alternative to traditional currencies such as the US dollar, British pound, euro, and Japanese yen.

A cryptocurrency is a digital asset managed by its owners; no central bank is involved in managing these currencies. Unlike traditional money and commodities, which are managed by a single central body, the supply and demand of a cryptocoin are determined by the market. This characteristic differs from classical economies, where the economy is led by a central bank.

The distribution of a cryptocoin is usually done through a process called "minting," in which a certain quantity of the digital asset is created in order to increase the supply. In a cryptocurrency ledger, transactions are validated by cryptographers, groups that specialize in creating the proofs of validity required for a transaction to take place.

While most cryptocurrencies are open-source software solutions, some proprietary ones exist. This is in contrast to the open-source model that defines most cryptocurrencies, which are developed by any number of individual contributors. A significant difference between the two is that open-source software can change its underlying code, which can cause problems if a change is required, whereas a centralized authority does not need to change its underlying code to allow a change in the supply or demand of the coin.

The creator of Litecoin, Robert H. Jackson, was attempting to create a secure alternative to existing cryptocurrencies when he was forced to leave the company he was working for.
By producing this version of Litecoin, which has a much lower trading volume than the original, he hoped to offer a reliable and secure form of cryptocurrency.

One of the most appealing applications for the future of cryptocurrency is the concept of the "blockchain." A blockchain is simply a large collection of encrypted records that are stored on computers all over the world. Each block of information is secured by cryptographic algorithms that make it practically impossible to alter once written. The cryptography used in the chain is also mathematically secure, which allows transactions to be seamless and pseudonymous. Because each transaction is protected by a highly secure encryption algorithm, there is little possibility of impersonating property owners, hacking into computers, or leaking information to third parties. All transactions are recorded and encoded using complex mathematics that protects the information while ensuring it is available only to authorized participants in the chain.

The major problem with traditional ledgers is that they are vulnerable to hacking, which can allow someone to take control of a company's funds. By using crypto technology, a company's ledger can be encrypted while keeping the details of each transaction private, ensuring that only the company knows where the money has gone.

A "virtual currency" is simply a digital commodity that can be traded like a stock on the exchanges. Virtual currencies can be traded online just like any other stock on the conventional exchanges, and the benefit is that the same rewards and rules that apply to real markets also apply to this type of cryptocurrency transaction. As more cryptocurrencies are created and made available to consumers, the benefits become clear.
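The "blockchain" idea described above, blocks of records linked so that past entries cannot be silently altered, can be illustrated with a toy hash chain. This is a simplified sketch of the general mechanism, not the implementation of any particular cryptocurrency:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's full contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

# Build a tiny three-block chain; records are made up.
chain = []
prev = "0" * 64  # genesis placeholder
for record in ["alice pays bob 5", "bob pays carol 2", "carol pays dan 1"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block_hash(block)

def is_valid(chain):
    """Each block must reference the hash of the block before it."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

print(is_valid(chain))                     # True
chain[1]["data"] = "bob pays carol 2000"   # tamper with history
print(is_valid(chain))                     # False: later links no longer match
```

Changing any past record changes that block's hash, which breaks every link after it; that is the tamper-evidence property the article is gesturing at.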
There are currently numerous successful tokens being traded on the major exchanges, and as more enter the market, competition will strengthen the existing ones.

Cryptocurrency trading is definitely an exciting investment. It involves buying and selling various currencies across different coins. In general, if you purchase cryptographic currencies, you're essentially buying cryptocurrency. It's basically much like trading in shares.

Now, if you're not familiar with how to trade and buy cryptocurrencies, this can seem pretty scary. Well, it really isn't that scary, but there are certain precautions you need to take. You will want to get a broker, either a full FX broker or a discount broker that charges a small fee. They will then provide you with an interface for your application and software.

You will also want to set up a "mini account." This is simply an account that you use for a short period of time. It helps you get familiar with the features of the platform and get used to how it works. When you trade in the open market with real money, there is no such thing as a mini account; that would make the process too safe for you. Since you're trading in the crypto market with "cryptocoins," it's completely acceptable.

The MegaDroid goes one step further and lets you start trading with your preferred coins at any time. It does give you the ability to do some "quick" trades, but that's about the limit. If you're wary of quick trades, maybe you should be! If this were the only benefit of using the MegaDroid, it would be fine. Unfortunately, it's not. What traders really like about this robot is that it gives them full control over their campaigns. Some traders still claim that it's a hassle to manage a campaign manually.
I know that it's easier than manually managing several projects on your PC, but it does have a few advantages over the others. Users can transfer funds into their account and automatically use them to trade, or they can manage their funds using their own wallets. Since all transactions are held digitally, you don't need to deal with brokers or trading exchanges; everything is kept strictly within your own personal computer. The last major perk is that it no longer holds ether and pennybase. The two largest exchanges by volume (Euromoney and MegaDroid) are now managed by the various developers of Cryptocorx. If you want to trade on these two big exchanges, this means that you will have to download and install the software on your own computer. Even though this may sound like a pain, it has greatly increased the liquidity of the two coins. All you've got to do is visit their websites and you'll be able to see their price quotes. You need to know how the market will move so that you can be prepared when you do decide to trade. If you do this correctly, you will know exactly when you should enter and exit the market, and thus you can make better decisions with your trades. Now that we've gone over the pros and cons, let's take a look at some technical analysis methods. If you are a technical analyst and are familiar with the market trends, then it shouldn't be a problem. With this information, you should be able to analyze the price action on the two exchanges very quickly and make good trades. There are many different ways to place and execute this buy action, so you'll want to pick one that you're comfortable with. A cryptocurrency, as defined by Wikipedia, is "a digital currency developed to operate as a medium of exchange for the transfer of digital assets."
Alchanati Campbell & Associates

The job of physically printing currency belongs to the Treasury Department's Bureau of Engraving and Printing (BEP). After the currency is printed, it is distributed to 28 cash offices, which in turn distribute the money to over 8,400 banks and other financial institutions where the currency can enter the money supply. For the 2020 fiscal year, the Fed's Board of Governors voted to have the BEP print 5.2 billion Federal Reserve notes, valued at $146.4 billion. This process seemingly creates money out of thin air but is actually built on the foundation of international commerce. Before 1971, most currencies were backed by gold and silver; therefore, the central banks of the world were limited in how much currency they could print to increase the money supply. Now governments like the U.S. can print money as needed, and the value of their currency is decided by demand for the currency, tied to debt, and backed by the credit of the issuing nation. When people refer to "printing money," they are usually referring to the processes the Fed undergoes to increase the money supply. The U.S. Federal Reserve has several tools to control the supply of money, but two methods in particular are being used to carry the economy through the nationwide economic standstill. Quantitative easing is a policy that was pioneered and widely used during the Great Recession; it involves the Fed purchasing massive amounts of financial securities, mostly U.S. government bonds, from financial institutions, with the goal of pumping more money into the economy. The other method is referred to as "helicopter money" and is used much less often. Helicopter money involves the Treasury Department, which, under the direction of the Fed, sends money directly to individuals. Most recently, the Fed has used this method and started distributing $1,200 to individuals who are eligible. You can see if you are eligible by going to IRS.gov and filling out the form.
When the Fed institutes a "helicopter money" policy, like the stimulus checks being sent out this month, it is to help rescue the economy from what is known as a liquidity trap. A liquidity trap, in a simple sense, is when interest rates are near zero but the economy remains in a recession. As of this week, the Fed's balance sheet, due to its aggressive round of quantitative easing, has inflated to a record $6.13 trillion, including $5 trillion in bond holdings alone. While these measures are seen as necessary, they will likely have long-term consequences. Critics of QE argue that it will lead to hyperinflation, that it allows corporations and investors to act irresponsibly, and that it could make the U.S. dollar less favorable to other nations and jeopardize its status as the global reserve currency. Long term, the status of the U.S. dollar as a global reserve currency is in trouble; short term, use of the Fed's central bank liquidity swap lines, which allow foreign central banks to exchange their local currency for dollars, has risen to $358.1 billion. Economists currently believe that the U.S. can avoid hyperinflation, and even deflation, by using a combination of its many other economic management policies, such as cutting tax rates, lowering bank reserve limits, and potentially using negative interest rates. While the long-term consequences of these policies remain uncertain, it is very likely that they will remain prominent for more than a decade. Now that we have some background, we can look at how printing and spending money affects the U.S. government. It is no secret that the federal debt has been increasing at an alarming rate, and in many years the government operates with a large deficit, but now COVID-19 and the unsteady direction of President Trump have the U.S. operating like never before. Ceteris paribus, the U.S. federal budget deficit is on track to exceed $3.8 trillion this year, making it nearly four times the deficit from the prior fiscal year.
By October 1st, the Committee for a Responsible Federal Budget estimated, the federal debt-to-GDP ratio will be larger than the record set after World War 2, at 121.7%. Years of economic expansion, combined with the novel coronavirus, have set the stage for the worst economic conditions of our lifetime. Here at the ACA Foundation we will keep you updated with the latest news and straightforward analysis to help you navigate these troubling times. Stay safe.

The ACA Foundation

WHAT'S UP FRIDAY? is a weekly newsletter that will give you a summary of "What's up?" on Wall Street, in the US, and around the world, written by The Alchanati Campbell and Associates Team. What makes us unique is that we focus on long-term knowledge: knowledge that will still be useful to you 10 years from now.
Understanding the concepts of "Will I?" and "Which one?" provides business people with a valuable framework to think about pricing … and even to create new products. We always talk about "Will I?" and "Which one?" as if it is one or the other, but applied to products the concept is probably more of a continuum. First, let's review the concept. When customers make a purchase decision, they typically make two decisions: "Will I?" and "Which one?" The "Will I?" decision is whether to buy something in the product category at all. After they have said yes to "Will I?", they then answer the question "Which one will I buy?" Most purchases are made only after answering both questions. However, sometimes people buy after only making the "Will I?" decision. It is extremely valuable for companies to understand when this happens, because buyers who only make the "Will I?" decision are less price sensitive, and companies should be able to charge them higher prices. The decision that occurs just before purchase is not a continuum: a buyer either purchased after making the "Will I?" decision or after the "Which one?" decision. Which decision they addressed is binary. The best way for companies to use this concept is to ask buyers, typically during win/loss analysis, "What else did you consider?" If they answer "nothing really," then they only made the "Will I?" decision. If they list competitive alternatives, then they also made a "Which one?" decision. The point is that just before the purchase, the last decision they made was either "Will I?" or "Which one?" However, the product itself is on a continuum. We often think of products as either "Will I?" products or "Which one?" products. A "Will I?" product is one where buyers make the decision after answering "Will I?"; they never consider alternatives. A "Which one?" product is one where buyers take the second step and consider competitive offerings. It turns out, though, that products aren't always one or the other type.
There is a continuum, and that continuum can be driven by several different market characteristics, including market segments, distribution, and level of competition. Market segments: Different people behave differently. Some people may only make the "Will I?" decision while others may make a "Which one?" decision. One example of this is your next smartphone purchase. Most iPhone users are "Will I?" type buyers: they are deciding whether or not to upgrade to the new iPhone. However, some iPhone users are trying to decide whether to switch to an Android phone. Those considering Android are making the "Which one?" decision. The iPhone isn't always a "Will I?" or a "Which one?" product; it depends on who the buyer is and how he or she acts. Distribution: Sometimes products are "Will I?" products in certain situations but not always. This is often driven by the distribution channel. Listed below are three products that are typically "Which one?" products, yet the distribution mechanism turns them into "Will I?" products:
– Popcorn – at the movie theater
– Potato chips – on the end of an aisle at a grocery store
– Gasoline – in the middle of the desert
Level of competition: Probably the purest form of a "Will I?" product is a monopoly. Electricity in many places can only be purchased from one provider; you either choose to buy it or you don't. And yet, there are alternatives. You could put in your own generator. You could live off the grid, using only solar or wind power. You could live without power and use candles. There are alternatives, but not many people actually consider them. The purest form of a "Which one?" product is a commodity. There is no difference between product A and product B, so the only thing to use to decide is price. Yet most products (all?) are neither commodities nor monopolies. Instead, buyers are trading off differences in attributes for differences in price. They typically think something like, "Product A is more expensive; is it worth it?"
When you think about the level of competition as a continuum, you are really looking at the amount of differentiation. Zero differentiation is a commodity; infinite differentiation is a monopoly. The more differentiation you have, the more your product behaves like a "Will I?" product. OK, but how is this relevant to you? Whenever you are pricing a product, you want to understand the decision your buyers are making just before they purchase: are they considering a competitor or not? Put yourself in the mind of your buyers. But realize that not all buyers are the same, not all situations are the same, and not all competitors are the same. Can you price differently based on different situations? The other way this concept is valuable to you is as you create products. Can you create products targeted directly at the market segments who only answer "Will I?"? Can you find or create distribution situations where there is no competition? Can you build more differentiation into your products? Each one of these moves you more toward the "Will I?" end of the continuum. Although it hasn't been said yet, there is much more profit on the "Will I?" side of the continuum. Find a way to get there.
The achievement gap in many developing countries is defined in terms of rich/poor and public/private. The prevailing explanation for the “developing” achievement gap is an underfunded, inefficient, and/or inadequately supplied public school sector. Via an analysis of a Colombian voucher experiment, this article examines the extent to which income-contingent vouchers can narrow the achievement gap and provide a cost-effective method for increasing secondary school enrollments. Despite structural and implementation flaws, which diminish the program's impact on achievement and enrollments, its successes strengthen the argument that implementing an income-contingent voucher program can help narrow the achievement gap in developing countries. To cost-effectively increase enrollments, however, significant modifications and expansions to the program would be necessary—as explained in the conclusion of this article. The “developing” achievement gap: Colombian voucher reform Stern, J. (2014). The “developing” achievement gap: Colombian voucher reform. Peabody Journal of Education, 89(1).
Confessions of A Dad: My Kids Don't Understand the Value of Money

This book has nothing to do with accumulating wealth, but everything to do with helping children understand the value of money and developing healthy habits about money. Parents feel that kids will figure out the "money stuff" as they get older. While this may be partially true, there is no reason to let your kids drown in financial ignorance. Given a car and car keys, most kids could eventually figure out how to drive, but no sane parent would let their kids risk injury or death to learn how to drive on their own. Kids who are not taught about money matters can gather some hefty bumps and bruises along the way. Parents must remember that the responsibility of personal finance education always rests with them, even after their kids finish school and transition into their early adult years.

THE GOAL OF THE BOOK: Good parents teach their children the importance of eating well. They stress personal hygiene and insist on regular flossing. The majority of moms and dads also emphasize academic achievements and a good work ethic to guarantee a brighter future for their kids. Unfortunately, most well-meaning parents have missed a vital ingredient necessary to help their kids become financially astute adults. This book will give parents (and young adults) the confidence to talk about money and learn to be financially free. It is our job as parents to give our children the wings to thrive in the real world.

Experience The Book

This is the definitive guide to personal finance for teenagers and young adults! Azhar Laher started out on a journey to educate his own children about how to manage their money and ended up writing an entire book on the subject. With light-hearted wit and heavy sincerity, Confessions of a Dad is filled to the brim with a lifetime of wisdom on topics that everyone should know, including:
– Money Can't Buy Happiness.
– Some rich people live poor lives.
– Budget isn't a bad word.
– How to Easily Save $1378 Each Year
– Beware of the Diderot Effect.
– Wealth lessons from Vincent Van Gogh
– Know good debt and bad debt.
– Many more chapters that will change your outlook on money
What are Turnover Rates?

Turnover rates measure the number of times a business sells through its stock of inventory in a given time period. Businesses use turnover rates to gauge competitiveness and profits; usually, they are used to track the performance of a business. A high inventory turnover rate is mostly seen as positive, as it is a sign that goods are being sold before they are damaged or deteriorate. The most commonly used formula for calculating turnover is:

Turnover = Cost of goods sold / Average inventory

Ways of Calculating Turnover Rates

1. Determine a Time Period for your Calculation
Inventory turnovers are calculated over a specific time period. This period varies: it can be anything from a fiscal year to a daily basis. The cost of goods sold is meaningless as an instantaneous value. Once you have decided on the time period for calculating the turnover, calculate the cost of goods sold over that period. The cost of goods sold will not include the amount of money spent on the shipping, distribution, and creation of the products.

2. Use the Formula 365/Turnover to Find the Average Time Taken to Sell the Products
With this operation, you can estimate how long it took to sell all of the products stored in your inventory. Normally, you calculate the turnover on a yearly basis and divide 365 by that ratio. The result is the average number of days it took to sell your products.

3. Divide the Cost of Goods Sold by the Average Inventory
Divide the cost of the goods sold by the average value of the goods held in your inventory. The average inventory is the sum of the beginning inventory balance and the ending inventory balance, divided by two.

4. Use the Formula Turnover = Sales/Average Inventory for Quick Estimates Only
Time is valuable in any business, and entrepreneurs often do not have time to make complicated calculations. This formula saves time, but there is a slight chance of inaccuracy. The values can turn out to be inaccurate because the inventory is valued at wholesale rates whereas the goods sold are recorded at the prices offered to customers; this makes the turnover look higher than it really is. It is best to use this equation only for quick estimates.

5. Use the Inventory as an Approximate Measure of Efficiency
Businesses make efforts to clear out their inventory, as they aim to sell their products as soon as possible. This shows how the business is performing, especially among its competitors. Although the background of the business and the scale it operates on have to be considered before any comparisons can be made, the time it takes a business to sell out the products in its inventory indicates how well it is performing. A low inventory turnover does not always have a negative effect, and a high inventory turnover is not always a benefit. Record every single transaction, including the price of the stocked inventory, every product that is sold, the profit gained, and sales targets, in your bookkeeping records. This will also help in calculating turnover rates without having to gather all of the information at the last minute.
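The formulas above can be pulled together in a short sketch. The figures below are hypothetical and only illustrate the arithmetic (a year with $120,000 of cost of goods sold, beginning inventory of $35,000, and ending inventory of $25,000):

```python
def inventory_turnover(cogs, beginning_inventory, ending_inventory):
    """Turnover = cost of goods sold / average inventory."""
    average_inventory = (beginning_inventory + ending_inventory) / 2
    return cogs / average_inventory

def days_to_sell(turnover, period_days=365):
    """Average number of days taken to sell through the inventory."""
    return period_days / turnover

turnover = inventory_turnover(cogs=120_000,
                              beginning_inventory=35_000,
                              ending_inventory=25_000)
print(turnover)                # 4.0  (120,000 / 30,000)
print(days_to_sell(turnover))  # 91.25 days on average
```

Note that, as section 4 warns, substituting sales (recorded at customer prices) for cost of goods sold inflates the ratio, so stick with cost figures for anything beyond a quick estimate.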
Complete Controller's team of US-based accounting professionals are certified QuickBooks™ ProAdvisors providing bookkeeping and controller services including training, full or partial-service bookkeeping, cash-flow management, budgeting and forecasting, vendor and receivables management, process and controls advisement, and customized reporting. Offering flat-rate pricing, Complete Controller is the most cost-effective expert accounting solution for business, family office, trusts, and households of any size or complexity.
Definition of Actual Overhead

In the context of actual and applied overhead, actual overhead refers to a manufacturer's indirect manufacturing costs. (Costs that are outside of the manufacturing operations, such as marketing and general management, are expenses of the accounting period and are not applied or assigned to products.) Actual overhead consists of the manufacturing costs other than direct materials and direct labor. Since the overhead costs are not directly traceable to products, they must be allocated, assigned, or applied to the goods produced.

Examples of Actual Overhead

A few of the many overhead costs are:
- Electricity used to power the production equipment
- Natural gas to heat the production facilities
- Depreciation of the production equipment and facilities
- Normal repairs and maintenance of the production equipment
- Salaries and benefits for production supervisors

These actual costs will be recorded in general ledger accounts as the costs are incurred.

Definition of Applied Overhead

Applied overhead is the amount of the manufacturing overhead that is assigned to the goods produced. This is usually done by using a predetermined annual overhead rate.

Example of Applied Overhead

Let's assume that a company expects to have $800,000 of overhead costs in the upcoming year. It also expects that it will have its normal 16,000 production machine hours during the upcoming year. As a result, the company will apply, allocate, or assign overhead to the goods manufactured using a predetermined overhead rate of $50 ($800,000 divided by 16,000) for every production machine hour used. Since the future overhead costs and the future number of machine hours were not known with certainty, and since the actual machine hours will not occur uniformly throughout the year, there will always be a difference between the actual overhead costs incurred and the amount of overhead applied to the manufactured goods.
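The arithmetic of the example can be sketched in a few lines of Python. The $800,000 budget and 16,000 machine hours come from the example above; the year-end "actual" figures are hypothetical, added only to show how a difference between actual and applied overhead arises:

```python
def predetermined_overhead_rate(expected_overhead, expected_machine_hours):
    """Rate set at the start of the year from budgeted figures."""
    return expected_overhead / expected_machine_hours

def applied_overhead(rate, actual_machine_hours):
    """Overhead assigned to production as machine hours are used."""
    return rate * actual_machine_hours

rate = predetermined_overhead_rate(800_000, 16_000)
print(rate)  # 50.0 dollars per machine hour

# Hypothetical year-end results:
applied = applied_overhead(rate, actual_machine_hours=15_400)
actual = 785_000  # actual overhead incurred (hypothetical figure)
print(actual - applied)  # 15000.0 -> overhead was underapplied by $15,000
```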
Hopefully, the differences will not be significant at the end of the accounting year.
Read and download the CBSE Class 12 FMM Banking Operations chapter from the NCERT book for Class 12 Other Subjects. You can download the latest NCERT eBooks for 2021 chapter-wise in PDF format free from Studiestoday.com. This Other Subjects textbook for Class 12 is designed by NCERT and is very useful for students. Please also refer to the NCERT solutions for Class 12 Other Subjects to understand the answers to the exercise questions given at the end of this chapter.

Learning Objectives: After studying this chapter you will be able to:
- list the new instruments offered by banks;
- understand the types of loans and other financial services given by banks;
- know the meaning of and different types of personal banking services;
- appreciate the meaning and importance of NRI banking;
- explain the meaning and importance of corporate banking;
- understand the meaning and advantages of mobile banking, internet banking and core banking.

For investment of surplus funds, or to create a fund for future needs like children's education and marriage, construction of a house, or business, one can find plenty of opportunities to deposit money in banks under various deposit schemes. Nowadays almost all banks are computerized, and core banking/network banking systems have been introduced, which help people deposit money at their own convenient locations.

2.2 Types of Deposits

The following are some of the deposit schemes available in banks:
1. Current Account
2. Savings Account
3. Term Deposit/Fixed Deposit/Recurring Deposit Account
4. Multi Option Deposit Account

The first three accounts were already discussed in Class XI and the last type is discussed below: the Multi Option Deposit Scheme is a term deposit which is not fixed at all and comes with a unique break-up facility which provides full liquidity as well as the benefit of a higher rate of interest, through the savings bank account.
One can also keep that deposit intact by availing an overdraft facility to meet occasional temporary fund requirements. Individual banks have their own deposit schemes to suit the current as well as future needs of the people. You may visit nearby branches of banks and collect information about different types of deposit accounts to ascertain the comparative advantages and limitations of the different deposit schemes. Banks have a variety of schemes under Personal Finance to satisfy the varying needs of the banking public. Banks provide credit in the form of overdrafts or loans. An overdraft facility is generally provided on a current account. An overdraft is a service provided by a bank that lets a customer utilize money even when there is no balance in the customer's account. It is a form of credit, and one has to pay interest on the overdraft drawn. It is an arrangement made to cover cash shortages. The rates differ from bank to bank and depend on the time period as well. It is not suitable for long periods of time. A bank loan is money which one borrows from the bank for a specific purpose and for a specific period, with an agreement on interest, repayment periods, etc.
A groundbreaking piece of legislation has passed through Congress and is seeing the beginning stages of implementation across the country. The Family First Prevention Services Act (FFPSA), originally designed as its own bill, has been passed into law attached to a government spending bill. The FFPSA has implications for all child welfare providers in the United States but finally brings the federal government in line with what child welfare studies have been saying for years: kinship care is the most effective form of foster care. The provisions of the FFPSA pave the way to move foster care away from a system that relies on people who are, effectively, strangers to the children being placed with them and bolsters states’ ability to support and grow kinship care communities. To do this, the bill will divest from congregate (group home) care and shift funds into what could be called the “Foster Care New Deal.” This legislation has two primary approaches – it will create prevention services and family supports to address the causes that lead to foster care placement while developing the infrastructure relative caregivers need to allow them to care for the children for whom prevention services were insufficient. What Does the FFPSA Do? The first approach, prevention services, has the goal of reducing the need for child welfare systems entirely. Through the establishment of mental health services, substance abuse treatment and prevention programs and in-home parenting skill programs, the FFPSA will help states work with biological parents to ensure that not only do their children get to experience bright futures but also that those children get to do so in their own home, with their biological family. The second approach addresses the growing trend of kinship care across the country. 
The FFPSA creates funding for states to develop and build what are known as Kinship Navigator Programs (KNPs) – KNPs are critical for preparing relative caregivers to handle raising their relatives’ children while under guidance of the state. Before the FFPSA, however, there were no federal guidelines for what a given state’s KNP should look like or how they would be funded, which left some states unable to provide services, programs or financial assistance to relative caregivers. Under the new regulations, states will now be able to claim a 50 percent reimbursement for expenses related to Kinship Navigator Programs, reducing the burden on state budgets and ensuring that informal kinship caregivers can still get the funds needed to ensure the best outcomes for the children in their care. These shifts in the federal approach to kinship care won’t happen overnight, however, nor will they happen for free. How Will The FFPSA Make These Changes? To help fund these changes, the bill asks states to divest from what is known as “congregate care” – group homes and residential treatment facilities that care for foster children in a group setting without the use of traditional foster parents. These facilities often serve children with special emotional, behavioral or developmental needs, but as Foster and Adoptive Family Services has previously reported, states are increasingly moving away from the congregate care model. By reducing federal expenditures for congregate care, the FFPSA creates funding that will enable states to implement new prevention services and supports for relative caregivers that the child welfare community agrees are better for children in care. The implementation process for the FFPSA stretches from 2018 all the way to 2027. Most provisions, however, are set to be implemented before the start of 2020 – roughly a year and a half from the time this article was published. 
Below is a brief overview of the changes that will be implemented this year: Upon Enactment of the FFPSA: The Federal Government will: - Provide technical assistance to states so they can share best practices for prevention services - Create a clearinghouse to develop and manage standards for prevention services - Collect data and conduct evaluations relating to existing and future prevention services - Amend and/or reauthorize funding programs to be more flexible in the ways states may use these programs’ resources to implement FFPSA changes - Begin development on an interstate case processing system The State Governments will: - Establish health care protocols to prevent misdiagnosis of mental illness, disorders or disabilities for foster children - Collect information and report on children in non-foster home settings By October 1, 2018: The Federal Government will: - Provide guidance for the practice criteria for prevention services - Identify model licensing standards for foster homes which states will need to abide by The State Governments will: - Be able to receive federal reimbursement for expenditures related to the development of evidence-based kinship navigator programs - Document steps being taken to monitor and prevent child maltreatment fatalities (See: Foster Care Negligence, Abuse and Death) - Establish procedures for criminal record and child abuse/neglect background checks for any adult working in group care settings that have foster placements. (Click here for a full implementation schedule) These changes represent a turning point in the nation’s child welfare systems. In a statement for Casey Family Programs, Dr. William C. 
Bell, President and CEO, said: "This legislation makes it clear that our national child and family well-being response systems will not operate as though it is possible to fully address the well-being of children, without addressing the well-being of their families and their communities."

However, as previously reported, some states have more work to do than others. In California, recent efforts to enable relative caregivers to collect the same benefits as foster parents have resulted in a number of false starts and stumbles, with kinship parents waiting for long periods of time to receive appropriate compensation. The FFPSA will serve as a guide to help states improve their internal systems and properly support relative caregivers.

The FFPSA and New Jersey

As a leader in the child welfare community, New Jersey has already acknowledged the value of relative caregivers and the critical need to help families before foster care placement becomes necessary. As the 2017 Outcomes Report and Executive Summary from the state's Department of Children and Families acknowledges, "Removing a child from their home can have significant impact on, and create additional trauma for, the child and parent." Currently, there are 41,945 children benefiting from in-home services that help their families avoid a potentially painful and traumatic entry into the foster care system.

These in-home efforts, however, began almost fourteen years ago. From 2004, when New Jersey began emphasizing in-home care, to April of 2018, out-of-home placements were cut nearly in half, from more than 12,000 to 6,225. The Commissioner's Monthly Report shows that of those 6,225 children in out-of-home placement, more than half are currently in formal, state-sponsored kinship care. However, despite an infrastructure that is in apparent alignment with the FFPSA, New Jersey will still have to be ready to adjust to the specifics of these changes, including prevention services and model licensing.
In October, the Foster and Adoptive Family Services management team and board of directors will be convening for a strategic planning meeting to discuss how New Jersey can prepare for the specifics of these federal changes and guidelines as they are implemented over the coming years.

To learn more about the FFPSA, you can find a summary of the changes from the Children's Defense Fund here. To find specifics about Kinship Navigator Programs and how they function across the country, click here. To take a closer look at how kinship care has been on the rise, click here.
As climate change becomes a widely acknowledged global issue and more countries bring forward efforts to combat it, the hope is that countries will link their schemes to tackle it together. Australia and the European Union are one pair that could agree to link carbon emissions trading schemes as a way to broaden their impact.

The Australian government released plans two months ago to put a price on carbon emissions, imposing a tax from July 2012 before moving to a carbon trading system in 2015. According to European Commission President Jose Manuel Barroso, this is an important step both economically and environmentally. Europe has the world's largest carbon market in its emissions trading scheme, which was launched in 2005 and forces factories and utilities to buy carbon permits to cover their emissions.

Australian Prime Minister Julia Gillard is fighting for public support for the proposed carbon reduction scheme. While it remains controversial, Barroso has praised the Gillard government for its plan. He wants to enlarge the EU carbon market and bring more attention to greenhouse gas emissions in these trying economic times.

Leaders of 193 countries will meet for the next annual United Nations climate summit in November in Durban, where disagreements between rich and poor countries could continue over whether or not to extend the current climate protocol. Discouraging talk dimmed hopes for compromise and partnership after the 2009 Copenhagen meeting, where U.S. President Barack Obama and other leaders failed to agree on a new deal for slowing emissions and limiting global warming. The United States and China are the world's largest carbon emitters and have yet to sign up for any sort of emissions caps, although China now has internal plans for regional pilot emissions trading schemes.
The previous head of the United Nations Framework Convention on Climate Change stepped down after the unsuccessful 2009 discussions. While some countries remain hesitant to put caps on carbon emissions, in the near future Australia and the European Union could serve as examples to others of how to successfully reduce emissions within large industrialized economies.
Currently, the average American salary is five times higher than in the mid-1970s. However, due to both rising inflation and growing social stratification, the real salaries of some American employees are clearly lower than they would have been forty years ago. A commentary from Marcin Lipka, Cinkciarz.pl senior analyst.

Judging from basic data regarding salaries in the USA, one could assume that quality of life has clearly improved there over the past few decades. In 1975, the average hourly wage was less than five dollars; currently, it is above 25 dollars. According to official data from the Census Bureau, the average income of an American family in 2015 was 79k dollars, whereas forty years ago it was less than 14k dollars.

However, this data doesn't fully reflect the actual purchasing power of people who work in the United States, because the average is inflated by people with very high incomes. Therefore, the median (half of earners fall below this value and half above it) is crucial in the case of salaries. The presented values should also account for inflation: over the past forty years, the price of a statistical basket of goods and services has become four times higher, and health care costs have increased tenfold. We should also categorize American employees by sex as well as by age. Only by looking at the statistics this way will we be able to see who is earning the most and which social groups' incomes have decreased over the past forty years.

Men earn less

In 2015, the median income of an American household was 56k dollars, which is 23k dollars less than the average. Forty years ago, the median income was 11.8k dollars. However, if this median were expressed in the dollar's current value (which accounts for inflation since then), half of American employees would have earned more than 47k dollars in 1975.
Therefore, real household income increased about 20% over the past forty years, which gives an average growth of roughly 0.43% per year. Even this modest figure may be overstated by the strong increase in the percentage of working women (from 43% to 54%). And even though the percentage of working men decreased over that time (from 71% to 66%), the larger participation of women in the American labor market has likely had a positive impact on household incomes.

It's also worth noting that over the past four decades, the real income of American men over 18 years old decreased. According to the latest Census Bureau data, this figure was 38.2k dollars, while in 1974 it was 39.1k dollars (adjusted for purchasing power). The real income of women, however, increased from 14k to 24k dollars. The decline in the real income of American men is especially vivid in the 25-34 and 35-44 age brackets, where it fell 15% (from 44k to 37k dollars) and 10% (from 54k to 49k dollars), respectively.

This raises two questions. One: have real wages been stable for everybody? And two: if they haven't, who has received a significant portion of the increase?

Real income doubled for 5% of employees

According to the OECD, the productivity of American employees has increased 70% since 1975. Theoretically, this should increase real incomes. However, the average real income per person of working age in 2015 was 31.7k dollars, whereas forty years before it was 19.3k dollars, an approximately 65% increase. Moreover, this growth has been captured by people whose incomes are far above the median. Essentially, the income level has not changed for approximately 40% of people who earn the least, while the incomes of the wealthiest 20% increased from 120k to 200k dollars, and the incomes of the wealthiest 5% almost doubled (from 182k to 350k dollars).
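The "roughly 0.43% a year" figure above is easy to reproduce. Below is a minimal Python sketch; the dollar values are the rounded figures quoted in the text, so the result lands near, rather than exactly on, the quoted rate:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Real median household income: ~47k dollars in 1975 vs ~56k dollars in 2015.
growth_per_year = cagr(47_000, 56_000, 40)
print(f"{growth_per_year:.2%}")  # -> 0.44%, close to the quoted 0.43%
```

The same rounded inputs also give the total growth mentioned in the text: `56_000 / 47_000 - 1` is about 0.19, i.e. close to 20% over four decades.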
A 2018 report convened by the United Nations Intergovernmental Panel on Climate Change concluded that the Earth's temperature has risen by 1°C compared to pre-industrial levels and is likely to increase by a whopping 2°C by the end of this century. This projection is in contrast with what 180 countries agreed in the Paris Climate Accord five years ago, namely keeping the temperature rise to 1.5°C. Exceeding that limit could result in coral reefs disappearing and extreme weather events occurring, among other effects, which would inflict a minimum of $8tn in economic damage globally.

On a positive note, it is still possible to keep warming to 1.5°C by the end of the century. In the last decade, there has been a shift from fossil fuels towards renewable alternatives. This shift is due to an expected peak in oil demand as early as 2025 (driven by depleting oil reserves) and a fall in demand from that point onwards. These projections have pushed significant technological developments in the solar, wind and transportation sectors, which have brought us closer to a carbon-neutral future.

What has changed in the last decade?

In the aftermath of the 2008 crisis, massive investment into clean energy paved the way for rapid technological advancement. We saw an increase in the availability of solar, wind and transportation technologies, and remarkable cost reductions within these sectors. To put this in perspective, between 2010 and 2018, prices of solar energy and wind energy fell by 77% and 35% respectively, and the electric vehicle (EV) industry saw rapid growth in just a decade. The improvements in technology are reflected in both declining costs and increased capacity for energy storage.
In 2007, the price of one solar photovoltaic (PV) module (a core device used to generate electricity from sunlight) was $4.1/W. Between 2007 and today, prices fell nearly thirteen-fold to $0.3/W. In wind energy generation, turbine costs have fallen 38% on average since 2009, and numerous performance improvements have increased yields.

Unlike oil and gas, which are readily available to extract, solar and wind are intermittent, making energy storage an important issue. To tackle this, the past decade saw advancements in large lithium-ion batteries, which were necessary to store more solar and wind energy. Such developments have made possible the tenfold increase in wind energy generation over the past decade. Whilst sources such as hydropower, biofuel or geothermal energy did not get much attention, the rise in renewable energy generation relative to total generation was still notable: it reached 26.3% by 2018, and projections suggest that by 2050 half of the world's energy generation will be from renewables.

Due to improved, and hence cheaper, technologies, energy readiness in the last decade has pushed countries to make radical changes. We have all become familiar with the pledges made by governments on an almost daily basis. Numerous nations have announced comprehensive climate targets, including Japan, China, South Korea, member states of the European Union, the United Kingdom, and the United States under President-elect Joe Biden. The most recent is Boris Johnson's ten-point plan for a green industrial revolution in the United Kingdom. With the motto of 'building back better' in mind, he pledged £12bn for investment in green energy. According to the plan, this could potentially create 250,000 green jobs, going hand-in-hand with economic growth. He also pledged to ban new petrol and diesel-powered vehicle sales by 2030, following the Netherlands, California and many others.
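The module-price decline quoted above can be checked in a few lines of Python. The 13-year span between 2007 and roughly the time of writing is an assumption on our part, so the implied annual decline is an estimate:

```python
def fold_change(old_price: float, new_price: float) -> float:
    """How many times cheaper the new price is versus the old one."""
    return old_price / new_price

def avg_annual_decline(old_price: float, new_price: float, years: int) -> float:
    """Average yearly price decline implied by the start and end prices."""
    return 1 - (new_price / old_price) ** (1 / years)

# PV module prices: $4.1/W in 2007 vs ~$0.3/W today (assumed ~13 years later).
print(f"{fold_change(4.1, 0.3):.1f}x cheaper")            # -> 13.7x, i.e. "nearly thirteen-fold"
print(f"{avg_annual_decline(4.1, 0.3, 13):.1%} per year")  # -> 18.2% average annual decline
```

An average decline of close to a fifth per year, sustained for over a decade, is what compounds into the thirteen-fold drop described in the text.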
These pledges by major economies, in turn, triggered an even more aggressive response from the private sector. Attitudes towards clean energy firms in the stock market have changed significantly. Investors poured a record $36bn into climate-related technologies, up from $17bn in 2015. This is clearly reflected in the surge of clean energy companies in the stock market: over the last 12 months, the Global Clean Energy ETF (ICLN) rose by 68.8%. To be specific, shares of First Solar, a leading solar panel manufacturer in the US, are up 47% in 2020 as a result of growing demand for solar panels in the US. Studies also show that investors are significantly less hesitant to include clean energy stocks in their portfolios than they were a decade ago. All in all, we can see a change happening.

How did major oil companies react to this massive and rapid shift in energy demand? BP responded with a series of pledges to radically restructure the company, while its European and American peers have stayed silent on diversifying their energy portfolios. BP is so far the only company that has vowed to restructure with the aim of reducing its production of oil and gas by 30% by 2030. Although it has been argued that this pledge has some loopholes, it is quite important for the shift towards renewable alternatives.

With regards to the growing transportation industry, two significant factors have moved the market: improved battery technologies and government bans on petrol and diesel-powered vehicles. Between 2011 and 2019, global electric vehicle (EV) sales jumped from 50,000 to 2 million. Tesla leads this market with a massive 18% share, higher than the combined total of its next-biggest competitors. BYD, a Chinese car manufacturer, has tripled its value over the last year and has been enjoying support from the Chinese government for years.
We see a similar trend in Europe, as car manufacturing giants such as Volkswagen and BMW are heavily retooling their production lines in response to the growing EV market and developments in government policy.

With September 2020 behind us as the warmest September on record, we remain optimistic that the rapid advancements of the last decade will continue at an even higher pace and bring us closer to a carbon-neutral future.
Large-scale lending has been a key component of Britain's economic growth since the 1980s, aiming to incentivise British consumers to permanently increase their borrowing and consumption. This practice has boosted the profitability of the British financial system on an unimaginable scale. It has generated a permanent stream of cash flowing from consumers to the banking system through mortgages and consumer credit used to purchase durable and nondurable goods. It has also helped households navigate a prolonged period of house price increases that continues to this date.

Nearly a decade ago, this highly profitable but also highly leveraged system was shaken by the financial crisis of 2008. In the aftermath of the crisis, international financial markets became dysfunctional and credit dried up following the collapse of major banks and insurance companies in the US and the UK. Consequently, commercial banks in Britain were desperate for cash to cover their asset and liability mismatches. As a short-term solution, the Bank of England (BoE) tried to clean up the mess by adopting a highly unusual monetary policy: it pressed on with Quantitative Easing (QE), announcing that it would artificially create money by increasing its liabilities to support the financial system and, more broadly, the economy.

The bank bought government debt (gilts) and other assets from pension funds, commercial banks, insurance companies and non-financial firms from 2009 to 2011. As of February 2016, the Bank of England had purchased £375bn worth of assets from financial institutions and, consequently, increased the money holdings of the financial sector by the same magnitude. The effectiveness of QE is difficult to measure, according to Bank of England studies, but it was expected that low interest rates and an increase in liquidity would bring consumer lending back on track and push inflation up.
In addition, it would help the banks clear bad debts. The figure below compares the magnitude of the BoE policy with the overall level of lending to consumers in the UK since the beginning of QE. It clearly shows that the effectiveness of QE in boosting consumer borrowing has been negligible compared to the £375bn increase in the money holdings of the financial system. This leads us to conclude that banks mainly used the scheme to restructure their balance sheets.

Central bank support to the financial system and levels of consumer debt

At the same time, as shown in the figure below, lending to consumers, which had been expanding at a rate of 8.6% per year before the slowdown of 2008, suffered a significant reduction. Commercial banking in Great Britain has not been profiting as much from lending to consumers as it did before the crisis hit, which adds to its current struggle. Lending to consumers is expected to reach just beyond £1.6 trillion by the end of 2016, well below the pre-crisis path.

Lending from UK banks to consumers

This brief analysis reveals that QE did very little to revive lending to consumers in the UK, and the injection of £375bn into the economy was mainly used by banks to clean bad debts from their balance sheets. The sharp reduction in bank lending to consumers has eroded banks' revenues and profitability in recent years. Contrary to the aims of QE, banks will try to fill the gap by increasing, rather than decreasing, the interest rates charged on mortgages and credit cards. In addition, they are likely to introduce a wide range of paid services. Perhaps the foundations of the British economy, with its core centred on a robust banking system, suffered irreparable damage.
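To make the "well below the pre-crisis path" claim concrete, a small sketch can compound the pre-crisis 8.6% annual growth forward. Note that the £1.4tn starting level for 2008 is purely illustrative; the article does not give the base figure, only the growth rate and the 2016 outcome:

```python
def project(base: float, rate: float, years: int) -> float:
    """Compound a base value forward at a fixed annual growth rate."""
    return base * (1 + rate) ** years

# Hypothetical pre-crisis path: an illustrative GBP 1.4tn of consumer
# lending in 2008, growing at the pre-crisis 8.6% per year until 2016.
counterfactual_2016 = project(1.4e12, 0.086, 8)
print(f"GBP {counterfactual_2016 / 1e12:.1f}tn")  # -> GBP 2.7tn vs the ~GBP 1.6tn actually expected
```

Whatever base one picks, eight years of 8.6% compounding nearly doubles it, which is why the actual post-crisis lending level sits so far under the counterfactual trajectory.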
What Are the Advantages of FASB Accounting Standard Setting?

The Financial Accounting Standards Board is a non-governmental accounting-oversight organization that maintains the accepted accounting rules that businesses in the U.S. follow. If businesses do not follow the standards set forth by the FASB, they risk audits from the Internal Revenue Service and fines from the Securities and Exchange Commission.

Perhaps the most important advantage of the FASB standard setting for businesses is the uniform set of accounting principles it promotes. The FASB clearly states the generally accepted accounting principles that businesses must follow to avoid confusion. For example, the FASB prevents businesses from using one method for calculating inventory at the beginning of a fiscal year and finishing the year with another method. Without the accounting standards set forth by the FASB, businesses could use accounting methods that portray financial data inaccurately to investors.

The FASB standard setting provides a framework upon which potential accounting problems are identified and corrected. Because all businesses in the U.S. use the same accounting principles, any problems or inadequacies in the accounting process are quickly identified and reported to the FASB. The FASB then investigates the problem and, if needed, modifies or writes a new accounting rule for the accounting process. For example, if businesses find that reporting a certain type of liability on their income statement unfairly lowers their net income, they can appeal to the FASB so that it can identify problems with the standard setting.

The FASB is a private entity with no affiliation to the U.S. government. Despite this, the Securities and Exchange Commission relies on the FASB to set the accounting rules that all companies in the U.S. must follow. The SEC could technically create an accounting oversight board or government agency to set accounting rules. However, using the FASB eases the burden on the U.S.
government and lets the private sector dictate accounting rules.

International Accounting Standard

The FASB is advantageous because it actively promotes an internationally recognized set of accounting rules. Globalization has deeply connected foreign financial markets; a standard set of accounting rules would make financial reporting more accurate and fair between countries. One of the goals of the FASB is to make financial reporting more uniform globally with the cooperation of the International Accounting Standards Board.

Aaron Marquis is a University of Texas graduate with experience writing commercials and press releases for national advertising agencies as well as comedy television treatments/stories for FOX Studios and HBO. Marquis has been writing for over six years.
The outlook for Indian economic growth goes beyond the present: it sees India becoming a middle-income country. It states that India is currently one of the most dynamic and fast-developing emerging economies, and that India has gone through profound structural changes in the last decade and a half which have not been uniformly distributed across the Indian population. The report also says that India's economic growth will depend heavily on its government's policies, the management strategies of its various private and public sector institutions, and the domestic and foreign policies pursued by its private and public sector lenders and borrowers.

Economic analysts believe that this year's growth will be around 5% at the lowest, and they expect it to be slightly higher in the second and third quarters. If the analysts are right, this would mean an encouraging start for the Indian economy. The main reason behind this increase in economic activity is that more companies from abroad are investing in India. The primary reasons for this are the low cost of labor, better-quality infrastructure and better administrative standards in India, together with the liberalization policies adopted by the Indian government.

Foreign direct investment (FDI) is another important element contributing to the improving growth rate. Foreign investors prefer to invest in countries with easy and transparent rules for business, good-quality infrastructure, a stable and predictable tax structure, competitive markets, a robust legal system, and a favorable policy for foreign ownership of commercial plots and land. They also prefer to invest in a country where they can get reasonable returns.
A majority of such investors are from Europe, the USA and Japan, and they form a substantial chunk of India's overall foreign investment.

Various sectors of the economy in India

Manufacturing: India manufactures some of the world's best electronics and automobiles, which are exported worldwide. The consumer market for these products is enormous. Growth in this sector has been a major force behind Indian economic growth, and many small-scale and big manufacturers are also coming together to make their products available to the Indian consumer market.

Retail trade: Retail trade encompasses a large chunk of the Indian economy, as it is a core sector contributing to the country's overall growth. Consumers buy the same things again and again from different retailers, which makes retail the backbone of the Indian economy. If you look at retail trade closely, you will see that it is segmented into many parts, such as wholesale, value-added and local retail, and these segments are further divided depending on the location of the store.

Direct labor: The direct labor market is another crucial factor affecting the overall economy. Consumers of Indian goods also have an indirect route to obtaining them: there are many small-scale units that help consumers access Indian goods directly. This indirect route closes when there is a high level of competition among retailers. Competition leads to lower prices and higher quality, and helps consumers save a lot of money. The government is taking various measures to reduce the impact of competition on the economy's retail sector.

Consumers are spending on the Internet for shopping

The indirect impact will be felt through the employment rate. When more people start working online, more jobs will be created.
Taking these direct and indirect effects together, it can be seen that the indirect impact of the Internet is weighing on the Indian economic growth rate. Indirect consumer spending also leads to a reduction in the employment rate, and in such a scenario the country's economic growth is affected directly.
How Do I Know If I Go Into Profit With Crypto Currency – What is Cryptocurrency?

Simply put, cryptocurrency is digital cash that can be used in place of traditional currency. The word cryptocurrency combines "crypto" (from the Greek kryptos, meaning hidden) and "currency". In essence, cryptocurrency is just as old as blockchains. However, the difference between cryptocurrency and blockchains is that there is no centralization or ledger system in place; in essence, a cryptocurrency is an open-source protocol based on peer-to-peer transaction technologies that can be carried out on a distributed computer network.

One specific way in which the Ethereum Project is attempting to solve the problem of smart contracts is through the Foundation. The Ethereum Foundation was established with the aim of developing software solutions around smart contract functionality, and it has released its open-source libraries under an open license.

For beginners, the significant distinction between the Bitcoin project and the Ethereum Project is that the former does not have a governing board and is therefore open to contributors from all walks of life, while the Ethereum Project enjoys a much more regulated environment. As for the projects underlying the Ethereum platform, both are striving to provide users with a new way to take part in decentralized exchange. The major difference between the two is that the Bitcoin protocol does not use the Proof of Consensus (POC) process that the Ethereum Project uses. On the other hand, the Ethereum Project has taken an aggressive approach to scaling the network while also tackling scalability concerns. In contrast to the Satoshi Roundtable, which focused on increasing the block size, the Ethereum Project will be able to implement improvements to the UTX protocol that increase transaction speed and reduce fees.
The major difference between the two platforms comes from the operational system that the two teams use. The decentralized approach of the Linux Foundation and the Bitcoin Unlimited Association represents a standard model of governance that places an emphasis on strong community involvement and the promotion of consensus. By contrast, the Ethereum Foundation is committed to building a system that is flexible enough to accommodate changes and add new features as the needs of users and the market change. This model of governance has been adopted by several distributed application teams as a way of managing their projects.

Another distinction between the two platforms comes from the fact that the Bitcoin community is largely self-sufficient, while the Ethereum Project anticipates the participation of miners to fund its development. By contrast, the Ethereum network is open to contributors who will contribute code to the Ethereum software stack, forming what is known as "code forks". As with any other open-source technology, much debate surrounds the relationship between the Linux Foundation and the Ethereum Project. The Facebook team is supporting the work of the Ethereum Project by offering its own framework and creating applications that integrate with it.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9622209668159485, "language": "en", "url": "https://www.gov.scot/publications/land-reform-review-group-final-report-land-scotland-common-good/pages/28/", "token_count": 2018, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.01611328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0f94e044-7f0e-4c5d-a010-330746773dff>" }
Section 10 - Publicly Owned Mineral Rights

1 The ground beneath Scotland's surface is also an important resource, because of the economic value and potential of the materials or 'minerals' that can be extracted from it.

2 'Mineral rights' are a distinctive component of Scotland's system of land ownership. In this context, mineral rights might be summarised as a type of property right covering the authority to quarry, mine or otherwise extract sub-surface materials.

3 While the starting point in Scots law is that the owner of land owns everything above and below it, mineral rights can be owned separately from the surface of the land. Thus, generally, the mineral rights go with the land unless they have been sold or reserved by a previous owner, who may subsequently have sold them. Reserving the mineral rights has, for example, often been the practice of large private estates when selling land.

4 There are very few other examples in Scotland's system of land ownership of property rights in or over land that can be owned separately from the land itself (as a 'separate tenement' in legal terms). After mineral rights, the other conspicuous example of a right capable of being held as a separate tenement is the right of salmon fishing, which is discussed in Section 31. Other examples are a number of Crown property rights discussed in the following section.

5 The 'mineral rights' that might be reserved or sold are a general right, not specific to any particular mineral. However, the rights to a number of specific minerals are held in the national interest: the right to gold and silver, the right to petroleum (oil and gas) and the right to coal. The public ownership of the rights to these natural resources is a very important part of public land ownership in Scotland. The nature of the ownership and management of each is therefore described briefly below.
Gold and Silver

6 The right to gold and silver in all land in Scotland was reserved by the Crown early in the country's history and this continues to be the case. The current legislation, the Royal Mines Act 1424, is the oldest Act still in force from Scottish Parliaments before 1707. The other current legislation related to the right is an Act of 1592 and thus also amongst the oldest Acts.

7 The Crown in Scotland still owns the right to gold and silver throughout Scotland, except for over a few areas where the ownership was conveyed to others in ancient grants (Fig. 8). Scotland's Crown right to gold and silver is administered by the Crown Estate Commissioners (CEC) as part of the UK-wide Crown Estate and discussed further in Section 11.

Oil and Gas

8 During the First World War, when the British Government wanted to encourage companies to drill onshore for oil, the Petroleum (Production) Act 1918 was passed to confer on the Crown the right to control exploration and production in Great Britain and to grant licenses for that purpose.

9 The Board of Trade was made responsible for managing the Crown's right, with 'petroleum' defined in the Act to include "any mineral oil or relative hydrocarbon and natural gas existing in its natural condition in strata, but does not include coal or bituminous shales".

10 The Petroleum (Production) Act 1934 repealed the 1918 Act, while reaffirming that legal title to petroleum existing in its natural state in Great Britain was vested in the Crown. The Act provided for the Government to continue to license other persons to search for and get oil.

11 When the United Nations Conference on the Law of the Sea's Continental Shelf Convention 1958 was enacted into UK law by the Continental Shelf Act 1964, the rights over the UK continental shelf to the 200 nautical mile limit were vested in the Crown. The Act also applied the licensing provisions of the 1934 Petroleum Act to the UK continental shelf.
12 The Petroleum Act 1998 consolidated a number of the earlier enactments and contains the legislation that currently determines matters such as the vesting of ownership of oil and gas within Great Britain and its territorial sea in the Crown, the granting of oil licences, and rules relating to submarine pipelines and the decommissioning of offshore installations.

13 Today, the UK Government issues licences for oil and gas through the Department of Energy and Climate Change. An annual rental is charged under each licence, but there is no longer a royalty regime on production; this was abolished on 1st January 2003. The UK Government raises the majority of its revenue from oil and gas through taxation.

14 Thus, while all Crown property rights in Scotland belong to Scotland as a sovereign territory, the Crown's ownership of 'petroleum' in Scotland is administered by the UK Government.

Coal

15 The ownership of mineral rights in Scots law included coal until 1942. That year, the British Government nationalised coal reserves in the UK into the ownership of the Coal Commission. Coal in the Forest of Dean was an exception, to protect the ancient rights of the Free Miners of the Forest of Dean. The Coal Commission had been constituted as a statutory corporation under the Coal Act in 1938. In 1946, the coal industry was nationalised and the Coal Commission replaced by the National Coal Board (NCB).

16 The coal industry was subsequently privatised through the Coal Industry Act 1994. In that year, to replace the NCB, the Coal Authority was also established as a non-departmental public body under the Department of Energy and Climate Change (DECC). "The Coal Authority owns, on behalf of the country, the vast majority of the coal in Great Britain, as well as former coal mines". Amongst other responsibilities, it grants licenses for coal exploration and extraction.
17 The ownership of Scotland's coal reserves was therefore nationalised to the UK Government through the Coal Commission and its successor, the Coal Authority. This position appears to reflect the fact that the nationalisation of the coal industry in the 1940s involved the UK Government in substantial expenditure in acquiring the rights to existing mines and compensating the private owners. In that situation, claiming ownership of any unknown reserves through the legislation was an obvious step to take at the same time. However, the ownership of a separate property right across Scotland by the UK Government, as with coal reserves and the Coal Authority, appears to be unique. All other such presumptive property rights to particular assets in Scotland seem to be owned within Scotland by either the Crown or Scottish Ministers, rather than the UK Government.

18 The Review Group also noted this 'disconnect' in the current issues over restoring opencast coal mining sites in Scotland. The issues have arisen where a private owner mining a site has gone into administration and the insurance bonds placed with the local authority for the restoration of the site are inadequate to meet the costs. There are a number of these opencast sites in Scotland where this is currently an issue, with an estimated potential shortfall of £200 million. In this situation, where it appears there will be a need for public funds to contribute to the restoration, the Group considers there may be a role for the Coal Authority.

19 The history of opencast or surface coal mining in Scotland has been relatively short. It was introduced as an emergency measure during the Second World War and grew to a peak of 21 million tonnes in 1991. While production has declined significantly since, surface mining exceeded deep mining production for the first time in 2005. As the deep mining decline has continued, surface mining's percentage share of production has grown.
However, surface mining production is itself down to less than 5 million tonnes. As part of this decline, and contributing to issues over site restoration, the number of opencast coal mines producing coal in Scotland halved between 2000 and 2008.

20 The Coal Authority is, as described above, responsible for granting licences and leases for coal mining. In doing this, the Coal Authority seeks various securities from the operators to cover liabilities. However, while these include factors such as ground subsidence as a result of the mining, they do not include provisions for site restoration after surface mining. This is because, while "the Coal Authority owns the coal and abandoned underground coal working, once a surface mine is worked, the Coal Authority does not own any void that may be created or left above any seams". Therefore, with surface mining, it is the local authority that is responsible for putting in place securities for the site restoration through insurance bonds.

21 This position means that, while the Coal Authority requires payment for every tonne of coal mined, none of that income from surface mining contributes towards site restoration if there is a shortfall in the securities. All the money that the Coal Authority collects from these coal payments is remitted to the Treasury, except a small percentage retained by the Authority to carry out its licensing function. It has been calculated that since privatisation in 1994, some £15.1 million has been collected for coal worked in Scotland, mostly from surface-mined coal. In early 2014, the Scottish Government wrote to the UK Government to ask that "at least some" of the levies raised from coal produced in Scotland should contribute to the shortfall over opencast site restoration costs.

22 The £15 million raised by the Coal Authority is a relatively small amount in relation to the overall costs of restoration. The real issue within the current debate is the shortcomings of the bonds.
Having said this, the Review Group finds the degree of 'disconnect' between coal revenue and expenditure to be unacceptable and, within the context of devolution, suggests it would be more appropriate for Scotland's coal reserves to be owned by Scottish Ministers and for the licensing responsibility to be devolved to the Scottish Government.

23 While coal mining is a contracting industry within Scotland, it is still important in some areas. The Group considers that making the proposed change would enable a substantially closer integration of the licensing and planning consents governing coal mining in Scotland, as well as a wider integration of coal mining with other aspects of public policy in Scotland.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9485335946083069, "language": "en", "url": "https://www.lawhelp.org/dc/resource/supplemental-security-income", "token_count": 100, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.10888671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bf90ebc9-00ae-4201-a4af-f608e87e6cf4>" }
Supplemental Security Income

Authored By: Social Security Administration

- Read this in:
- Spanish / Español

SSI is short for Supplemental Security Income. It pays monthly benefits to people who are 65 or older, blind, or who have a disability, and who have limited income and resources. Monthly benefits can go to disabled and blind children as well as adults. Read more about SSI in this booklet from the Social Security Administration website.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9538939595222473, "language": "en", "url": "https://www.maan-ctr.org/magazine/article/2890/", "token_count": 1552, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0191650390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8acfafce-9074-4863-adf5-3fe8fd4dff00>" }
By: George Kurzom

Exclusive to Environment and Development Horizons (Afaq magazine): A recent international report stated that the rise in atmospheric greenhouse gases over the past few years is unprecedented. Despite the closures imposed by countries around the world to stop the Covid-19 pandemic, the concentration of greenhouse gases in the atmosphere has, according to data reported by the World Meteorological Organization last November, reached a record peak. The Organization estimates that although greenhouse gas emissions fell over the past year, at a rate of between 4.2% and 7.5%, owing to the significant halt in transportation and other activities, the impact of that decline on the continuing accumulation of greenhouse gases from human activity is negligible, even smaller than the expected natural annual variation. According to scientific calculations, emissions should be halved by 2030 to preserve a chance of preventing a rise in the global average temperature of more than one and a half degrees Celsius compared with the pre-industrial era. If the rise in the global average temperature continues, it could cause heat waves, droughts and floods, which in turn could push hundreds of millions of people into poverty.

A strategic shift towards renewable energies

Just before the Covid-19 crisis, the global energy market seemed to be heading towards a revolutionary transformation. After years of political-environmental struggle, and thanks to the falling cost of renewable energy production, technological improvements, and growing public awareness of the climate crisis, investment has flowed into wind and solar energy in many countries. More developed countries, such as the Scandinavian countries and Germany, now generate tens of percent of their electricity consumption from non-polluting energy sources.
Before the crisis, the environmental movement seemed to be recording successes in communicating its message, and a major shift had occurred in the public agenda and in governments' interest in the climate crisis. The biggest concern of green companies and activists around the world, however, is that the crisis will push governments to reduce or stop investing in renewable energies, and that the economic feasibility of the sector will decline amid the economic turmoil. For example, the collapse in oil prices in the past few weeks may delay the spread of electric vehicles, because driving a petrol-powered car will be very cheap, at least in the coming months. The good news is that an increasing number of countries and private companies are committed to reducing carbon dioxide emissions to zero. Only when carbon dioxide emissions are close to zero will the natural absorption of the gas by ecosystems such as seas and forests lead to a decrease in its concentration in the atmosphere.

Israel's low goals are not achieved

Last October, the Israeli government decided that by 2030 electricity produced from renewable sources will rise to 30% of total electricity production: the government approved a proposal submitted by the Israeli Ministry of Energy to raise the share of renewable energies by 13 percentage points, from 17% to 30%. The remainder of Israel's energy needs (70%) will be covered by natural gas, since (Israel) discovered huge reserves of natural gas about ten years ago off the Palestinian coast in the Mediterranean Sea. Natural gas and its facilities are among the most damaging sources of pollution for the climate and for ecosystems, and large amounts of greenhouse gases are emitted at every stage of the natural gas life cycle.
Oil and gas companies claim that generating power from natural gas in particular is less harmful than using oil or coal. Scientific data disproves this claim, however, and confirms that natural gas is very harmful to the atmosphere and to people's health. During the extraction, treatment and transport of the gas, far more methane is emitted than was expected. These emissions contain volatile organic compounds, some of which are almost certainly carcinogenic. In fact, the energy efficiency achieved in (Israel) during 2020 was less than half the stated goal (according to the report of the "State Comptroller of Israel"). The medium-term Israeli goal was to produce electricity from alternative energies (especially solar energy) at a rate of 10% of total electricity production capacity by the end of 2020. "Israel" has also been unable to achieve its stated goal of diverting 20% of private traffic to public transportation with the aim of reducing carbon dioxide emissions.

Palestinians are more vulnerable to the effects of climate change

Reducing carbon dioxide emissions is just one aspect of global concerns. We must also keep in mind that our region, and Palestine in particular, lies on the coast between the Mediterranean and desert climates, which increases its sensitivity and exposure to climate change at different levels. We may soon witness huge fires like those in Australia, tsunamis coming from the west, devastating floods, and, very likely, a rise in sea level. For comparison, the carbon dioxide emissions per capita of Israelis are among the highest in the world, at about 11 tons per year (Haaretz newspaper, 9/20/2018), while Palestinian per capita emissions do not exceed 0.5 tons per year.
That is, average Israeli per capita emissions are 22 times those of a Palestinian, and even higher than those of most European countries, where public transportation and energy conservation are more advanced than in Israel. When comparing the greenhouse gas emissions of the Palestinians of the West Bank and Gaza Strip with global or Israeli emissions, we find that they are minimal: Palestinian emissions (West Bank and Gaza) amount to 0.01% of total global emissions (Environment Quality Authority, 2016. The Initial National Communication on Climate Change submitted to the United Nations Framework Convention on Climate Change), no more than the emissions of a single huge Israeli military factory. Added to that is the weakness and powerlessness of Palestinian self-autonomy, which does not have the independent political capacity to work towards reducing climate risks. Despite all of this, the Palestinian Authority has framed environmental policy plans, the most prominent of which came in 2016, stating that the Authority would allocate $3.5 billion to climate change adaptation plans over the following ten years. The plans did not, however, indicate how this money would be obtained. It is clear that the future energy economy will not be centralized or monopoly-based, as is currently the case at the Palestinian level; it is moving towards distribution through regional or small municipal networks. Government plans and policies based on establishing a network of private power stations running on natural gas are therefore out of line with current economic trends in the energy market, which puts citizens at high economic risk and also contradicts Palestinian international commitments on climate change.
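The 22-fold per-capita comparison above is simple arithmetic; the sketch below just checks it using the figures quoted in the article:

```python
israeli_per_capita_t = 11.0      # tons of CO2 per person per year (article figure)
palestinian_per_capita_t = 0.5   # tons of CO2 per person per year (article figure)

ratio = israeli_per_capita_t / palestinian_per_capita_t
print(f"Ratio: {ratio:.0f}x")    # Ratio: 22x
```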
In the transportation sector, too, we can increase the use of public transport by allocating more lanes on existing roads for this purpose, as well as by using other measures such as "congestion charges" at city entrances; we can also start reducing parking spaces and reassigning them to pedestrians and cyclists. In addition, vehicle taxes can be planned in an environmentally smart way: for instance, eliminating tax benefits for diesel vehicles, imposing vehicle purchase tax according to pollution level, and replacing fuel tax with taxes based on driving distances.

Translated by: Rasha Abu Dayyeh
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9500063061714172, "language": "en", "url": "https://www.pindula.co.zw/Foreign_Direct_Investment_(FDI)", "token_count": 2909, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0673828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d9861222-158b-4846-9a91-2d395f62547f>" }
Foreign Direct Investment (FDI) is where an individual or business from one nation invests in another. This could be to start a new business or to invest in an existing foreign-owned business. For instance, Mr Shin, from China, has $1 million and wants to start a new company in Zimbabwe. He invests this, creating a new clothing manufacturing firm in the country. This would classify as FDI. However, the definition is slightly different when it comes to investing in a foreign company's assets. According to the IMF, a foreign direct investment is where the investor purchases over a 10 percent stake in the company. Most Zimbabweans have a very limited understanding of what it is. The subject of foreign investment has many facets and structures. One must make a distinction between FDI and foreign indirect investments (FIIs). FIIs involve corporations, financial institutions and private investors buying stakes or positions in foreign companies that trade on a foreign stock exchange. In general, this form of foreign investment is less favourable, as the investor can easily sell off the holding very quickly, sometimes within days of the purchase. This type of investment is also sometimes referred to as a foreign portfolio investment (FPI). Indirect investments include not only equity instruments such as stocks, but also debt instruments such as bonds. One must be cognisant of the fact that it is corporate entities, not governments, that engage in FDI activities and, as such, the primary business of a foreign investor is to make money or realise a return on investment. As a rule, FDI does not create an economy, but augments a host nation's economy. Some of the key features an investor looks for in a host country include strategic location, access to rapidly expanding markets, highly developed physical infrastructure, a stable and reliable regulatory environment, skilled manpower, low labour costs, low tax rates, political stability, a high level of (if not unrestricted) financial autonomy and access to capital, an open economy, etc.
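The 10 percent voting-stake threshold described above can be expressed as a simple rule. The sketch below is illustrative only: the function name and labels are my own, the boundary follows the common IMF convention of "10 percent or more", and real balance-of-payments classification involves more nuance than a single percentage:

```python
def classify_foreign_investment(voting_stake_pct: float) -> str:
    """Classify a cross-border equity purchase using the 10 percent
    voting-stake threshold described in the text above."""
    if voting_stake_pct >= 10.0:
        return "FDI"  # lasting management interest -> direct investment
    return "FPI"      # below the threshold -> portfolio (indirect) investment

# Mr Shin's wholly owned clothing firm is clearly direct investment:
print(classify_foreign_investment(100.0))  # FDI
# A 3% stake bought on a foreign stock exchange is portfolio investment:
print(classify_foreign_investment(3.0))    # FPI
```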
Some of the key features an investor looks for in a host country include strategic location, access to rapidly expanding markets, highly developed physical infrastructure, stable and reliable regulatory infrastructure, skilled man power, low labour costs, low tax rates, political stability, a high level of if not unrestricted financial autonomy and access to capital, open economy, etc. Types of Foreign Direct Investment (FDI) This is a form of Foreign Direct Investment where a parent company starts a new venture in a foreign country by constructing new operational facilities from the ground up. In addition to building new facilities, most parent companies also create long-term jobs in the foreign country by hiring new employees. Also known as cross-border merger and acquisition. The purchasing of an existing production or business facility by companies or enterprises for the purpose of starting a new product or service. This type of investment does not involve the new construction of plant operation facilities. Horizontal FDI is where funds are invested abroad in the same industry. In other words, a business invests in a foreign firm that produces similar goods. For instance BMW, a Germany based firm, may purchase Quest Motors, a Zimbabwean based firm. They are both in the automobile industry and therefore would be classified as a form of horizontal FDI. Vertical FDI is where an investment is made within the supply chain, but not directly in the same industry. In other words, a business invests in a foreign firm that it may supply or sell too. For instance, Hersheys, a US chocolate manufacturer, may look to invest in cocoa producers in Brazil. This is known as backwards vertical integration because the firm is purchasing a supplier, or potential supplier, in the supply chain. A conglomerate type of foreign direct investment is one where a company or individual makes a foreign investment in a business that is unrelated to its existing business in its home country. 
For instance, Implats, an SA mining conglomerate, may invest in Delta Corporation, a Zimbabwean beverages manufacturer. Since this type of investment involves entering an industry the investor has no previous experience in, it often takes the form of a joint venture with a foreign company already operating in the industry.

Benefits of Foreign Direct Investment

Boost to International Trade

Foreign direct investment promotes international trade as it allows production to flow to parts of the world which are more cost effective. For instance, Apple was able to conduct FDI into China to assist with the manufacturing of its products. However, many of the components are also shipped in from elsewhere, generally from within Asia. For instance, the camera is made by Sony, which sources its manufacturing in Taiwan. There is also the flash memory, which is sourced by Toshiba in Japan. The touch ID sensor is made in Taiwan, and the chipsets and processors are made by Samsung in South Korea and Taiwan. These are but a small handful of the components, but they demonstrate how inter-connected the supply chain has become between countries. Both Samsung and Sony have conducted investment in the likes of Taiwan, China, and Japan. As a result, this has created new jobs in the region and boosted trade between the nations.

Reduced Regional and Global Tensions

As we have seen with the Apple example, a supply chain is created between countries, shaped in part by the division of labor. For instance, South Korea may make the batteries, Taiwan the ID sensors, and Japan the cameras. As a result, they are all dependent on each other. If there is a revolt in Taiwan, the whole process could fall apart: without the ID sensors, the final product cannot be made, so the need for other components is also reduced. This means workers in Japan and South Korea are also affected.
As a result of this interconnected supply chain, it is in the interest of all parties to ensure the stability of their trading partners. So FDI can create a level of dependency between countries, which in turn can create a level of peace. To use a famous metaphor, you don't bite the hand that feeds you. In other words, if nations are reliant on each other for their income, then the likelihood of war is also reduced.

Sharing of Technology, Knowledge, and Culture

Foreign direct investment allows the transfer of technology, knowledge, and culture. For instance, when a firm from the US invests in another from India, it has a say in how the firm is run, and it is in its interest to ensure the most efficient use of its resources.

Diversification

From the business's perspective, foreign direct investment reduces risk through diversification. By investing in other nations, a company spreads its exposure; in other words, it is not so reliant on Country A. For instance, Target derives its entire revenue from the US. Should an economic recession hit Stateside, it is almost guaranteed to harm its profits.

Lower Costs and Increased Efficiency

Foreign direct investments can benefit from lower labor costs. Often, businesses will off-shore production to nations that offer cheaper labor. There is an ethical element to this that is often debated, but we will leave that aside for now; whether it is ethical or not is irrelevant to whether it is a benefit to the business. Although labor costs are lower, we must also consider productivity. For instance, one person in China may produce one unit for $1 an hour, while an employee in the US may be able to produce 20 units for $10 an hour. So whilst a Chinese employee is cheaper, they only make 1 unit per $1, compared to 2 units per $1 in the US.
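The wage-versus-productivity example above reduces to output per dollar of wages; a quick check with the text's hypothetical figures:

```python
def units_per_dollar(units_per_hour: float, hourly_wage: float) -> float:
    """Output per dollar of wages -- higher means more cost-effective labour."""
    return units_per_hour / hourly_wage

# Figures from the example above:
china = units_per_dollar(1, 1.0)    # 1 unit per $1 of wages
usa = units_per_dollar(20, 10.0)    # 2 units per $1 of wages
print(china, usa)                   # 1.0 2.0
```

So although the US worker's wage is ten times higher, each dollar of US wages buys twice as many units of output.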
Countries with lower tax regimes are usually those that are favoured. Examples include Switzerland, Monaco, and Ireland, among others. Furthermore, there are also tax incentives by which the foreign government offers tax breaks to investors in a bid to encourage FDI. Employment and Economic Boost When money is invested in another country, it creates jobs, new companies, and new factories/buildings. This brings about new opportunities for local residents and can stimulate further growth. Challenges facing Zimbabwe in attracting FDI Investor optimism following the November 2017 fall of the late former President Robert Mugabe has weakened as President Emmerson Mnangagwa’s government has been slow to follow through on reforms to improve the ease of doing business, and a protracted currency crisis strains the economy. The Transitional Stabilisation Programme, announced in 2018, includes structural and fiscal reforms that, if fully implemented, would resolve many of the economy’s fundamental weaknesses. The new government did move quickly to amend the restrictive indigenization (local ownership) law to apply only to the diamond and platinum sectors, opening other sectors to unrestricted foreign ownership. Nevertheless, investors remain cautious. Zimbabwe has attracted low investment inflows of less than USD500 million annually over the past decade. Between 2014 and 2017, foreign direct investment inflows fell from USD545 million to USD289 million, but rose to approximately USD470 million in 2018. The government announced its commitment to improving transparency, streamlining business regulations, and removing corruption, but the last two years have brought only modest progress. Foreign Direct Investment Statistics Foreign direct investment, net inflows (% of GDP) in Zimbabwe was reported at 3.0629 % in 2018, according to the World Bank collection of development indicators, compiled from officially recognized sources. 
Zimbabwe - Foreign direct investment, net inflows (% of GDP) - actual values, historical data, forecasts and projections were sourced from the World Bank in August 2020. Foreign direct investment is the net inflow of investment to acquire a lasting management interest (10 percent or more of voting stock) in an enterprise operating in an economy other than that of the investor. It is the sum of equity capital, reinvestment of earnings, other long-term capital, and short-term capital as shown in the balance of payments. This series shows net inflows (new investment inflows less disinvestment) in the reporting economy from foreign investors, divided by GDP.

|Foreign Direct Investment||2017||2018||2019|
|FDI Inward Flow (million USD)||349||745||280|
|FDI Stock (million USD)||4,688||5,433||5,713|
|Number of Greenfield Investments||7||18||17|
|Value of Greenfield Investments (million USD)||415||6,114||709|

According to a Reserve Bank of Zimbabwe report, foreign investment had fallen sharply, by 31%, by October 2020. Foreign investment inflows to Zimbabwe in 2020 fell by 23.7% to US$40.06 million compared to the same period in 2019, with analysts attributing the decline to the outbreak of the coronavirus (Covid-19) pandemic. According to a Monetary Policy Statement released by Reserve Bank of Zimbabwe (RBZ) governor John Mangudya on 18 February 2021, foreign investment declined from US$53.47 million to US$40.06 million in 2020, while international and diaspora remittances increased by 57.6% to US$1.002 billion. Zimbabwe has been struggling to attract significant foreign investment due to toxic policies and an unstable political environment, among other factors. According to the United Nations Conference on Trade and Development (UNCTAD) 2020 World Investment Report, FDI inflows decreased significantly to US$280 million in 2019, compared to US$745 million recorded in 2018.
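The World Bank indicator quoted above is simply net inflows divided by GDP, expressed as a percentage. In the sketch below the GDP figure is a hypothetical round number chosen for illustration, not an official statistic:

```python
def fdi_net_inflows_pct_of_gdp(net_inflows_usd: float, gdp_usd: float) -> float:
    """World Bank-style indicator: FDI net inflows expressed as % of GDP."""
    return 100.0 * net_inflows_usd / gdp_usd

# Hypothetical example: $470 million of net inflows into a $20 billion economy
pct = fdi_net_inflows_pct_of_gdp(470e6, 20e9)
print(f"{pct:.2f}% of GDP")  # 2.35% of GDP
```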
Zimbabwe ranked 140th out of 190 countries listed in the World Bank's 2020 Doing Business Report, gaining 15 places from the previous year's report. Policy inconsistency, administrative delays and costs, and corruption are some of the challenges hindering business facilitation in the country. Zimbabwe does not have a fully online business registration process, though one can begin the process and conduct a name search online via the ZimConnect web portal. In February 2020, the government passed legislation creating the Zimbabwe Investment and Development Agency (Zida) to act as a one-stop investment centre, replacing the Zimbabwe Investment Authority.

What to consider when investing in Zimbabwe

Zimbabwe's strong points in terms of attracting FDI include:
- abundant mineral resources (platinum, gold, diamond, nickel);
- agricultural wealth (maize, tobacco, cotton);
- potential for tourism development;
- membership of the Southern African Development Community (SADC);
- normalisation of relations with the international community.

The factors hindering foreign investment in Zimbabwe include:
- an economic and financial situation hit by a long period of hyperinflation;
- a shortage of cash;
- under-investment in infrastructure (especially energy infrastructure);
- a precarious food and health situation: the majority of the population depends on international aid;
- an AIDS prevalence rate among the highest in Africa and in the world.

Government Measures to Motivate or Restrict FDI

While the government of Zimbabwe has, since 2009, implemented a number of measures designed to attract foreign direct investment (FDI), many of its macroeconomic policies, such as the indigenization and economic empowerment laws, acted as significant deterrents. Following recent political changes, the new government amended the indigenization, or local ownership, laws to reduce the restriction to only the diamond and platinum sectors; other sectors are now open to unrestricted foreign ownership.
Moreover, the government has announced its commitment to improving transparency and removing corruption. Zimbabwe’s incentives to attract FDI include tax breaks for new investment by foreign and domestic companies and allowing capital expenditures on new factories, machinery, and improvements to be fully tax deductible. The government also waives import taxes and surtaxes on capital equipment. Tax incentives may be obtained in certain sectors such as pharmaceuticals, energy, construction, agriculture and mining. 
- Geoffrey Makina, , Zimbabwe Independent, Published: 7 June, 2019, Accessed: 12 August, 2020 
- Paul Boyce, , BoyceWire, Published: 18 July, 2020, Accessed: 12 August, 2020 
- , U.S. Department of State, Accessed: 12 August, 2020 
- , Trading Economics, Accessed: 12 August, 2020 
- Dumisani Nyoni, , The News Hawks, Published: 27 February, 2021, Accessed: 28 February, 2021 
- , Lloyds Bank, Accessed: 12 August, 2020
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9422010183334351, "language": "en", "url": "https://www.pv-magazine.com/2020/09/16/australia-and-germany-shake-hands-on-green-hydrogen-future/", "token_count": 624, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.07421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f2d9620e-5fe2-4ef4-a216-3cff865ebc3a>" }
From pv magazine Australia. Australia’s National Hydrogen Strategy has made another inroad in the global hydrogen market after signing a new agreement with Germany for a joint feasibility study to investigate the supply chain between the two countries on green hydrogen. Both Australia and Germany are pursuing green hydrogen as a source of energy with enormous potential. The Australian government is now inviting research and industry consortia to partner with German industry on this feasibility study, which will examine production, storage, transport and use of renewable hydrogen. The agreement follows similar deals by Australia with Japan and South Korea. Germany has committed itself to becoming greenhouse gas neutral by 2050, with the added aim of cutting its emissions by 55% from its 1990 levels by 2030. To that end, the European giant is hungry for the kind of clean energy sources with which Australia is wealthy beyond measure. Australian minister for trade, tourism and investment, Simon Birmingham, said that these kinds of partnerships “will be critical to further developing our emerging hydrogen industry and Australia’s future as a powerhouse in clean energy exports.” Similarly, minister for resources Keith Pitt said that clean hydrogen “is a transformational fuel that can be used to power vehicles, generate heat and electricity, and [be used] as a chemical feedstock in major industrial applications. Australia has what it takes to be a world leader in hydrogen production and exports.” The Western Australian (WA) state government, which is seeking to position itself at the heart of Australia’s future hydrogen economy, has welcomed the ‘joint declaration of intent’ between the two countries. WA regional development minister Alannah MacTiernan said that “Germany currently imports up to 70% of its energy and is eyeing renewable hydrogen for its future energy needs. 
Our government has already undertaken significant work over the past two years with the German government and industry to lay the foundations of our fledgling hydrogen industry.” MacTiernan added that she thought this partnership “will help drive forward our local hydrogen industry and support global efforts to reduce carbon emissions.” Of course, WA is not alone in its green hydrogen ambitions. Last week, the Green Hydrogen Australia Group received the green light for the first of three large scale green hydrogen plants in Queensland, with the first to be the AU$300 million (US$220 million) Bundaberg Hydrogen Hub. Australia’s minister for energy and emissions reduction, Angus Taylor, shared news of the international agreement by pointing to Australia’s future hydrogen industry’s potential to “generate 7,600 new jobs by 2050, many in regional Australia, with exports estimated to be worth around AU$11 billion a year in additional GDP.” Australian research and industry bodies can submit an expression of interest to the feasibility study at GrantConnect.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8839962482452393, "language": "en", "url": "https://economicskey.com/plblic-policy-toward-oligopolies-6350", "token_count": 230, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.447265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:11694eca-940a-49c3-945c-23eff22c9052>" }
PUBLIC POLICY TOWARD OLIGOPOLIES 
One of the Ten Principles of Economics in Chapter 1 is that governments can sometimes improve market outcomes. The application of this principle to oligopolistic markets is, as a general matter, straightforward. As we have seen, cooperation among oligopolists is undesirable from the standpoint of society as a whole because it leads to production that is too low and prices that are too high. To move the allocation of resources closer to the social optimum, policymakers should try to induce firms in an oligopoly to compete rather than cooperate. Let’s consider how policymakers do this and then examine the controversies that arise in this area of public policy.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9573163986206055, "language": "en", "url": "https://restless.co.uk/money/everyday-finance/a-simple-guide-to-credit-cards/", "token_count": 3454, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.049072265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d8761faf-6592-4601-80c9-4bdb89dfb3ad>" }
When used responsibly, credit cards can provide a safe, flexible way to pay for things, offering protection on purchases, rewards on spending and allowing you to spread out larger costs. However, if you’re not able to clear your balance quickly, credit cards can rapidly lead to increasing debt, high interest charges and damage your financial credibility along with your ability to apply for credit in the future. Here, we explain how different types of credit cards work, the pros and cons, and how to use them effectively, so you can decide whether a credit card is the right option for you. How do credit cards work? A credit card is a payment method where you’re essentially loaned the amount you want to spend on goods and services. Instead of using money from your bank account, credit card providers lend you the money, which you’ll be billed for monthly. The bill will include a statement outlining all your purchases, a total balance – which is the amount needed to pay the bill off in full – a minimum repayment, and a deadline for the payment. If you pay the balance you owe back in full, then you won’t usually have to pay any interest. However, if you fail to pay the full amount, any outstanding payments will be carried over to the next month, and you’ll be charged interest until you repay the total balance. This is usually backdated as well, meaning if you made a purchase at the beginning of the month, you’ll be charged a whole month’s interest on this amount. Always avoid using your credit card to withdraw cash. You’ll be charged fees of up to 4% with some providers, and unlike with purchases made on a credit card, cash withdrawals typically incur interest from the day you make the withdrawal, even if you clear the balance in full at the end of the month. If your credit card application is successful, your provider will set you a credit limit and interest rate. 
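The backdating described above means interest effectively accrues daily from the purchase date. A minimal sketch of how a month of unpaid balance adds up; the 22% APR is an illustrative assumption, and real card issuers typically compound daily rather than using the simple accrual shown here:

```python
def monthly_interest(balance: float, apr_pct: float, days: int = 30) -> float:
    """Simple daily-accrual sketch of interest charged on an unpaid balance.

    Assumes simple (non-compounding) daily interest for clarity; real issuers
    usually compound daily and apply their own day-count conventions.
    """
    daily_rate = apr_pct / 100 / 365
    return balance * daily_rate * days

# A £500 purchase left unpaid for a full month at an assumed 22% APR:
print(round(monthly_interest(500, 22.0), 2))  # 9.04
```

Even this simplified figure shows why carrying a balance month after month gets expensive quickly.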
Interest is what credit card providers charge you for the opportunity to borrow money, and a credit limit is the maximum amount that you can spend on your card (this can be anything from a few hundred pounds, to several thousand). This will vary with different providers and will reflect your personal circumstances, including your income and credit history. For example, loans to those with the best credit scores will typically have lower interest rates and higher credit limits than loans to those with a less than perfect credit rating. You can get tips on how to improve your credit score here. What are the advantages of using a credit card? They offer flexible payment options Credit cards offer flexible payment options, such as the ability to spread purchases out, or ‘buy now, pay later’, which can be very convenient. For example, if you don’t have the necessary money available until your next payday, or if you want to make a significant purchase, it means you can temporarily borrow from your credit card or spread the cost over monthly payments. Just be sure that you are not tempted to spend more than you otherwise would, purely because you have the extra payment flexibility. Credit cards are safer than cash If you lose your credit card or it’s stolen, you can call your bank immediately and they’ll be able to cancel it for you. This means that if your card’s stolen or used fraudulently, you’re much more likely to get your money back than if the same happened with cash. Credit cards offer purchase protection If you buy something using your credit card and something goes wrong, your provider has to offer you a level of protection and help to get your money back. This can cover situations such as if a product that you bought never arrived, or if a company that you booked a holiday through went out of business. Under section 75 of the Consumer Credit Act, you’re protected if you make a purchase using your credit card for anything over £100 and up to £30,000. 
The law means that your credit card provider has equal responsibility with the company you purchased from if there’s an issue with anything you’ve bought, or if the company goes bankrupt. You can read more about credit card payment protection here, including what section 75 of the Consumer Credit Act covers, when you’re covered, and how to claim money back. They may come with freebies, rewards or promotional offers Credit cards can sometimes bring added bonuses such as airmiles, reward points, and cashback. You can find out more about reward credit cards here. Some cards also offer lengthy 0% introductory periods on balance transfers or purchases, giving you the opportunity to pay back what you owe over time without being hit by steep interest charges. What are the disadvantages of using a credit card? High-interest payments can lead to debt When using a credit card, if you don’t pay back what you borrow each month, your provider will usually start charging you interest on the outstanding amount. Unless you are on a promotional interest rate, the interest rates on most credit cards are typically very expensive and can rack up fast – making it harder to pay down your balance. If used incorrectly, they can negatively impact your credit score Parts of your credit card activity will be documented in your credit report, including how many cards you have, your credit limit, and how many cash withdrawals you’ve made. Crucially, credit reports also record your repayment history for up to six years, which will cover any missed or late payments. Typical behaviour that can negatively affect your credit score includes withdrawing cash, missing or being late with your repayments, fully utilising your credit limits and only ever making minimum repayments. These are some of the behaviours that can negatively impact your credit score and make it harder to apply for credit in the future – such as with a mortgage application. 
They can incur extra fees and charges Charges will vary from card to card, however most credit cards will charge additional fees for certain things. These could include penalties for exceeding your credit limit or missing a payment. You’ll also be charged interest and an additional fee for withdrawing cash, usually around £3 for each transaction. Similarly, credit card cheques can be expensive because they’re treated like cash withdrawals with higher interest rates and often some added fees on top. They can be expensive to use abroad (and online in shops based abroad) Most cards are expensive to use abroad and will charge you steep fees to withdraw cash or purchase something, though charges will vary depending on which card you have. This is true whether you are physically travelling in a country, or simply shopping online from the comfort of your own home if the website is based abroad and charges in a foreign currency such as dollars. However, there are some credit cards that are specifically designed to be very cost effective when used abroad. You can read more about cheap credit cards to use abroad here. How can I use my credit card effectively? To make sure you get the most out of your credit card without affecting your credit score or running into unwanted debt, here are some helpful points to consider. Stay on top of your bills and consider setting up a direct debit When your bill comes through each month, it might seem tempting to just push back a little amount until next month, and then a little more the next. But it’s important to stay on top of your payments and be wary of continually paying only the minimum repayment because interest will add up quickly. To avoid getting into debt, aim to pay off your credit card bill in full each month in order to avoid paying interest. However, if you’re unable to do this every month, try and pay off as much as you can to reduce the amount of interest you will be charged. 
The minimum repayment amount will depend on a few things including how big your bill is, and who your credit card provider is. However, a typical minimum repayment will usually be around 1% to 2.5% of the total amount each month, or between £5 and £25, whichever is higher. Usually, this will include any interest or charges you’ve incurred. If you’re worried about staying on top of your payments, consider setting up a monthly direct debit. This will automatically transfer money from your bank account to pay your credit card bill each month, so you won’t have to worry about manually doing it yourself. Setting up a direct debit is quick and easy, and can usually be done online via banking apps. Information on how to do this will differ depending on your bank, so it’s best to contact them directly. Don’t spend anything you don’t think you’ll be able to repay Credit card payment methods such as ‘buy now, pay later’, or spreading purchases over several monthly repayments can offer greater freedom and financial flexibility. While this may initially seem great, before opting for these, take time to consider whether you’ll be able to repay the amount, taking into consideration any interest you might have to pay on top. Avoid using your credit card to withdraw cash or write cheques As mentioned earlier, credit cards charge you to make cash withdrawals and interest will be added to your account immediately, even if you pay off the balance before the due date. Not only is it an extremely expensive way to borrow money, but cash withdrawals might show up on your credit record and could impact any future credit applications you make. So avoid doing this at all costs. Avoid exceeding your credit limit If you exceed your credit limit, you’ll face additional charges, so make sure you stay within the limit set by your provider. 
It’s also helpful to make sure you don’t fully utilise any credit limit you have, as consistently borrowing the full amount each month can make your credit history look like you’re not fully in control of your borrowing. If however, you’ve accidentally exceeded your credit limit by a few pounds, then it might be worth contacting your credit card provider immediately and requesting it be cleared free of charge. It’s also important to be aware that some places like hotels, car rental agencies, and tour operators may use your credit card for a pre-authorisation. This is so that if you use services such as the mini bar or the spa and don’t pay for it, they can charge you. If this happens, it will involve them putting a hold on your credit card, for example £500. When in place, you won’t be able to spend that money, meaning your credit limit can be affected. Even after they remove the hold, it can take a few days for your credit limit to return to normal. Therefore, it’s important to be mindful of any purchases you make where this might happen to help you ensure you remain within your credit limit in these situations. 
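The minimum repayment rule quoted earlier, around 1% to 2.5% of the balance or a £5-£25 floor, whichever is higher, can be sketched in a few lines. The 2.5% rate and £5 floor below are the typical figures from the text, not any specific issuer's terms:

```python
def minimum_repayment(balance: float, rate_pct: float = 2.5,
                      floor: float = 5.0) -> float:
    """Typical minimum repayment: a percentage of the balance or a fixed
    floor, whichever is higher (never more than the balance itself)."""
    return min(balance, max(balance * rate_pct / 100, floor))

print(minimum_repayment(1200))  # 2.5% of £1,200 -> 30.0
print(minimum_repayment(150))   # 2.5% would be £3.75, so the £5 floor applies -> 5.0
```

Note how small the minimum is relative to the balance: paying only £30 a month on a £1,200 balance is exactly why interest can pile up so quickly.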
Whilst many 0% balance transfer cards will have an initial, one-off balance transfer fee to pay, the aim is that you will be able to pay back what you owe without being charged interest, potentially saving you significant sums of money. Purchase cards: These cards can help spread the cost of a large purchase. They usually offer an interest-free period, which can make them a cheaper way to borrow. Dual credit cards: These cards combine the benefits of balance transfer cards and purchase cards. It can help you spread out the cost of large purchases, while also reducing the amount of interest you pay, usually in exchange for a fee. Reward cards: With this type of card, you get rewards for using it. For example, you might get air miles, cashback, or shop discounts based on a percentage of how much you spend. These cards often come with high interest rates so are usually most suited to those who will be paying their bills off in full each month. Otherwise you are in danger of the interest charges outweighing the benefit of any rewards. Money transfer credit cards: These cards essentially let you borrow cash. You can transfer money from the card into your bank account, usually in exchange for a small fee. They’re often used to help clear bank overdrafts. Credit builder cards: If you’ve got a low credit rating, these cards can help you build your credit history. Because they’re designed for people seen as high risk applicants, these will typically have high interest rates and low credit limits. However, if you successfully pay your monthly bills on time and in full, these cards can help improve your credit score over time, and therefore increase your chances of being able to borrow again in the future. Travel credit cards: If you’re going abroad, specialist travel credit cards can help reduce the cost of using a credit card in another country. To be eligible for most credit cards, you’ll need a good credit score. 
There are a number of services available today that allow you to check your credit file for free – MoneySuperMarket’s Credit Monitor tool is one such service where you can check your credit score and see which credit cards you might be eligible for. And if you’re seeking further guidance, you can read more about how to choose and apply for a credit card here. Is a credit card the right option for me? - If you don’t feel confident that you won’t spend more than you can afford to repay, then a credit card might not be right for you. If you’re already struggling with debt, then it’s much better to address any existing bills and debts before taking on any more. If you’re struggling with debt and would like help, you might find our article Serious debt: your options explained useful. - If you already have a poor credit history, it’s worth considering whether your credit card application is likely to be successful. Remember, any credit applications will show up on your credit history, regardless of whether they were successful or not. Too many credit applications can put lenders off because they’ll deem you to be desperate to borrow and therefore a higher risk applicant. If you’d like to have a look at your current credit rating for reference, consider using one of the three main UK credit scoring agencies – Experian, Equifax, and TransUnion (formerly Callcredit) where you can check your credit score. If you don’t want to pay a fee to access your credit score, a number of services are now offering free access to your credit score. For example, MoneySuperMarket’s Credit Monitor tool enables you to check your credit score and report free of charge using data from TransUnion. Experian has a free service that enables you to sign up and check your credit score with them, and ClearScore is another free credit checking service that accesses Equifax data. 
Equally, if your application is successful but you’ve got a poor or limited credit history, credit card providers may charge you higher interest rates because you’ll be deemed a higher risk of not being able to make your repayments. Interest rates can be sky high and debt can easily pile up very quickly if you don’t pay your bills off in full each month, so it’s important to carefully consider if you think you’ll be able to repay what you borrow before committing. There are various things to think about when applying for a credit card. The possibility of getting into debt and seeing the fees and charges rack up might, quite rightly, seem daunting or put you off altogether. If however, you are confident that you can stay on top of your payments and pay back what you spend each month, credit cards can be an extremely helpful payment tool with some added benefits such as rewards and purchase protection. Have you got a credit card and do you have any other tips to share? If so, we’d be interested in hearing from you. You can join the money conversation on the Rest Less Community forum or leave a comment below.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9611184000968933, "language": "en", "url": "https://www.daviddarling.info/encyclopedia/M/missing_dollar_problem.html", "token_count": 294, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.4140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d58e5084-8901-4393-8877-993797708c77>" }
missing dollar problem 
Three people have dined at a restaurant and received a total bill for $30. They agree to split the amount equally and pay $10 each. The waiter hands the bill and the $30 to the manager, who realizes there's been a mistake and the correct charge should be only $25. He gives the waiter five $1 bills to return to the customers, with the restaurant's apologies. However, the waiter is dishonest. He pockets $2, and gives back only $3 to the customers. So, each of the three customers has paid $9 and the waiter has stolen $2, making a total of $29. But the original bill was for $30. Where has the missing dollar gone? (See solution below.) A version of this problem first appeared in R. M. Abraham's Diversions and Pastimes in 1933.1 See also nine rooms paradox. There is no missing dollar (of course!). Adding $27 and $2 (to get $29) is a bogus operation. They paid $27, $2 went to the dishonest waiter, and $25 went to the restaurant. You have to subtract $27 minus $2 to get $25. There never was a $29; it's a phony calculation designed to confuse the unwary. 1. Abraham, R. M. Diversions and Pastimes. London: Constable & Co., 1933. Reprinted, 1964.
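The correct accounting in the solution can be checked mechanically in a few lines:

```python
paid_each = 10 - 1            # each diner paid $10 and got $1 back -> $9
total_paid = 3 * paid_each    # $27 left the diners' pockets in total

restaurant_keeps = 25
waiter_pockets = 2

# The $27 splits cleanly into the restaurant's $25 and the waiter's $2:
assert total_paid == restaurant_keeps + waiter_pockets

# The puzzle's "$29" adds the waiter's $2 *on top of* a total that already
# contains it -- a meaningless sum, not evidence of a missing dollar:
bogus_total = total_paid + waiter_pockets
print(bogus_total)  # 29
```

The assertion is the whole resolution: $27 paid equals $25 to the restaurant plus $2 to the waiter, and nothing is missing.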
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9448941946029663, "language": "en", "url": "https://www.eria.org/research/measuring-the-pro-poorness-of-urban-and-rural-economic-growth-in-indonesia-20042014/", "token_count": 242, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.03369140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:43e00b9b-25be-476c-975e-4f7cc7277602>" }
Measuring the Pro-Poorness of Urban and Rural Economic Growth in Indonesia, 2004–2014 This study measures the pro-poorness of urban and rural economic growth by region from 2004 to 2014 in Indonesia using pro-poor growth indexes, with data from the National Socio-Economic Survey (Susenas). It also conducts a probit analysis to explore the determinants of poverty. All regions (Sumatra, Java–Bali, Kalimantan, Sulawesi, and East Indonesia) experienced a substantial increase in expenditure inequality in both urban and rural areas; thus, the change in poverty incidence due to redistribution effects is positive. Apart from East Indonesia, they reduced the incidence of poverty in both areas, but their growth was not pro-poor in the strict sense. According to the pro-poor growth indexes, urban areas performed better than rural areas; in most regions, the growth of urban areas was moderately pro-poor, while that of rural areas was weakly pro-poor or anti-poor. The government needs to take urban–rural and regional differences into account when formulating poverty alleviation policies and programs since these differences would affect economic growth and changes in inequality.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9636595249176025, "language": "en", "url": "https://www.genpaysdebitche.net/where-to-ethereum/", "token_count": 1087, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.39453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fe506957-be03-485a-ba54-823a1900f202>" }
Where To Ethereum – The term "Ethereum Cryptocurrency" is a fairly new term in the world of finance and relates to digital currency itself. What is Ethereum, you may ask? Well, it is a form of currency that is constructed on the "Ethereum" platform. So what does that mean, exactly? Now, digital currencies are actually just digital transactions between individuals. If you desire to send out cash abroad, all you do is convert the currency you're utilizing into whatever currency the recipient is utilizing. What is required is a method for people to make transactions without having to deal with any currency at all. Essentially, this indicates you can take your money and make a deal that includes no currency at all. In order to accomplish this, you would require to use something called "cryptocoins". These are little wise contracts that operate on the "blockchain". They are accountable for making the entire deal as safe and secure as possible. Lots of people still aren't rather sure what the "blockchain" is, so this becomes their big concern. Basically, the "blockchain" resembles the Internet with money. Think about it as a journal where anything that's been done is logged in. Any brand-new deals are then added to the ledger. Much like the Internet, there's a great deal of capacity for abuse with the ledger, which is why there's always somebody who's trying to get a piece of it. That's why we require cryptography in order to ensure that the ledger remains safe. The issue with the majority of digital currencies is they have too numerous resemblances with conventional currencies. Even if you understood how to track down all of the different federal governments' currency logs, you still wouldn't be able to figure out their interest rates, their political activities, or even their newest financial reports. By utilizing a digital currency based on cryptography, you'll be able to make secure transactions that will be tough to foil. 
You'll also be able to make sure that you aren't spending more than you should, since there won't be any paper trails left behind. As you understand, governments worldwide are worried about terrorism, which is why they keep a close eye on any kind of transactions that are made online. There are some businesses out there that are working on developing brand-new kinds of cryptography that will be used on the Internet. In the meantime, there are several widely known cryptosystems that you can utilize for now. Some popular examples of these include Zcash, Vitalik, Prypto, and ECDSA. Because the Internet is utilized around the world, you want to make sure that there isn't going to be an issue when sending private messages between your computer systems. That's what it's really all about. When searching for this sort of service, try to find something called a private key service. It's very comparable to what you would use for an ATM, only it's a lot more personal and advanced. The majority of the time, you can get this type of cryptography totally free, however if you're prepared to pay for it, you'll be able to get more security than ever before. This is just one of the numerous functions that come with using this type of system. Even though there are plenty of locations to buy this innovation, you need to make sure that you're dealing with a genuine business that has a great reputation. You don't wish to put your monetary details at risk. Keep in mind that there are plenty of phishing sites out there that will guarantee to let you in on some extremely classified information, just to rob you blind. Find a trusted professional to manage your shopping for ERC Cryptography. What's fantastic about it is that it's been shown to be safe, so it shouldn't be difficult to make the modification from utilizing passwords and codes to making this kind of personal identification system obligatory. 
There's nothing worse than having all of your details taken, isn't it? It's definitely not a very good feeling when someone gets hold of your social security number or other personal information.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.973243236541748, "language": "en", "url": "https://www.insuranceopedia.com/definition/459/principle-of-indemnity", "token_count": 877, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0289306640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:949dc181-c13d-4654-907c-4d58e5ef4fa0>" }
Definition - What does Principle of Indemnity mean? To indemnify someone means to “make someone whole.” The principle of indemnity is one of the fundamental principles of insurance because it is the part of an insurance contract that ensures the insured has the right to compensation and sets limits on how much they can get. The principle of indemnity states that an insurance policy shall not provide compensation to the policyholder that exceeds their economic loss. This limits the benefit to an amount that is sufficient to restore the policyholder to the same financial state they were in prior to the loss. In other words, the principle of indemnity ensures that the insured gets made whole from their loss but will not benefit, gain, or profit from an accident or claim. Nor will you get less than what is necessary to restore you to the same financial position. For example, if you suffer a loss to your home due to a fire and it is estimated that it would cost $50,000 to repair the damage, then that is what you would get from the insurance company subject to limits of insurance selected and other terms and conditions of the insurance policy. If you are underinsured however - as in you did not purchase a high enough limit of insurance to allow yourself to be fully “made whole”, this principle still holds as you are not profiting from your insurance policy. Insuranceopedia explains Principle of Indemnity The principle of indemnity is a central, regulatory principle in insurance that applies to most policies, except personal accident, life insurance, and other similar policies. This exception is because it is impossible to accurately quantify a human life in monetary terms. According to the principle of indemnity, the insured would get enough money to be “made whole” or to return them to the same financial position they were in prior to the loss. 
In other words, they would be compensated based on the actual amount of loss sustained, subject to the limits of insurance selected by the insured and other policy terms and conditions. This basic tenet ensures the policyholder receives an amount in benefits equivalent to their actual losses so they do not make a profit from it. Because of this, it is linked to another central insurance principle, that of insurable interest, as the policyholder cannot receive a sum that goes beyond their insurable interest. Here are some basic examples to help illustrate this principle:

Suppose an insured purchased a limit of insurance of $50,000 on his car and got into a crash. After taking it to a certified body shop, the mechanic estimates it would cost $10,000 to repair the damage and return the car to its original condition. In that case, according to the principle of indemnity, the insured would only be entitled to $10,000 in compensation (or “indemnity”) from the insurer, as that is what is required to return them to their pre-loss financial position. No more, no less. Just because they had purchased $50,000 of insurance does not mean they will get $50,000 in compensation every time. Payment is made by the insurance company based on the actual amount of loss you have sustained.

There are caveats, as the principle of indemnity can be overridden by other terms and conditions. If the insured purchased a limit of $10,000 on his car and got into a crash that is estimated to cost $15,000 to repair, the insured would only be entitled to $10,000 in indemnity from the insurer, even though the principle of indemnity is supposed to guarantee them $15,000 (the amount required to make him whole). This is because the principle of indemnity is subordinate to the limit of insurance purchased and other terms like coinsurance penalties. The rationale behind this principle is to protect insurance companies by eliminating moral hazard.
Insureds won’t be as tempted to commit insurance fraud if there is no way to profit from a claim, since all they would get back is what they lost.
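The payout rule in the examples above reduces to a one-line calculation. A minimal sketch (the function name is ours, not Insuranceopedia's, and it ignores deductibles and coinsurance):

```python
def indemnity_payment(actual_loss, policy_limit):
    """Principle of indemnity: pay the actual loss sustained, capped at the
    limit of insurance purchased -- never more than is needed to make the
    insured whole, and never more than the limit bought."""
    return min(actual_loss, policy_limit)

# The two examples from the article:
print(indemnity_payment(10_000, 50_000))  # loss below the limit -> pay the loss
print(indemnity_payment(15_000, 10_000))  # loss above the limit -> pay the limit
```

Both calls return 10,000, matching the article's point that the limit purchased sets a ceiling, not a guaranteed payout.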
Investment refers to purchasing an asset, keeping funds in a bank account, or giving a loan with the hope of generating future returns. Different investment options offer different risk-return trade-offs. Analyzing the options and understanding the concepts helps an investor build a portfolio that maximizes return while minimizing risk exposure.
Types of investments:
There are several types of investments:
- Cash investments: Savings in certificates of deposit, bank accounts, and treasury bills. These investments pay a low interest rate and are risky options in periods of inflation.
- Debt securities: These provide fixed periodic payments and the possibility of capital appreciation at maturity. They are safer and less risky than equities, though the returns are generally lower than those of other securities.
- Stocks: Purchasing stock means buying ownership in a business, with the right to a share of the profits the company earns. Stocks are riskier and more volatile than bonds.
- Mutual funds: Mutual funds are collections of stocks and bonds. You don’t have to track the investments yourself, because you pay a professional manager to select specific securities for you; this is the first advantage of a mutual fund. There can be stock, bond, or index-based mutual funds.
- Derivatives: Derivatives are financial contracts whose values are derived from underlying assets such as commodities, stocks, and securities. Swaps, futures, and options are different forms of derivatives. They are useful for reducing the risk of loss arising from fluctuations in the value of the underlying assets.
Beyond these, you can also invest in commodities and real estate to earn good returns.
“The safest way to double your money Is to fold it over and put it in your pocket.” Kin’s trick is the quickest way to double your money too. It’s great advice for kids who are too hasty to pull the trigger on spending. But what if your kids already got the memo on delaying gratification and investing patiently? They still might be eager to know when that money might double. That’s when it’s handy to teach your kids the rule of 72. It’s a simple piece of mental math to estimate how long it will take to double your money when earning compound interest. Just divide 72 by the interest rate. For example, if your money is earning an 8% annualized return, it will take roughly 9 years to double. That’s because 72 divided by 8 equals 9. You can also use the rule when you know the desired doubling time but not the interest rate. Just divide 72 by the time instead. Want to double your money in 6 years? You’ll need to earn roughly 12%. (Because 72 divided by 6 equals 12.) In other words, when it comes to doubling your money: Time X Interest = 72 Now, arm your kids with an online interest calculator, and wow them with your mental agility! “Kids, if you enter $100 for present value, double that — or $200 — for future value, and 10 for the number of years, I bet you’ll need... oh let’s see... thinking, thinking... something close to 7.2% for an interest rate. How’d I do?” “Wow, that’s really close Dad. It’s 7.18%! I don’t care what Mom says, you’re pretty smart!” (But beware smarty pants, interest rates in the 6% to 10% range yield the best approximations when using the rule of 72.) Once your kids are suitably impressed, let them in on your little math trick so they can impress their friends. Congratulations. You got kids talking about the fundamental math of investing. Like this tip? Get the next one in your inbox by subscribing here. Want to turn these tips into action? Check out FamZoo.com.
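The rule-of-72 arithmetic in this tip is easy to check with a short script. The function names here are just for illustration, and the exact comparison uses annual compounding:

```python
import math

def rule_of_72_years(rate_percent):
    """Approximate years to double money at a given annual rate."""
    return 72 / rate_percent

def rule_of_72_rate(years):
    """Approximate annual rate needed to double money in a given time."""
    return 72 / years

def exact_years(rate_percent):
    """Exact doubling time under annual compounding."""
    return math.log(2) / math.log(1 + rate_percent / 100)

print(rule_of_72_years(8))         # 9.0 years, as in the 8% example
print(rule_of_72_rate(6))          # 12.0 percent, as in the 6-year example
print(round(exact_years(7.2), 2))  # 9.97 -- close to the rule's estimate of 10
```

The last line shows why the rule works best for rates in the 6% to 10% range: at 7.2% the approximation is off by only a few days.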
In many important economic problems, a variable is maximized subject to constraints. In a subset of such problems, a linear combination of decision variables is maximized subject to linear constraints. The latter subset is amenable to linear programming analysis. Efforts to expand the usefulness of linear programming methods usually involve incorporating nonlinear elements either in the criterion function or in the constraints. Such efforts frequently result in discovering ways to incorporate the nonlinear element in some acceptable linear form, thus retaining the usual linear programming procedure but broadening the researcher's capacity to apply the method to important economic problems (9).1 Discrete programming is a case in point. Discrete programming problems and ordinary linear programming problems are about the same, except that a side condition is imposed that some of the decision variables must take on discrete values, usually nonnegative integers. The resultant, noncontinuous nature of the criterion function or of the constraints places discrete programming in the class of nonlinear programming (10). Sufficient conditions for a solution to discrete programming problems have been known for several years (15). Recently, systematic procedures for solving discrete programming problems have been put forward (14, 16). This paper discusses one of them. Decks and tapes for solving such problems on high-speed computers are not yet abundant, but it would be easy to supply them should the demand arise.
It's almost impossible to think of an industry or organization that does not need qualified professionals to oversee its operations. Managers direct and supervise employees, and often oversee the functioning of the organization they work for, or the part of the organization they are hired to manage. Just a few of many management specialties include:
- Project manager
- Sales manager
- Technology manager
- Purchasing manager
- Administrative manager
- Business manager
- Construction manager
Regardless of the size or type of organization, a good manager uses interpersonal and leadership skills to help an organization and its employees excel.
How to Prepare for a Career in Management
To learn the professional skills you need to excel in the field, a management degree is often highly valuable. For entry-level management positions, a bachelor's or associate's degree is usually a requirement. Senior management positions often call for a master's degree such as an MBA (Master of Business Administration). Most management programs teach skills such as business leadership, communication, leadership development, accounting and administrative practice, and industry-specific management skills, e.g., health service management. Job outlook for managers is often specific to the type of industry one works in. Technology managers, for example, should see faster-than-average job growth, according to the U.S. Bureau of Labor Statistics, because of the growth of computer and information systems as an industry. Some other fast-growing industries for managers include:
- Medical and Health Services Management (Average annual salary: $88,750)
- Human Resources, Training, and Labor Management (Average annual salary: $58,230)
- Construction Management (Average annual salary: $89,770)
Having a well-rounded management education can prepare you to confidently make the transition from student to professional.
Don't wait to earn your management degree--before you know it, you could rule the boardroom.
Being more than just a type of metal or paper, money is an element of a trust-based relationship between the creditor and debtor. In this respect, rather than being a measure that assesses the value of a commodity, money is a measure of trust in other people. Essentially, all monetary systems are based on trust, and today's coins and banknotes are not actually valued according to a standard established on precious metals. While states render banknotes trustworthy and financial regulations make electronic funds credible, crypto-currencies derive the same trust from the blockchain technology, which offers security to the increasingly popular crypto-currencies. Developments in technology, international trade and the globalization of finance brought innovations for money. With the emergence of the electronic fund, money began to be transferred through electronic means, known as Electronic Fund Transfer (EFT). EFT was then followed by the credit card and Automatic Teller Machine (ATM) technologies. With the onset of the new millennium, transactions via credit and debit cards increasingly replaced cash, and together with new technological developments this rendered money increasingly digital and virtual. Today, banknotes have largely been replaced by electronic funds – two thirds of personal transactions in the United States are conducted through credit and debit cards. As money has entered the cyber world, various electronic payment methods have emerged. Electronic funds and methods of payment, such as the credit card, PayPal, e-cash, electronic check, mobile payment, eScrip, IPIN, PcPay, and First Virtual, have increased in both volume and variety. This spread of electronic funds created issues such as information security and high transaction costs. In this respect, people need systems that are secure, low-cost and swift in order to handle their transactions freely.

The blockchain technology and cryptocurrencies

There are various explanations for the rise of cryptocurrencies.
The most significant of these is the decline in trust in central banks and other financial institutions following the 2008 Financial Crisis. The 2008 crisis brought many changes to the global financial system – national stock markets collapsed, credit-rating agencies lost their reputations, financial institutions and companies were on the verge of bankruptcy, and certain worldwide banks went bankrupt. Most important of all, the public perception of the finance sector slipped, leading to increased distrust towards banks. As a result of this distrust, Bitcoin was proposed as an alternative to powerful reserve currencies such as the U.S. Dollar and the Euro. The crypto-currency Bitcoin was first described in a paper written by Satoshi Nakamoto, titled ‘Bitcoin: A Peer-to-Peer Electronic Cash System.’ The timing of this technical analysis of an alternative currency was intriguing, as it appeared just when, following the crisis, people were distrustful of the financial world. In the paper, Nakamoto defines Bitcoin as a peer-to-peer electronic cash system that relies on cryptography. Nakamoto criticizes the intermediary services provided by banks and argues that banks are not required for trade given the growth of electronic commerce. While the paper technically explains how the blockchain infrastructure was developed and how it functions, it also attempts to provide a new perspective on how to resolve the post-crisis issue of trust through technology. Even though crypto-currencies were being discussed prior to 2009, the introduction of the blockchain technology made them more appealing. So, what is blockchain? It is a distributed database in which all records are connected to each other through cryptographic elements (hash functions). Not being a centralized system, all data is stored by the users integrated into the system.
Blockchain, the distributed database that tracks encrypted transactions, is a kind of digital ledger known as Distributed Ledger Technology (DLT). With blockchain, crypto-currencies remove the intermediary and replace the prerequisite of mutual trust with a mathematically precise technology, thus establishing a trust mechanism. The trust here is actually in the security of crypto-currencies backed by the blockchain technology. Besides not requiring an intermediary and being transparent, the most prominent aspect of the blockchain technology is its high security. In this respect blockchain possesses great potential, as it accelerates transactions, decreases costs, increases security and expands operational capacities. Acting as a digital registry without a central storage location, blockchain is a valuable database that can be used for various purposes, including the storage of property ownership documents, birth and death certificates, and the management of smart contracts and financial documents. Blockchain provides an unprecedented opportunity for its users to control their digital identity. A transparent global registry, blockchain is used for various purposes such as securing, managing and storing records in numerous fields. The facilities it provides for digital identity render blockchain the key to the trust economy. In this respect, blockchain offers the same advantages to businesses as well. For this reason, some argue that blockchain will rock the markets and will be at the center of the fourth industrial revolution.
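The hash-linking described above, where each record carries a hash of its predecessor, can be illustrated with a deliberately minimal sketch. This shows only the chaining idea, not a real distributed system: there is no networking, consensus or mining here, and the data strings are invented:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a record whose hash covers both its own data and the hash of
    the previous block, so the records form a tamper-evident chain."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every hash and check every back-link; altering any earlier
    block invalidates everything after it."""
    for i, block in enumerate(chain):
        body = json.dumps({"data": block["data"], "prev": block["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a three-block chain, then tamper with the middle record.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
print(verify_chain(chain))   # True
chain[1]["data"] = "Alice pays Bob 500"
print(verify_chain(chain))   # False: the stored hash no longer matches
```

This is the property the article calls tamper-evidence: because each hash depends on the previous one, rewriting history means recomputing every subsequent block, which a distributed network of verifiers would reject.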
Similarly, South Korea struck a deal with Samsung for blockchain-based technologies, and Hong Kong and Singapore announced that they were going to use blockchain technology in order to resolve their trade issues and to unify trade platforms. Various steps are being taken in order to integrate multiple international sectors with blockchain technology. In order to facilitate international trade, IBM is attempting to establish a digital trade consortium through blockchain by including the largest banks of Europe, such as Deutsche Bank, HSBC, KBC, Natixis, Rabobank, Societe Generale and Unicredit. IBM's interest in this technology continues, as it has announced a partnership with UBS, Bank of Montreal, CaixaBank, Erste Group and Commerzbank for the establishment of a blockchain-based trade finance platform. These kinds of developments indicate that the finance sector will quickly adapt to future digital technologies such as blockchain.

Bitcoin and the future of virtual currencies

Blockchain's popularity increased in tandem with the crypto-currency Bitcoin. Expanding its volume and market share tremendously after its inception in 2009, Bitcoin has become a phenomenon in itself in recent years. Bitcoin, the most valuable crypto-currency, increased in value from $970 to $20,000 in 2017 alone. Today, Bitcoin's share of the crypto-currency market is around 45% and its volume is around $265 billion. The crypto-currency market is worth around $600 billion in total. Following the introduction of Bitcoin to the market and the ever increasing demand for it, hundreds of other virtual currencies have emerged. Crypto-currencies other than Bitcoin, such as Ethereum, Ripple, Bitcoin Cash, Litecoin, Cardano, IOTA, and Dash, are defined as alt-coins (alternative coins). As of late 2017, there are 1,367 crypto-currencies in the market.
However, Bitcoin is still the most discussed crypto-currency, not only because it has the largest share of the market but also because it is accepted by certain firms and companies. Technological developments may allow the digitalization of money to take a different turn. The risks and opportunities posed by Bitcoin will not be observed by governments alone. Despite its high profit margin, Bitcoin is a volatile currency prone to speculation and can be used for money laundering and other illegal transactions. As observed in the case of Bitcoin, crypto-currencies are an attractive and lucrative means of transaction due to their swiftness, low costs and minimal risks. Even though they are new to the market, they have no legal status and therefore can be used for illegal activities. In this sense, crypto-currencies possess the potential to become an alternative means of investment or even an alternative currency. Indeed, it is argued that blockchain, the underlying element of digital currencies, will have the most impact on our lives in the near future, surpassing social media, big data, robots and artificial intelligence. Some even claim that its effect will be the “real” revolution. It is becoming compulsory for states to implement legal regulations that will enable them to adapt to this level of change. As is known, crypto-currencies already have a stock market. Some countries treat virtual currencies as a commodity and tax profits gained from their transactions. It could be asserted that governments will attempt to control digital currencies through blockchain technology in the upcoming years. For this reason, instead of dismissing them as a hoax, Turkey should evaluate the possible advantages and risks of digital currencies by observing their usage, while enacting laws to protect individuals.
Understanding The Laws Of Supply And Demand

Before I went to college, I thought there was one law of supply and demand. There are actually two laws: one law of supply, and one law of demand. Supply and demand may be interrelated, but when you look at the laws you are looking from two different viewpoints: one as the consumer and one as the supplier. If you sell a product that is in high demand, you can sell it for a higher price without an immediate decline in sales. But the higher you go in price, the more demand will drop off, until it drops off a cliff (i.e. no sales). You will want to price where you make the most overall profit, which will be the point where profit per item times units sold is the greatest amount. For some items, a small change in price will eliminate sales; for others it takes a large change in price for sales to drop off. This difference is called the elasticity of demand. If you are the purchaser or consumer of an item, supply will tend to drive your price. The more different vendors you can buy from, the greater the supply of an item; your ability to substitute another item will also give you leverage to get a better price. As a reseller of an item, your best bet is to buy an item in plentiful supply to you that is in great demand in the marketplace; so much so that it outpaces the retail supply. If it is difficult for other businesses to enter this market and sell to the consumer, so much the better for businesses already servicing the market. If you can be in a market where all of these things happen, you'll have a perfect storm that will create a whirlwind of profit. However, this is pretty rare, so don't hold your breath… you just want to look for the best possible situations that work in your favor.
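The "profit per item times units sold" rule above is easy to check numerically. A minimal sketch with a made-up demand schedule (all numbers are illustrative, not from the post):

```python
def best_price(demand_schedule, unit_cost):
    """Return the candidate price that maximizes total profit, where
    profit = (price - unit_cost) * units sold at that price."""
    return max(demand_schedule,
               key=lambda price: (price - unit_cost) * demand_schedule[price])

# Hypothetical demand schedule: units sold at each candidate price.
demand = {8: 100, 10: 80, 12: 50}
print(best_price(demand, unit_cost=5))  # 10: profits are 300, 400, 350
```

Note that the middle price wins even though it sells fewer units than the lowest price; this is the trade-off the elasticity discussion is pointing at.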
Advanced simulations helped companies spanning a range of industries discover new ways to contribute to sustainable manufacturing. Over the past decade and more, achieving more sustainable product designs has become a top priority for companies around the globe. The drivers behind this newfound focus are many. Some have been motivated by a concern for the environment and a desire to reduce their company’s negative impact. Others recognize the economic benefits that come from reducing waste in their production process and creating more efficient designs. And others still are aware of the consumer-driven demand for greener products. For most, the interest in sustainability is a combination of all three. However, as much as companies may want to achieve more sustainable benchmarks, it is harder to actually get there—and to demonstrate you have done so with reliable data. This is where CAE simulation becomes invaluable. Using advanced computer simulations, companies can test whether their designs have met their sustainability goals before those designs enter the prototyping stage. They can also use these simulations to demonstrate the improved sustainability of their products to key stakeholders, including board members, consumers, and regulators. CAE simulation has already been used across many industries to drive improvements in sustainability. Here’s how. 1. Improvements in the design of green energy technology. One of the most high-profile green initiatives is the move away from fossil fuels and toward green energy solutions, including wind, solar, and hydroelectric. This has led to a higher demand not just for the manufacture of more green energy stations, but of designs that better capture and store energy. CAE has an impressive track record in aiding the development of new energy sources. 
While CFD simulation can be applied to the development of both wind and hydropower turbines, Abaqus has long been used to strengthen the nuclear plants against earthquakes and improve the structural integrity of battery assemblies under a variety of stress conditions. 2. Increased efficiency of electric vehicles. The growing market in electric vehicles has been another highly publicized advancement in sustainable technology, spurred on by the rise of Tesla in the automotive market. Improvements in battery technology have broken down several of the factors limiting wider adoption of electric vehicles, including more efficient battery storage, which has increased the range of electric vehicles, and better optimization of on-board systems to reduce their drain on battery life. As might be expected, EM simulation has played a key role in helping engineers improve electric motor design, but thermal simulation and analysis has also been invaluable in resolving design challenges related to the heating and cooling of EV components. Chemical simulations have helped improve the energy storage of electric batteries and fuel cells, while FEA simulations have allowed engineers to test the integrity of new designs under extreme stress conditions. 3. Reduction of material consumption across industries. Creating a more eco-friendly product sometimes comes with increased costs, but one area in which economics and the environment are perfectly aligned is in reducing the raw materials required for manufacturing. However, finding ways to reduce waste in a product design is a challenge, especially when doing so can’t compromise the quality and durability of the end result. CAE simulations can test the strength and durability of a design, ensuring that changes to the construction of a component don’t result in unacceptable quality failures. 4. Development of new sustainable advanced materials. 
Material development is a difficult but promising field of engineering focused on developing new alloys, nanomaterials, plastics, foams, rubbers, and composites that can be used in new and innovative ways. Many of these materials have potentially unique physical properties that engineers could use to resolve various design challenges. The development of biomaterials can also aid sustainability efforts by reducing the environmental burden of various products. CAE is crucial to the development of new materials by helping engineers test their behavior in various scenarios. Nonlinear FEA, in particular, is especially necessary in guiding engineers toward the right choices in material properties and applications. Nonlinear FEA solutions (of which Abaqus is a known leader) offer an advantage in this field, as they provide accurate simulation of extreme use cases, such as high heat and high stress. 5. Creating of efficient aircraft designs with reduced carbon footprints. Finally, despite the move toward electric vehicles, fossil fuels are still necessary for many vehicle designs—especially aircraft. So long as electric airplanes remain largely theoretical, the emphasis in the aerospace industry has been toward more designs that consume less fuel and are more streamlined, so as to have a lower carbon footprint. CFD and FEA analysis can help engineers identify ways in which the design of an aircraft is contributing to drag or turbulence – and decide what to do about it. FEA also aids in lightweighting a design to reduce the weight of aircraft components, including structurally critical ones, through both redesign and material substitution. These lead to improvements in overall range and fuel efficiency while potentially reducing manufacturing costs. We can help you use CAE simulation to achieve more sustainable designs. 
Although many companies want to improve the sustainability of their products—for economic, environmental, and social reasons—not all have the in-house simulation resources at hand to do so. Those that do have engineers devoted to CAE simulation may not have the bandwidth to handle their full workload. In either case, we are ready to help. Our engineers can work with your team to run simulation models that will show how a design behaves under various conditions, including whether it is meeting the sustainability targets that your company is aiming for. Our expertise with Abaqus can also help create designs that can withstand mechanical stressors from the environment, resulting in more durable, longer-lasting energy sources. Furthermore, our team of engineers is comprised of experts in diverse fields who can bring their knowledge to your projects. If you would like to learn more about the skills our team of engineers can bring to your company, contact us today.
Here are the answers with discussion for this Weekend’s Quiz. The information provided should help you work out why you missed a question or three! If you haven’t already done the Quiz from yesterday then have a go at it before you read the answers. I hope this helps you develop an understanding of Modern Monetary Theory (MMT) and its application to macroeconomic thinking. Comments as usual welcome, especially if I have made an error. Central banks provide reserves to the commercial banking system usually at some penalty rate. However, this compromises their capacity to target a given monetary policy rate. The answer is True. The facts are as follows. First, central banks will always provide enough reserve balances to the commercial banks at a price it sets using a combination of overdraft/discounting facilities and open market operations. Second, if the central bank didn’t provide the reserves necessary to match the growth in deposits in the commercial banking system then the payments system could be impaired and there would be significant hikes in the interbank rate of interest and a wedge between it and the policy (target) rate – meaning the central bank’s policy stance becomes compromised. Third, any reserve requirements within this context while legally enforceable (via fines etc) do not constrain the commercial bank credit creation capacity. Central bank reserves (the accounts the commercial banks keep with the central bank) are not used to make loans. They only function to facilitate the payments system (apart from satisfying any reserve requirements that might be in place). Fourth, banks make loans to credit-worthy borrowers and these loans create deposits. If the commercial bank in question is unable to get the reserves necessary to meet the clearing requirements from other sources (other banks etc) then the central bank has to provide them. 
But the process of gaining the necessary reserves is a separate and subsequent bank operation to that involved in the deposit creation (via the loan).

Fifth, if there were too many reserves in the system (relative to the banks' desired levels to facilitate the payments system and the required reserves) then competition in the interbank (overnight) market would drive the interest rate down. This competition would be driven by banks holding surplus reserves (relative to their requirements) trying to lend them overnight. The opposite would happen if there were too few reserves supplied by the central bank. Then the chase for overnight funds would drive rates up.

In both cases the central bank would lose control of its current policy rate as the divergence between it and the interbank rate widened. This divergence can snake between the rate that the central bank pays on excess reserves (this rate varies between countries and over time but before the crisis was zero in Japan and the US) and the penalty rate that the central bank charges for providing the commercial banks access to the overdraft/discount facility.

So the aim of the central bank is to issue just as many reserves as are required by law and to meet the banks' own desires.

Now the question seeks to link the penalty rate that the central bank charges for providing reserves to the banks and the central bank's target rate. The wider the spread between these rates, the more difficult it becomes for the central bank to ensure the quantity of reserves is appropriate for maintaining its target (policy) rate. Where this spread is narrow, central banks "hit" their target rate each day more precisely than when the spread is wider.

So if the central bank really wanted to put the screws on commercial bank lending via increasing the penalty rate, it would have to be prepared to lift its target rate in close correspondence. In other words, its monetary policy stance becomes beholden to the discount window settings.
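The corridor mechanics described above can be sketched in a few lines of Python. This is a toy illustration only: the linear response to the reserve position, the sensitivity parameter and the rates are all assumed numbers, not any actual central bank's settings.

```python
# Stylized interest-rate corridor: the overnight rate is floored by the
# support rate paid on excess reserves and capped by the penalty
# (discount-window) rate. The linear response to the reserve position is
# an illustrative assumption.

def overnight_rate(target, support, penalty, surplus, sensitivity=0.5):
    """Surplus reserves (surplus > 0) push the rate below target toward
    the floor; shortages (surplus < 0) push it up toward the ceiling."""
    rate = target - sensitivity * surplus
    return max(support, min(penalty, rate))

# Narrow corridor: even a big miss in reserve supply keeps the overnight
# rate within 0.25 points of the 3.0 target.
print(overnight_rate(3.0, 2.75, 3.25, surplus=10))   # 2.75
# Wide corridor: the same miss drags the rate far from target.
print(overnight_rate(3.0, 0.0, 6.0, surplus=10))     # 0.0
```

This is why a wide spread between the support and penalty rates makes precise rate targeting harder: the same error in the quantity of reserves supplied produces a much larger deviation of the overnight rate from target.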
The best answer was True because the central bank cannot operate with wide divergences between the penalty rate and the target rate and it is likely that the former would have to rise significantly to choke private bank credit creation.

You might like to read these blog posts for further information:

- Money multiplier and other myths
- Money multiplier – missing feared dead
- 100-percent reserve banking and state banks
- US federal reserve governor is part of the problem
- Building bank reserves will not expand credit
- Building bank reserves is not inflationary

If the real interest rate (difference between nominal interest rate and inflation) is constant, then a currency-issuing government, which matches its net spending $-for-$ with debt issuance, could double its fiscal deficit without pushing up the public debt ratio.

The answer is True.

Again, this question requires a careful reading and a careful association of concepts to make sure they are commensurate. There are two concepts that are central to the question: (a) a rising fiscal deficit – which is a flow and not scaled by GDP in this case; and (b) a rising public debt ratio – which by construction (as a ratio) is scaled by GDP. So the two concepts are not commensurate although they are related in some way. A rising fiscal deficit does not necessarily lead to a rising public debt ratio.

You might like to refresh your understanding of these concepts by reading this blog – Saturday Quiz – March 6, 2010 – answers and discussion.

While mainstream macroeconomics thinks that a sovereign government is revenue-constrained and is subject to the 'government budget constraint', Modern Monetary Theory (MMT) places no particular importance in the public debt to GDP ratio for a sovereign government, given that insolvency is not an issue.
The mainstream framework for analysing the so-called "financing" choices faced by a government (taxation, debt-issuance, money creation) – the 'government budget constraint' – is written as:

G + iB(t-1) - T = ΔB + ΔH

Which you can read in English as saying that the Budget deficit = Government spending + Government interest payments – Tax receipts must equal (be "financed" by) a change in Bonds (B) and/or a change in high powered money (H). The triangle sign Δ (delta) is just shorthand for the change in a variable.

Remember, this is merely an accounting statement. In a stock-flow consistent macroeconomics, this statement will always hold. That is, it has to be true if all the transactions between the government and non-government sector have been correctly added and subtracted.

So from the perspective of MMT, the previous equation is just an ex post accounting identity that has to be true by definition and has no real economic importance. For the mainstream economist, the equation represents an ex ante (before the fact) financial constraint that the government is bound by. The difference between these two conceptions is very significant and the second (mainstream) interpretation cannot be correct if governments issue fiat currency (unless they place voluntary constraints on themselves to act as if it is). That interpretation is inapplicable (and wrong) when applied to a sovereign government that issues its own currency.

But the accounting relationship can be manipulated to provide an expression linking deficits and changes in the public debt ratio. The following equation expresses the relationships above as proportions of GDP:

Δb = (r - g)b(t-1) + (G - T)/Y

So the change in the debt ratio is the sum of two terms on the right-hand side: (a) the difference between the real interest rate (r) and the GDP growth rate (g) times the initial debt ratio; and (b) the ratio of the primary deficit (G-T) to GDP. A primary fiscal balance is the difference between government spending (excluding interest rate servicing) and taxation revenue.
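The debt-ratio relationship above is easy to verify with a short simulation. This is a sketch only – the function name and the numbers are illustrative, not taken from the original post.

```python
# Iterate the debt dynamics: change in b = (r - g)*b + d, where b is the
# public debt ratio, r the real interest rate, g the real GDP growth
# rate and d the primary deficit as a share of GDP (all as decimals).

def debt_ratio_path(b0, r, g, d, periods):
    """Trace the public debt ratio period by period."""
    path = [b0]
    for _ in range(periods):
        b = path[-1]
        path.append(b + (r - g) * b + d)
    return path

# With r = 0, growth of 2% and a primary deficit of 2% of GDP, a debt
# ratio starting at 100% simply stays where it is:
path = debt_ratio_path(b0=1.0, r=0.0, g=0.02, d=0.02, periods=10)
print(path[-1])   # stays at (approximately) 1.0
```

With the same 100 per cent starting ratio, doubling the primary deficit to 4 per cent while growth also rises to 4 per cent again leaves the ratio unchanged.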
The real interest rate is the difference between the nominal interest rate and the inflation rate. If inflation is maintained at a rate equal to the interest rate then the real interest rate is constant.

A growing economy can absorb more debt and keep the debt ratio constant or falling. From the formula above, if the primary fiscal balance is zero, public debt increases at a rate r but the public debt ratio increases at r – g.

So if r = 0 and g = 2 per cent then, given an initial debt ratio of 100 per cent, the primary deficit ratio could equal 2 per cent (of GDP) and the public debt ratio would be unchanged. Doubling the primary deficit to 4 per cent would require g to rise to 4 for the public debt ratio to remain unchanged. That is entirely possible. So a nation running a primary deficit can obviously reduce its public debt ratio over time or hold it constant if growth is stimulated.

Further, you can see that even with a rising primary deficit, if output growth (g) is sufficiently greater than the real interest rate (r) then the debt ratio can fall from its value last period. Furthermore, depending on contributions from the external sector, a nation running a deficit will more likely create the conditions for a reduction in the public debt ratio than a nation that introduces an austerity plan aimed at running primary surpluses.

Clearly, the real growth rate has limits and that would limit the ability of a government (that voluntarily issues debt) to hold the debt ratio constant while expanding its fiscal deficit as a proportion of GDP.

The following blog may be of further interest to you:

Assume that inflation is stable, there is excess productive capacity, and the central bank maintains its current interest rate target. If on average the government collects an income tax of 20 cents in the dollar, then total tax revenue will rise by 0.20 times $X if government spending increases (once and for all) by $X dollars and private investment and exports remain unchanged.

The answer is False.
This question relates to the concept of a spending multiplier and the relationship between spending injections and spending leakages. It is designed to help you think about how the automatic stabilisers linked to tax revenue respond to growth. We have made the question easy by assuming that only government spending changes (exogenously) in period one and then remains unchanged after that – that is, a once and for all increase.

Aggregate demand drives output which then generates incomes (via payments to the productive inputs). Accordingly, what is spent will generate income in that period which is available for use. The uses are further consumption; paying taxes and/or buying imports.

We consider imports as a separate category (even though they reflect consumption, investment and government spending decisions) because they constitute spending which does not recycle back into the production process. They are thus considered to be "leakages" from the expenditure system. So if, for every dollar produced and paid out as income, the economy imports around 20 cents in the dollar, then only 80 cents is available within the system for spending in subsequent periods, excluding taxation considerations.

However there are two other "leakages" which arise from domestic sources – saving and taxation. Take taxation first. When income is produced, the households end up with less than they are paid out in gross terms because the government levies a tax. So the income concept available for subsequent spending is called disposable income (Yd). In the example we assumed an average tax rate of 20 cents in the dollar is levied (which is equivalent to a proportional tax rate of 0.20). So if $100 of new income is generated, $20 goes to taxation and Yd is $80 (what is left). So taxation (T) is a "leakage" from the expenditure system in the same way as imports are. You were induced to think along those lines.

The relevant issue to resolve though is – What is the new income generated?
The concept of the spending multiplier tells us that the final change in income will exceed the initial injection (in the question $X dollars).

Finally consider saving. Households (consumers) make decisions to spend a proportion of their disposable income. The amount of each dollar they spend at the margin (that is, how much of every extra dollar they consume) is called the marginal propensity to consume. If that is 0.80 then they spend 80 cents in every dollar of disposable income. So if total disposable income is $80 (after taxation of 20 cents in the dollar is collected) then consumption (C) will be 0.80 times $80 which is $64 and saving will be the residual – $16. Saving (S) is also a "leakage" from the expenditure system.

It is easy to see that for every $100 produced, the income that is generated and distributed results in $64 in consumption, but once taxes ($20), saving ($16) and imports ($20) are accounted for, only $44 cycles back into spending on domestic output – the remaining $56 constitutes leakages which do not cycle back into spending. For income to remain at the higher level (after the extra $100 is created) in the next period, the $56 has to be made up by what economists call "injections", which in these sorts of models comprise the sum of investment (I), government spending (G) and exports (X). The injections are seen as coming from "outside" the output-income generating process (they are called exogenous or autonomous expenditure variables). For GDP to be stable, injections have to equal leakages (this can be converted into growth terms to the same effect).

The national accounting statements that we have discussed previously – such that the government deficit (surplus) equals $-for-$ the non-government surplus (deficit), and those that decompose the non-government sector into the external and private domestic sectors – are derived from these relationships.

So imagine there is a certain level of income being produced – its value is immaterial.
Imagine that the central bank sees no inflation risk and so interest rates are stable, as are exchange rates (these simplifications are to eliminate unnecessary complexity). The question then is: what would happen if government increased spending by, say, $100? This is the terrain of the multiplier. If aggregate demand increases drive higher output and income increases, then the question is by how much?

The spending multiplier is defined as the change in real income that results from a dollar change in exogenous aggregate demand (so one of G, I or X). We could complicate this by having autonomous consumption as well but the principle is not altered.

So the starting point is to define the consumption relationship. The simplest is a proportional relationship to disposable income (Yd). So we might write it as C = c*Yd – where little c is the marginal propensity to consume (MPC), or the fraction of every dollar of disposable income consumed. We will use c = 0.8. The * sign denotes multiplication. You can do this example in a spreadsheet if you like.

Our tax relationship is already defined above – so T = tY. The little t is the marginal tax rate which in this case is the proportional rate (assume it is 0.2). Note here taxes are taken out of total income (Y) which then defines disposable income. So Yd = (1-t) times Y, or Yd = (1-0.2)*Y = 0.8*Y.

If imports (M) are 20 per cent of total income (Y) then the relationship is M = m*Y, where little m is the marginal propensity to import, meaning the economy will increase imports by 20 cents for every real GDP dollar produced.

If you understand all that then the explanation of the multiplier follows logically. Imagine that government spending went up by $100 and the change in real national income is $179. Then the multiplier is the ratio (denoted k) of the Change in Total Income to the Change in government spending. Thus k = $179/$100 = 1.79.
This says that for every dollar the government spends total real GDP will rise by $1.79 after taking into account the leakages from taxation, saving and imports. When we conduct this thought experiment we are assuming the other autonomous expenditure components (I and X) are unchanged. But the important point is to understand why the process generates a multiplier value of 1.79.

Here is a spreadsheet table I produced as a basis of the explanation. You might want to click it and then print it off if you are having trouble following the period by period flows.

So at the start of Period 1, the government increases spending by $100. The Table then traces out the changes that occur in the macroeconomic aggregates that follow this increase in spending (an "injection" of $100). The total change in real GDP (Column 1) will then tell us the multiplier value (although there is a simple formula that can compute it). The parameters which drive the individual flows are shown at the bottom of the table. Note I have left out the full period adjustment – only showing up to Period 12. After that the adjustments are tiny until they peter out to zero.

Firms initially react to the $100 order from government at the beginning of the process of change. They increase output (assuming no change in inventories) and generate an extra $100 in income as a consequence, which is the $100 change in GDP in Column (1). The government taxes this income increase at 20 cents in the dollar (t = 0.20) and so disposable income only rises by $80 (Column 5).

There is a saying that one person's income is another person's expenditure and so the more the latter spends the more the former will receive and spend in turn – repeating the process. Households spend 80 cents of every disposable dollar they receive which means that consumption rises by $64 in response to the rise in production/income. Households also save $16 of disposable income as a residual.
Imports also rise by $20 given that every dollar of GDP leads to a 20 cent increase in imports (by assumption here) and this spending is lost from the spending stream in the next period.

So the initial rise in government spending has induced new consumption spending of $64. The workers who earned that income spend it and the production system responds. But remember $20 was lost from the spending stream via imports so the second period spending increase is $44. Firms react and generate an extra $44 to meet the increase in aggregate demand.

And so the process continues with each period seeing a smaller and smaller induced spending effect (via consumption) because the leakages are draining the spending that gets recycled into increased production. Eventually the process stops and income reaches its new "equilibrium" level in response to the step-increase of $100 in government spending. Note I haven't shown the total process in the Table; the final totals are the actual final totals.

If you check the total change in leakages (S + T + M) in Column (6) you see they equal $100, which matches the initial injection of government spending. The rule is that the multiplier process ends when the sum of the change in leakages matches the initial injection which started the process off.

You can also see that the initial injection of government spending ($100) stimulates an eventual rise in GDP of $179 (hence the multiplier of 1.79) and consumption has risen by 114, Saving by 29 and Imports by 36. The total tax take is thus $36 after the multiplier process is exhausted. For those who are familiar with algebra, the total change in tax revenue is equal to 0.2*1.79*$X, which in English says it equals the tax rate times the multiplied initial change in aggregate demand.

So while the overall rise in nominal income is greater than the initial injection as a result of the multiplier, that income increase produces leakages which sum to that exogenous spending impulse.
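The period-by-period process traced in the Table can be reproduced with a short simulation. A sketch only: the parameter values match the example above, but the function name and the looping scheme are illustrative.

```python
# Reproduce the spreadsheet multiplier example: c is the marginal
# propensity to consume, t the proportional tax rate and m the marginal
# propensity to import.
c, t, m = 0.8, 0.2, 0.2

def total_income_change(injection, periods=200):
    """Sum the induced income round by round after a one-off injection."""
    total, spending = 0.0, injection
    for _ in range(periods):
        total += spending              # firms produce to meet the demand
        spending *= c * (1 - t) - m    # only $0.44 per dollar recycles
    return total

total = total_income_change(100.0)
k = total / 100.0
print(round(k, 2))          # 1.79
print(round(t * total))     # 36, the total tax take
```

The closed-form multiplier 1/(1 - c*(1-t) + m) = 1/0.56 gives the same 1.79.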
At that point, the income expansion ceases.

The following blog posts may be of further interest to you:

That is enough for today!

(c) Copyright 2021 William Mitchell. All Rights Reserved.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.920098066329956, "language": "en", "url": "https://gillettestore.com/qa/quick-answer-what-are-security-processes.html", "token_count": 808, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.014892578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b1f6ec18-8541-4221-aeb7-3909b757171a>" }
What is difference between policy and procedure?

Policies set some parameters for decision-making but leave room for flexibility. They show the "why" behind an action. Procedures, on the other hand, explain the "how." They provide step-by-step instructions for specific routine tasks. They may even include a checklist or process steps to follow.

What is a security guideline?

Standards and baselines describe specific products, configurations, or other mechanisms to secure the systems. These are areas where recommendations are created as guidelines for the user community as a reference for proper security. For example, your policy might require a risk analysis every year.

What are the 7 layers of security?

7 Layers of Security:
- Information Security Policies. These policies are the foundation of the security and well-being of our resources.
- Physical Security.
- Secure Networks and Systems.
- Vulnerability Programs.
- Strong Access Control Measures.
- Protect and Backup Data.
- Monitor and Test Your Systems.

Why is security important?

Effective and reliable workplace security is very important to any business because it reduces insurance, compensation, liabilities, and other expenses that the company must pay to its stakeholders, ultimately leading to increased business revenue and a reduction in operational charges incurred.

What is security concept?

Three basic information security concepts important to information are Confidentiality, Integrity, and Availability.
If we relate these concepts with the people who use that information, then it will be authentication, authorization, and non-repudiation.

What are security procedures and guidelines?

Standards and safeguards are used to achieve policy objectives through the definition of mandatory controls and requirements. Procedures are used to ensure consistent application of security policies and standards. Guidelines provide guidance on security policies and standards.

What are the two types of security?

Types of Securities:
- Equity securities. Equity almost always refers to stocks and a share of ownership in a company (which is possessed by the shareholder).
- Debt securities. Debt securities differ from equity securities in an important way; they involve borrowed money and the selling of a security.
- Derivatives.

What is the full name of security?

Full form of Security is: S-Sensible, E-Efficient in work, C-Clever, U-Understanding, R-Regular, I-Intelligent, T-Talent, Y-Young.

What is policy and guidelines?

Clarifying the difference between guidelines vs policies helps employees understand expectations. Simply put, guidelines are general, non-mandatory recommendations. Policies are formalized statements that apply to a specific area or task. Policies are mandatory – employees who violate a policy may be disciplined.

What are the 3 types of security?

There are three primary areas or classifications of security controls. These include management security, operational security, and physical security controls.

What are the types of security?

Security is a financial instrument that can be traded between parties in the open market. The four types of security are debt, equity, derivative, and hybrid securities.

What are the five aspects of security?

Security isn't a tangible property either; it's an umbrella term for a whole class of goals.
Rather, privacy, authentication, identification, trust, and verification — mechanisms of applied cryptography — are what provide the most commonly desired types of security.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.7514294981956482, "language": "en", "url": "https://kaoshi.china.com/wangxiao/koolearn/news/134008.htm", "token_count": 2578, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.412109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:13f3bd3f-5ab6-4aae-aee9-86dcc63fb038>" }
Published: 2016-06-24 | Source: New Oriental Online (新东方在线) | Publisher:

1. Markers of inference questions: infer, imply, suggest; All of the following/statements … NOT true/correct/mentioned EXCEPT.

It looks/sounds like/as if: something looks or sounds a certain way, but in fact is not so. For example, the opening sentence of the syllabus sample passage (1997 exam, Passage 5): "Much of the language used to describe monetary policy, such as 'steering the economy to a soft landing' or 'a touch on the brakes', makes it sound like a precise science." The underlined phrase literally means "makes it sound like a precise science"; what the author actually means is that monetary policy is not a precise science.

Subjunctive mood: the subjunctive rests on a so-called counterfactual assumption; that is, the author's intended meaning is the opposite of the literal meaning. For example, the final sentence of the 1996 exam, Passage 5: "And so it does - and all would be well were reason the only judge in the creationism/evolution debate." The literal meaning is "if reason were the only judge in the creationism/evolution debate, then all would be well"; what the author actually means is that reason is not the only judge, and the situation today is not well.

Concessive argument: a concessive argument first assumes that the opposite of the author's view holds, derives a series of absurd, unreasonable consequences from it, and then turns this around to reconfirm the correctness of the author's own view. Because the assumption only appears to hold in the course of the argument and in fact does not, the literal meaning and the intended meaning are again opposites. For example, from the first paragraph of the syllabus sample passage (1997, Passage 5): "Hence the analogy that likens the conduct of monetary policy to driving a car with a blackened windscreen, a cracked rearview mirror and a faulty steering wheel." If monetary policy is likened to driving a car (the passage has already argued that it cannot be – here the opposite of the author's view is assumed to hold), then you are driving a car whose front windscreen is blackened, whose rearview mirror is cracked and whose steering wheel is broken (an absurd, unreasonable outcome). This argues, in reverse, that monetary policy cannot be likened to driving a car.

Quotation marks: quotation marks can serve an ironic function. For example, in the 1996 exam, Passage 5: "'Scientific' creationism, which is being pushed by some for 'equal time' in the classrooms whenever the scientific accounts of evolution are given, is based on religion, not science." The quotation marks signal so-called science; what the author means is that creationism is not scientific.

Rhetorical question: a rhetorical question is also a way of meaning the opposite of what is said. For example, from the first paragraph of the 2005 exam, Passage 2: "That the evidence was inconclusive, the science uncertain?" Literally this asks, "the evidence is inconclusive, so the science is uncertain too?"; clearly what the author means is that the science is certain, not the literal uncertain.

Cultural background: in certain specific cultural contexts, the author's intended meaning is the opposite of the literal meaning. For example, the first paragraph of the 2001 exam, Passage 5: "A lateral move that hurt my pride and blocked my professional progress prompted me to abandon my relatively high profile career although, in the manner of a disgraced government minister, I covered my exit by claiming 'I wanted to spend more time with my family'." Here the author is merely using her own case to satirize certain government ministers; that is, the author is not a government minister, nor does she really want to spend quality time with her family.

Over the past century, all kinds of unfairness and discrimination have been condemned or made illegal. But one insidious form continues to thrive: alphabetism.
This, for those as yet unaware of such a disadvantage, refers to discrimination against those whose surnames begin with a letter in the lower half of the alphabet. It has long been known that a taxi firm called AAAA cars has a big advantage over Zodiac cars when customers thumb through their phone directories. Less well known is the advantage that Adam Abbott has in life over Zoe Zysman.

English names are fairly evenly spread between the halves of the alphabet. Yet a suspiciously large number of top people have surnames beginning with letters between A and K. Thus the American president and vice-president have surnames starting with B and C respectively; and 26 of George Bush's predecessors (including his father) had surnames in the first half of the alphabet against just 16 in the second half. Even more striking, six of the seven heads of government of the G7 rich countries are alphabetically advantaged (Berlusconi, Blair, Bush, Chirac, Chrétien and Koizumi). The world's three top central bankers (Greenspan, Duisenberg and Hayami) are all close to the top of the alphabet, even if one of them really uses Japanese characters. As are the world's five richest men (Gates, Buffett, Allen, Ellison and Albrecht).

Can this merely be coincidence? One theory, dreamt up in all the spare time enjoyed by the alphabetically disadvantaged, is that the rot sets in early. At the start of the first year in infant school, teachers seat pupils alphabetically from the front, to make it easier to remember their names. So short-sighted Zysman junior gets stuck in the back row, and is rarely asked the improving questions posed by those insensitive teachers. At the time the alphabetically disadvantaged may think they have had a lucky escape. Yet the result may be worse qualifications, because they get less individual attention, as well as less confidence in speaking publicly.

The humiliation continues.
At university graduation ceremonies, the ABCs proudly get their awards first; by the time they reach the Zysmans most people are literally having a ZZZ. Shortlists for job interviews, election ballot papers, lists of conference speakers and attendees: all tend to be drawn up alphabetically, and their recipients lose interest as they plough through them.

47. What can we infer from the first three paragraphs?
[A] In both East and West, names are essential to success.
[B] The alphabet is to blame for the failure of Zoe Zysman.
[C] Customers often pay a lot of attention to companies' names.
[D] Some form of discrimination is too subtle to recognize. (Correct – this option restates the theme of the passage.)

48. The 4th paragraph suggests that
[A] questions are often put to the more intelligent students.
[B] alphabetically disadvantaged students often escape from class.
[C] teachers should pay attention to all of their students. (Correct – "should" signals saying the opposite: what this option actually conveys is that teachers do not pay attention to all of their students.)
[D] students should be seated according to their eyesight.

50. Which of the following is true according to the text?
[A] People with surnames beginning with N to Z are often ill-treated.
[B] VIPs in the Western world gain a great deal from alphabetism.
[C] The campaign to eliminate alphabetism still has a long way to go.
[D] Putting things alphabetically may lead to unintentional bias. (Correct – this option restates the theme of the passage.)
{ "dump": "CC-MAIN-2021-17", "language_score": 0.922612726688385, "language": "en", "url": "https://www.anthropocenemagazine.org/2020/05/when-it-comes-to-decarbonization-no-one-can-go-it-alone/", "token_count": 824, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.193359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b297bdbc-2b33-4bdf-b05b-6184e96a9c96>" }
Various countries and regions around the world are making bold plans and announcements about decarbonization. The European Union (EU), for example, wants its economy to be climate-neutral by 2050. But there's a fatal flaw in this piecemeal approach to decarbonization, a new analysis suggests. In a globalized economy, if individual countries or regions decarbonize while others maintain the status quo, a large portion of the carbon saved in one part of the world will wind up being emitted elsewhere.

Effective climate policy will have to find ways of managing this carbon 'leakage,' University of Copenhagen economists Wusheng Yu and Francesco Clora write in a briefing paper produced as part of the EUCalc project, a collaborative effort involving 12 institutions across 9 countries to sketch out how the EU could undertake the transition to a green economy.

The analysis rests on a computer model that simulates various EU economic sectors and trade linkages with the rest of the world, based on data from EU member states, the United Kingdom, and Switzerland. A web-based tool enables policymakers to use the model to explore different ways of reaching net-zero emissions both for the EU as a whole and for individual member states.

In the paper, the researchers detail the economic and other knock-on effects of two different emissions scenarios for the EU: a least-ambitious scenario, in which the region actually dials back its current decarbonization efforts, and a most-ambitious scenario, in which policymakers lean on all possible levers to promote decarbonization.

If the EU undertakes ambitious decarbonization and the rest of the world doesn't, this will result in a big EU trade deficit, the model shows. A big trade deficit isn't necessarily a bad thing, but a sudden increase can cause lots of economic disruption, especially in parts of the economy that have become less competitive on a global scale.
So that's something that policymakers will have to be alert for and manage, the researchers say. Moreover, "changing external trade patterns and trade flows caused by the actions to reduce the EU's internal emissions means that the direct emissions reductions may be partially offset by increased emissions elsewhere," the researchers write.

The most ambitious decarbonization pathway in the EU is likely to yield a carbon leakage rate of 61.5%, the model predicts. That is, for every kilogram of carbon dioxide or equivalent emissions avoided or sequestered within the EU, about six-tenths of a kilogram will be emitted elsewhere in the world. The net decarbonization achieved is therefore less than half of what it first seems.

The analysis shows how this would play out in various parts of the economy. For example, if carbon-intensive industries such as concrete, steel, and chemicals become greener in Europe, this will increase the cost of EU products relative to those produced by China and the United States (again, assuming the climate status-quo prevails in those countries). The result will be fewer such goods produced in Europe, and more imports of less-green products made elsewhere.

Similarly, reduced demand for fossil fuels in Europe means falling prices, facilitating increased fossil fuel consumption elsewhere in the world. Climate-friendly consumer choices by Europeans such as less red meat consumption could also reverberate through the global food system to increase emissions elsewhere in the world.
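For readers who want to check the leakage arithmetic behind that headline number, it works out as follows (a back-of-the-envelope sketch; the variable names are ours, not from the briefing paper):

```python
# Back-of-the-envelope check of the 61.5% leakage rate reported above.
gross_reduction_kg = 1.0            # emissions avoided inside the EU
leakage_rate = 0.615                # per the EUCalc model prediction
leaked_kg = gross_reduction_kg * leakage_rate
net_reduction_kg = gross_reduction_kg - leaked_kg
print(leaked_kg)          # ~0.6 kg re-emitted elsewhere per kg saved
print(net_reduction_kg)   # ~0.385 kg net -- less than half the gross saving
```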
“Implications of decarbonizing the EU economy on trade flows and carbon leakages.” EUCalc Policy Brief #7, 2020.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9117660522460938, "language": "en", "url": "https://www.vocationaltraininghq.com/how-to-become/emergency-management-director/", "token_count": 3521, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.12255859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9b1fcbb4-8ee0-4742-948f-0f3f5dcef8e3>" }
What is an Emergency Management Director?
An Emergency Management Director is responsible for creating plans in case disaster strikes. They are also responsible for contacting the people in charge of events, whether that's political leaders, nonprofit organizations, or government agencies. The most important role of an Emergency Management Director is to ensure that the plan is put in place and executed to perfection. Emergency Management Directors respond efficiently to any issue that may arise, whether it's a natural disaster, terrorist attack, bombing, or other emergency. The Emergency Management Director is in charge of creating the plan and executing it. An eye for detail is important in this role because you will need to evaluate areas and come up with escape plans for people in case of emergencies.
Some of the things you can expect to do as an Emergency Management Director on a daily basis include:
- Develop plans and procedures for emergencies
- Analyze resources
- Provide staff information so they can implement a procedure
- Revise plans and gather resources
- Coordinate with police officers and firefighters
- Maintain a command center in case of emergencies
- Monitor and manage emergency operations
On average in the United States, an Emergency Management Director can make around $142,000 a year. If you are someone who is just starting out in the career, it is more likely that you will make around $124,000 a year to begin. When you have more experience within the field and have worked as an Emergency Management Director for some years, it is possible to make up to $160,000 a year. Some of the things that might factor into the variation in salary are education, certifications, specializations, and even the location where you work. The number of years that an Emergency Management Director has worked can also factor into salary.
National Average Salary: $82,530 per year, $6,833 per month, or $39.68 per hour.
The top-earning state in the field is the District of Columbia, where the average salary is $120,680 per year ($10,000 per month, or $58.02 per hour). Data is not available for some states, such as Arizona.
Conducted by: Bureau of Labor Statistics, Department of Labor. * Employment conditions in your area may vary.
How to Become an Emergency Management Director
Step 1: Have a High School Diploma or GED
No matter what type of field you plan on going into, if you want a college degree you will need to have a High School Diploma or a GED. Some of the subjects that are important when thinking about a career as an Emergency Management Director include:
- Social science
- Life science
Most colleges will not accept applicants with a GPA below 2.0 into their programs. This is something to think about before you start applying for schools. Before entering a college program to become an Emergency Management Director, you'll want to make sure you have skills like:
- Communication – Emergency Management Directors must write, read, and speak well so that others can understand them.
- Critical thinking – Emergency Management Directors need to have multiple plans of action in case their original plan runs into problems.
- Leadership – Organizing and leading groups of people is a large part of this career.
Step 2: Earn a Bachelor's Degree
Now that you have applied to the school that you want, it's time to earn your degree. If you want to become an Emergency Management Director, you'll need a Bachelor's degree in business, public administration, accounting, finance, public health, or emergency management. Sometimes it is even required to have a degree in computer science, information systems administration, or some other tech field.
Some of the things that you can expect to learn in a typical public administration or emergency management program are:
- Organizational governance
- Loss prevention
- International Business
Emergency Management Directors can work practically anywhere, so the subjects you learn may vary based on your location and local standards.
Step 3: Get Work Experience
After you have obtained a Bachelor's degree, it's time to put in the hours at work. It probably won't come easy at first, because these jobs can be hard to find. In most cases, to become an Emergency Management Director you'll need several years of job experience. There are a few different ways to achieve this:
- Law enforcement
- Fire safety
- Other Emergency Management fields
Previous work experience in these areas shows employers that you have the skills and knowledge to keep people safe in case of an emergency. Obtaining employment in this career can be difficult; however, it is a very serious job and should not be taken lightly. This means that most, if not all, Emergency Management Directors should have experience in law enforcement or another emergency response career.
Step 4: Become Certified
After you gain experience, you may need to become licensed. Some states require this, and some do not. Many states and agencies also offer voluntary certifications to demonstrate additional skills.
Some employers may even prefer that their employees become Certified Emergency Managers or Certified Business Continuity Professionals. Those interested in the Certified Emergency Manager certification can find it through the International Association of Emergency Managers. Those interested in the Certified Business Continuity Professional certification can find more information through the Disaster Recovery Institute International. To obtain either of these certifications, you'll need several years of experience.
To become an Emergency Management Director, a Bachelor's degree is required. Some people opt to start their college education at a community college and then go on to a larger university after obtaining an Associate's degree. Others start at a four-year college in hopes of earning their Bachelor's degree in one step. Either way is fine, but if you want a Bachelor's degree to become an Emergency Management Director, you'll want to study:
- Public Administration
- Emergency Management
- Public Health
- Fire Science
- Homeland Security
Emergency Management Directors that work with private companies may need a degree in:
- Information Systems Administration
- Computer Science
Most of the courses that a person who wants to become an Emergency Management Director will take are the same across these majors; however, there may be some differences. The main classes include:
- Business Ethics
- Financial Management
- Critical Thinking
- Management Theory and Practice
- Finance for Business
- Global Business Strategies
A more specific degree in Emergency Management can include courses like:
- Policing in Society
- Civil Rights and Liberties
- Urban Politics
- Crime and Technology
- Criminal Law
Usually, when first starting out in college, students begin with core classes that aren't specific to their major. As they gain more experience in college, the classes become more specialized.
The typical Bachelor's degree takes around four years to complete. There are many important factors when it comes to deciding which major to pursue for a career as an Emergency Management Director. Keep in mind that the basis of the career is to prevent emergencies and keep people safe. The curriculum of the degree you plan to earn will likely cover sensitive topics and will be based on an understanding of current trends and public safety as they relate to management. With a Bachelor's degree in public safety or another similar field, the focus is on leadership, public policy, politics, and administration.
Now that you have your shiny new Bachelor's degree, let's talk about becoming certified. You'll need to have a little bit of experience, typically 3-5 years, before being eligible to become a Certified Emergency Manager. Although some states do require that their Emergency Management Directors become certified, it's not necessary everywhere, so make sure you check the rules where you work. There are a couple of different certifications that an Emergency Management Director can earn.
The first is the Certified Emergency Manager credential. This certification is given through the International Association of Emergency Managers. In order to be eligible for this certification, several things are required:
- Three years of comprehensive emergency management experience
- Bachelor's degree in Emergency Management or equivalent
- List of 6 professional contributions made in the last year
- One signed letter of reference and three other references
- A score of 75% out of 100 on the certification exam
- 100 hours of general management training
The certification fee is $400, and the credential must be renewed every five years.
The other is called the Certified Business Continuity Professional. This certification is given by the Disaster Recovery Institute International.
In order to be eligible for the Certified Business Continuity Professional certification, hopefuls must:
- Demonstrate knowledge and working experience
- Have at least two years of experience as an Emergency Management Director
- Demonstrate specific practices in five different subject matters
You'll also need to pass the exam with a score of 75% out of 100. There are typically several essay questions on this exam, and the subject areas include:
- Business Impact Analysis
- Developing Business Continuity Strategies
- Developing and Implementing Plans
- Maintaining Plans
The exam costs $400 to take, and the certification must be renewed every year. For both exams, the references must be at least two people who can document the applicant's experience in the subject area.
Average Training Program Duration: 4+ Years
The average training program to become an Emergency Management Director is around eight years. This time includes the four years that it takes to earn a Bachelor's degree. It also includes the time that it takes to gain certifications and experience in the field. Many employers will not hire people who don't have the experience, so this is extremely important.
Although the job of an Emergency Management Director is not expected to grow very much in the next ten years (only 5 percent), it is still an important job. The importance of preparing for disasters and having a plan will allow this career to continue to grow. The risk of emergencies in situations like rallies, political conferences, and other types of community gatherings will sustain this career as well. One of the downsides to this job is that competition for positions will be strong. There aren't many job opportunities for this type of career, but the ones that are available are very lucrative.
Employment Growth Projection: 5% (a projected growth of about 500 jobs)
Should You Become an Emergency Management Director?
Overall Satisfaction: High
For people who like to protect others and help out in scary situations, the job of an Emergency Management Director is a perfect fit. This is also a great job for people who enjoy working outdoors and don't mind being around different people all of the time. The level of stress that this job carries can affect overall satisfaction, and Emergency Management Directors typically work more than 40 hours a week. Many Emergency Management Directors say that their job is meaningful, which also makes it satisfying.
Average Salary: High
The average salary for an Emergency Management Director is around $140,000 a year in the United States. Those with more experience as an Emergency Management Director, including education, specializations, and certifications, can expect to make around $160,000 a year. Emergency Management Directors who are just starting out in the career can expect to make a little less, at around $124,000 a year. The location and company that an Emergency Management Director works for can affect salary as well.
Job Growth Outlook: Low
Eliminating risk has always been important, and it will be even more so in the future. That is why the job growth for Emergency Management Directors is expected to be around 5 percent over the next ten years. That is slower than some other occupations, but still rising, which is good. Jobs for Emergency Management Directors may be hard to find, but they provide an incredible career. Retirements over the next decade may create more career opportunities for incoming Emergency Management Directors.
Education Duration: 4+ Years
The average person spends about four years earning a Bachelor's degree. Then, once a degree is obtained, a hopeful Emergency Management Director will need several years of experience in the field.
The amount of time that it takes to begin a career as an Emergency Management Director also depends on the type of experience one has. Some people go into the military; some join law enforcement. It can take a while to become an Emergency Management Director, but it is worth it.
Personal Skills Needed
It is important that an Emergency Management Director is someone who can stay calm in chaos and guide others to safety in emergency situations. Some of the skills that a hopeful Emergency Management Director should possess include:
- Exceptional critical thinking skills
- Amazing attention to detail
- Communication and cooperation skills
- Time management skills
- Organization skills
- Ability to delegate and supervise
If you are the person that people run to when times get scary, then you may want to start a career as an Emergency Management Director.
Frequently Asked Questions
Q. How much does an Emergency Management Director make?
On average in the United States, an Emergency Management Director can make around $140,000 a year. Those just starting out as Emergency Management Directors can expect to make less than that, at around $124,000 a year. When an Emergency Management Director has many years of experience, a wealth of knowledge, and certifications, they can expect to make around $160,000 a year.
Q. What does an Emergency Management Director do?
The Emergency Management Director is the one in charge of keeping everyone safe at events and gatherings. They are also responsible for making plans to keep everyone safe if disaster strikes. Some of the things that an Emergency Management Director might do on a daily basis include:
- Speaking to law enforcement
- Creating escape plans
- Speaking with employees
- Running drills
Q. How long does it take to become an Emergency Management Director?
It can take around four years to earn a degree to become an Emergency Management Director.
However, it also takes a couple of years of experience in a related field to get hired in most cases. The fields in which a person can gain experience include:
- Fire departments
- Homeland Security
- Police offices
The formal education only takes four years, but the knowledge one must obtain in this career will last a lifetime.
Q. Is there a demand for Emergency Management Directors?
The short answer is yes and no. Yes, there will always be a demand for Emergency Management Directors because they are the ones who keep others safe in case a disaster happens. However, the career is difficult to get into, and there aren't many jobs available right now. This means that although the job is in demand, it's hard to find the right employment.
Q. How much does it cost to become an Emergency Management Director?
It can be scary to think about how much it may cost to begin a career that you'll probably love, so looking into the cost to become an Emergency Management Director is a smart idea. Since it requires a Bachelor's degree to work as an Emergency Management Director, it can cost around $35,000-$55,000 to earn a degree. Then you have to think about certification. It can cost $400 to take a certification exam, and if you fail, you must pay to take it again. Plus, every couple of years you will have to renew that certification for $250.
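As a rough illustration, the cost figures quoted in this answer can be totaled in a few lines. The midpoint degree cost and the two-year renewal interval are our assumptions; the other figures come from the text above.

```python
# Rough out-of-pocket cost of entering this career over a given horizon:
# degree + one certification exam + periodic renewals.

DEGREE_COST = 45_000   # assumed midpoint of the $35,000-$55,000 range
EXAM_FEE = 400         # certification exam fee quoted above
RENEWAL_FEE = 250      # renewal fee quoted above

def career_entry_cost(years, renewal_interval=2):
    """Total cost over `years`, renewing every `renewal_interval` years."""
    renewals = years // renewal_interval
    return DEGREE_COST + EXAM_FEE + renewals * RENEWAL_FEE

print(career_entry_cost(10))  # -> 46650 over the first decade
```

Under these assumptions, ongoing renewal fees are a small fraction of the total; the degree dominates the cost of entry.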
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9397780895233154, "language": "en", "url": "http://extension.msstate.edu/news/feature-story/2019/bonnet-carr%C3%A9-spillway-closes-impact-seafood-industry-continues-linger-msu", "token_count": 777, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.00052642822265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0a6507ba-5318-4664-8a72-dda506833d35>" }
As Bonnet Carré Spillway closes, impact to seafood industry continues to linger, MSU economist says Contact: James Carskadon STARKVILLE, Miss.—While the U.S. Army Corps of Engineers is closing the Bonnet Carré Spillway this week, economic impacts of its months-long opening are expected to be felt in the seafood industry for years to come. Mississippi State’s Benedict Posadas, associate extension and research professor at the university’s Coastal Research and Extension Center, has been studying the economic impact of disasters on the Gulf Coast since Hurricane Katrina, including the oil spill in 2010 and the spillway’s opening in 2011. “It will take time to fully assess the economic impact of the spillway opening,” said Posadas, who holds a research appointment with the Mississippi Agricultural and Forestry Experiment Station. “This cuts across a lot of areas in the coastal economy.” The Bonnet Carré Spillway was opened twice this year to control flooding along the Mississippi River. As a result, freshwater has been pouring into the Mississippi Sound, altering marine ecosystems along the Gulf Coast. Gov. Phil Bryant has assigned a task force, led by the University of Southern Mississippi, to monitor environmental conditions and assess the impacts to marine life. MSU personnel are assisting in those efforts by monitoring dolphin and sea turtle deaths to determine potential causes. One way Posadas is evaluating the economic damage is by studying data on this year’s commercial harvests and comparing it to five-year baseline averages. Using preliminary 2019 sampling data from the Mississippi Department of Marine Resources and five-year baseline data from the National Oceanic and Atmospheric Administration, Posadas has found significant impacts on oyster, blue crab and shrimp fisheries. The decreased salinity of the water in the Mississippi Sound has decimated this year’s oyster harvests. 
From 2012-2016, the most recent years for which data is available, approximately $1,376,000 in oysters was harvested every year in Mississippi. In that same time frame, approximately $848,000 in blue crabs landed at Mississippi docks every year. Between March and June of this year, blue crab harvests declined by approximately 25 percent from their previous five-year average. Wild shrimp harvests are down by approximately 40 percent for the months of May and June compared to previous years, with shrimp landings from 2012-2016 having an average annual worth of $17,766,000.
"This data just scratches the surface of the impacts," Posadas said. "Beyond the initial harvests, there are impacts on companies involved in the processing, wholesaling and retailing of these products."
Posadas noted that documenting economic damage is important as legislative leaders seek emergency assistance funds from the federal government. Analyses compiled by Posadas and other MSU personnel were used to help secure funding after the oil spill and the 2011 spillway opening.
At the other end of the state in the Mississippi Delta, Extension agents are conducting agricultural damage assessments in counties affected by the ongoing flooding in the region, providing documentation that will aid producers as they seek financial reimbursements.
The Coastal Research and Extension Center is part of MSU's Division of Agriculture, Forestry and Veterinary Medicine. It is structured to provide education and outreach for Mississippi coastal residents regarding almost every aspect of the coastal environment: fisheries, seafood processing, aquaculture, wetland management, marine industry, recreation, economics and law. For more, visit www.coastal.msstate.edu.
MSU is Mississippi's leading university, available online at www.msstate.edu. Read this story on the Mississippi State University site.
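The baseline comparison Posadas describes, a current-year figure measured against a five-year average, can be sketched in a few lines. The landings numbers below are hypothetical, not actual Department of Marine Resources data.

```python
# Percent change of a current-year harvest against its five-year
# baseline average. Sample figures are illustrative only.

def pct_change_vs_baseline(current, baseline_years):
    """Negative result means a decline from the baseline average."""
    baseline = sum(baseline_years) / len(baseline_years)
    return 100.0 * (current - baseline) / baseline

# Hypothetical May-June landings (pounds) for five baseline years,
# then the current year:
baseline = [100_000, 110_000, 95_000, 105_000, 90_000]  # avg = 100,000
print(pct_change_vs_baseline(60_000, baseline))  # -> -40.0 (a 40% decline)
```

The same calculation applies to dockside revenue, which is how the oyster, blue crab, and shrimp impacts above are expressed.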
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8900070190429688, "language": "en", "url": "https://an-essay.com/exam-spring-2016", "token_count": 1961, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0245361328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7523db43-8bd6-4041-99cb-02b24a2d37ca>" }
EXAM SPRING 2016
1.) Please consider "Sustainable Economic Development" by the Institute for Sustainable Communities (see Week One). On page 6, five "Key Challenges and Needs" are identified. Please select one of these (for example, Collaboration) and discuss the economics of the challenge or need. In other words, what is the allocation problem, and why can't the private participants achieve an optimal outcome? Be sure to define any related terms.
Financing presents a formidable challenge to ensuring sustainable economic development. Both private and public entities are required to make significant investments so as to acquire renewable technologies. However, financial institutions have limited resources to spare for renewable technologies. In this regard, new financing models are needed to provide capital for the creation of a sustainable economy (ISC, 2011).
The allocation problem concerns the fact that available financial resources do not suffice to cater for all uses. In many instances, activities that are urgent and productive are usually prioritized during the allocation of resources (ISC, 2011). Therefore, setting aside resources for sustainable economic development is often overlooked. This is because sustainable economic development does not seem urgent or even necessary. Furthermore, the rewards from sustainability do not occur immediately. It usually takes considerable time for the benefits of sustainability to become visible (ISC, 2011).
Private participants fail to achieve an optimal outcome because it is unclear which activities would lead to better sustainability. It is also difficult to highlight the role of local governments in helping institutions to acquire finances for sustainable development (ISC, 2011). Private participants also fail to attain a favorable outcome since there is no framework for forming public-private partnerships. Therefore, private partners lack enough motivation to pursue renewable technologies.
2.)
Please consider Pezzey and Toman 2002, "The Economics of Sustainability: A Review of Journal Articles." On page 23, in their concluding remarks, they assert that "a sustainability objective or standard…is more than a simple PV criterion." Please discuss.
Sustainability is an enduring quality that companies and industries strive to attain. The levels of utility and consumption are assumed to approach zero in the long term. Such an eventuality primarily occurs after an initial threshold has been reached. Nevertheless, a sustained level of utility and consumption is impossible to attain. Therefore, it is expected that nonrenewable resources will continue to be scarce. Also, utility will be governed by positive discount rates (Pezzey & Toman, 2002). Consumption of a resource is usually concentrated during the period in which it exists in large quantities. In later years, consumption plummets along with resource availability.
The gradual depletion of resources limits the level of output. In this respect, capital investment does not suffice to cover the negative effects of depletion on output. Also, the constant depletion of resources undermines the effectiveness of sustainable development. Therefore, ongoing technical input is required to counter the debilitating effects of resource depletion (Pezzey & Toman, 2002). Examining the shifts in consumption levels can help to show that having a societal objective of sustainability differs from present value optimality.
Economic growth theory stipulates the maximization of consumption levels. On the other hand, holding consumption constant reduces the maximized present value of consumption. Sustainability can be achieved if environmental externalities and market inefficiencies are internalized (Pezzey & Toman, 2002). Therefore, a standard for sustainability is more than an instantaneous, present value criterion.
3.) Please define and discuss the Environmental Kuznets Curve.
Siebert 2005 (page 279) and Harris and Roach 2013 (page 410) provide insight.
Environmental Kuznets Curves can be understood by studying the impact of economic growth on environmental quality. A wealthier nation could consume more energy and resources while producing more waste. On the other hand, it is possible that a richer nation would invest more resources in renewable energy, formulate effective policies, and install sophisticated equipment. Environmental quality is adjudged to be both a normal and a luxury good (Harris & Roach, 2013). As a normal good, people would ordinarily spend more on it as their income level rises. As a luxury good, people would spend disproportionately higher portions of their income on it as their income level rises.
Initially, economic growth would lead to environmental degradation. However, in the long run, a wealthy nation would have sufficient resources to enhance environmental quality. The Environmental Kuznets Curve (EKC) arises from the supposition that environmental impact rises with economic growth up to a certain level. Beyond a particular income level, environmental impacts begin to decrease. This is because abatement activities and innovative technologies are applied to reduce harmful emissions (Siebert, 2005). The EKC hypothesis depicts the relationship between environmental impacts and income as an inverted U-shaped curve. This relationship is well established for pollutants such as nitrogen oxides, particulate matter, and sulfur dioxide. Nevertheless, the EKC hypothesis does not hold for carbon dioxide emissions (Harris & Roach, 2013). Therefore, the EKC hypothesis has been used to discredit the promotion of economic growth as a means of reducing greenhouse gas emissions.
4.) Please research the ultimatum game and its game theory application to decision-making and market coordination. How might the ultimatum game help solve the assurance game problem introduced by Hanley et al. 1997 (see pages 14-17)?
The ultimatum game is one in which a player makes a proposal, thereby giving the other player the chance to either accept or reject it. The first player acts as the proposer: he acquires a certain sum of money and proposes a method of sharing the sum with the other player. The second player has the discretion to choose whether to accept the proposal. If the proposal is accepted, then the money is shared accordingly. If the proposal is rejected, neither player is entitled to the money.
The ultimatum game can be used to solve the assurance game problem. In reducing carbon emissions, two countries can simultaneously decide not to pursue environmental quality because such an action favors their individual interests. A simultaneous decision aimed at maximizing utility would yield an unfavorable outcome in comparison to the co-operative solution (Hanley, Shogren, & White, 1997). Strictly dominated strategies could lead countries to miss out on the benefits of reducing greenhouse emissions. Therefore, the ultimatum game can be used to acquire binding commitments that can override the prisoner's dilemma. The first country can propose a method of sharing the profits provided the second country guarantees it will cut its emissions. Having such a mutual interest will ensure that both countries reduce greenhouse emissions.
5.) Please answer Harris and Roach 2013, Discussion Question #2 (page 429): "What steps, if any, do you think should be taken to promote a green economy in your country or region? What steps would be most effective? Can you propose policies that businesses might support?"
Several steps can be taken to promote a green economy. First, the government needs to increase its commitment to creating a green economy. Monetary and fiscal policies can be formed so as to ensure extensive research and development into renewable energy technologies.
The government could subsidize the cost of renewable technologies so as to enable companies to make a smooth transition into a green economy without incurring heavy costs. Public expenditure on unproductive ventures can be reduced so as to create funds for renewables (Harris & Roach, 2013). Fuel subsidies should also be gradually reduced so as to discourage the use of fossil fuels.
Legal and regulatory measures can also be formulated so as to promote a green economy. A gap analysis can be used to address the seeming differences between national environmental laws and global best practices. Such an analysis can be utilized as a guide for formulating legal reforms. For example, the use of fossil fuels can be outlawed beyond prescribed limits. The wastes produced by a particular company could also be used as raw materials in a different industry. Financial instruments can also be used to incentivize the adoption of green technologies. Nevertheless, the most practical step concerns governmental intervention and support. Businesses might support policies that reward the adoption of renewable technologies. They may also support the creation of working groups set up to evaluate the effectiveness of proposed renewables for different firms (Harris & Roach, 2013). Also, businesses may support regular sensitivity analysis aimed at assessing the merits of renewable technologies.
References
Hanley, N., Shogren, J. F., & White, B. (1997). Environmental economics: In theory and practice. New York, NY: Oxford University Press.
Harris, J. M., & Roach, B. (2013). Environmental and natural resource economics: A contemporary approach. Armonk, NY: Sharpe.
Institute for Sustainable Communities (ISC). (2011). Sustainable economic development. New York, NY: The Rockefeller Foundation.
Pezzey, J., & Toman, M. A. (2002). The economics of sustainability. Burlington, VT: Ashgate/Dartmouth.
Siebert, H. (2005). Economics of the environment: Theory and policy. New York, NY: Springer.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9405053853988647, "language": "en", "url": "https://ecowatch.noaa.gov/regions/northeast", "token_count": 421, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.018310546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e1c414e6-a948-4cf0-9083-f431ed390437>" }
Description of time series: Between 2013 and 2017, average annual commercial revenue from the Northeast was substantially higher than historical patterns, although there is no trend in values. Given the historically low level of landings over that same period, the difference is derived from a substantially higher price per pound of fish received by fishermen.
Description of gauge: Between 2013 and 2017, average annual commercial revenue from the Northeast was greater than 90% of all annual revenue from 1950 to 2017.
Description of Commercial Fishing (Landings and Revenue): Commercial landings are the weight of, or revenue from, fish that are caught, brought to shore, processed, and sold for profit. They do not include sport fishermen, subsistence fishermen (who fish to feed themselves), or the for-hire sector, which earns its revenue from selling recreational fishing trips to saltwater anglers. Commercial landings make up a major part of coastal economies. U.S. commercial fisheries are among the world's largest and most sustainable, producing seafood, fish meal, vitamin supplements, and a host of other products for both domestic and international consumers. The weight (tonnage) of, and revenue from, commercial landings provide data on the ability of marine ecosystems to continue to supply these important products.
Extreme Gauge values: A value of zero on the gauge means that the average revenue or landings over the last 5 years of data was below any annual value up until that point, while a value of 100 would indicate the average value over that same period was above any annual value up until that point.
Commercial landings and gross revenue were downloaded from the National Marine Fisheries Service's annual commercial fisheries landings query tool, which can be found at https://foss.nmfs.noaa.gov/apexfoss/f?p=215:200::::::.
State pounds landed and revenue generated were aggregated to the appropriate region, and all revenue data was deflated to 2017 constant dollars using the Bureau of Labor Statistics' Consumer Price Index (series CUUR0000SA0).
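The gauge and deflation steps described above can be sketched in a few lines. This is a rough illustration, not NOAA's actual code: the function names are invented, and the CPI figures below are placeholders rather than real values from BLS series CUUR0000SA0.

```python
def gauge_value(annual_values):
    """Percent of all annual values falling below the mean of the last five years,
    matching the gauge definition above: 0 means the recent average is below every
    year on record, 100 means it is above every year on record."""
    recent_mean = sum(annual_values[-5:]) / 5
    below = sum(1 for v in annual_values if v < recent_mean)
    return 100 * below / len(annual_values)

def to_constant_dollars(nominal, cpi_that_year, cpi_2017):
    """Deflate a nominal dollar amount to 2017 constant dollars using annual CPI."""
    return nominal * (cpi_2017 / cpi_that_year)

# Hypothetical CPI annual averages, for illustration only.
cpi = {1990: 130.7, 2017: 245.1}
real_1990_revenue = to_constant_dollars(1_000_000, cpi[1990], cpi[2017])
```

A gauge reading above 90, as reported for the Northeast, simply means the 2013–2017 average exceeded more than 90% of the annual values in the 1950–2017 record.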
{ "dump": "CC-MAIN-2021-17", "language_score": 0.962700605392456, "language": "en", "url": "https://essaysprofessor.com/samples/business/international-business.html", "token_count": 1400, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.42578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:99c5d546-baf6-4d85-805b-e319311a5841>" }
Friedman believes that America has to open up to free trade because it can derive many benefits from trading freely with other countries. He believes that opening up to free trade will help the country merge with the new concept of the flat world, or globalization. The advantages for America of operating in free trade are numerous. One of the aspects the author focuses on is the decrease in unemployment. This can be achieved if the United States opens up global trade in the agricultural sector, giving many unemployed people a better chance of finding jobs in this closed sector. The demand for goods will rise, further stimulated by the demand for innovation to increase the production level. The other advantage of opening up to free trade is that it will help reduce rates of migration from one country to another, because an increase in employment opportunities will keep people, especially the youth, from going abroad to look for work. The Americans can benefit further from free trade if they use Ricardo's theory of comparative advantage. This theory states that specializing in the production of goods with a comparative advantage in costs, and trading them with other states for other goods, will increase the income of the country. This is true according to Friedman, who believes that free trade will open up the market and allow the Americans to practice outsourcing and offshoring. The concept of outsourcing is important for any business that needs services done but lacks the skills to perform them internally (Friedman 45). Companies in international business use this practice because it is cost effective and the people doing the work are experts. Technology makes outsourcing possible, through fiber optics and the World Wide Web.
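Ricardo's point can be made concrete with a toy calculation. The countries and labor costs below are invented for illustration; they are not figures from Friedman or Ricardo.

```python
# Hours of labor needed to produce one unit of each good (invented numbers).
labor_hours = {
    "Country A": {"wheat": 2, "cloth": 4},
    "Country B": {"wheat": 6, "cloth": 5},
}

def opportunity_cost(country, good, other_good):
    """Units of other_good forgone to produce one unit of good."""
    return labor_hours[country][good] / labor_hours[country][other_good]

# Country A forgoes 0.5 cloth per unit of wheat; Country B forgoes 1.2.
# So A has the comparative advantage in wheat and B in cloth, and both gain
# by specializing and trading, even though A is absolutely cheaper at both goods.
a_cost = opportunity_cost("Country A", "wheat", "cloth")
b_cost = opportunity_cost("Country B", "wheat", "cloth")
```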
Offshoring is another aspect of international business that makes it easier for businesses to deal with production and operational costs in their host countries. This is the practice of moving the production processes of the company to a different country, in order to minimize the costs of production. Friedman makes it clear that the arguments some people have against free trade do not hold. The argument that free trade will lead to decline because it will lower wage levels is not true, because as the demand for goods and services increases, so will their prices. This way, people will still earn substantial wages. The increased demand for goods and services will require more employees to produce them, raising both wages and the demand for labor. He uses China and India as examples of countries that opened up to free trade and are now among the top countries economically. The idea of taking the right classes in order to learn the right things is one of the themes the author focuses on in chapter seven. College classes help one learn the right things at the right time. One thing students do in college to help them learn the right things is practice forging good relationships with others instead of constantly doing transactions. This way one learns to understand different concepts and apply them while working. College also teaches students the importance of engaging in solving novel problems, a shift from the routine of solving common problems. Another important lesson is learning how to face problems and issues as whole subjects instead of focusing on individual aspects. It is important to adapt to changes in the setting to help one fit into the dynamic business environment (Friedman 67). The formula "CQ + PQ > IQ" is used by Friedman to explain how students should use curiosity and passion to learn about new concepts.
The ability of a student to think beyond the obvious helps him or her understand concepts and put them into practice. Passion, on the other hand, helps build skills that will enable the person to tackle different problems. Critical thinking is one of the lessons that helps students develop a deeper sense of inquiry, where they are able to question things and learn to appreciate different aspects of content. Liking other people and learning to get along with them helps one navigate social networks and careers. Communication skills help students learn how to relate to other people and get along with them. This is an important characteristic for students to develop, because the skill cannot be outsourced from one company to another. Human relations courses are particularly important in helping students learn how to do their jobs well. There is an increasing demand for workers who can perform different tasks. This is the concept Friedman discusses in "Tubas and Test Tubes." This idea requires integrating the different talents and abilities that a student has into decision making. This is something anyone can do, because people have multiple talents. The College of Business can fulfill this goal by nurturing the different talents that students have. This can be included in the curriculum by setting aside times to test the different skills students have, for example by scheduling talent showcases where students with different talents can come up and compete. This should not be part of the course content, because students may feel they have to do it instead of it being voluntary. The students can do what they feel they are best at, and at the end the COB can identify those with outstanding skills in different areas. Moreover, it can help them build on those skills by providing resources to nurture their talents further.
The other way this can be applied in the curriculum is by holding competitions once every month to identify who has skills like persuasion or the ability to convince buyers. This can be done by creating an interclass competition where class members have to sell goods and achieve a certain goal by the end of the day. This way the school will identify students with marketing skills and other skills needed in the business world. The introduction of iPads and iPods has made the world a really small place, and it has made connectivity between individuals easy and convenient. Every day I spend about three or four hours browsing and learning new things. This has made learning interesting, as I get to learn new things, interact with students from other countries, and share ideas. It has also increased my attention span, as I concentrate on class work more now compared to when I had to go to the library and manually search for content. I try as much as possible to interact with people from different parts of the world, and in the process I get to know about the things they go through, the variations in culture, the challenges they face, and even their daily routines. This way I get to learn about different cultures and new things that I integrate into my daily life to make it worthwhile. Connectedness via social media provides a platform where people interact and get to know more about each other. I try to balance the time I spend on social networks, and most times I do not believe everything I see there. The information I get from articles and postings on social networks helps me keep up with current issues and makes it easy to understand what is happening outside this city.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.963570237159729, "language": "en", "url": "https://islamicmarkets.com/education/decline-of-the-ottoman-cash-waqfs", "token_count": 1998, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07666015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:37de8458-6c57-4e62-99c9-c5b33fa9f518>" }
Decline of the Ottoman Cash Waqfs

At this point the reader may wonder about the relative staying power of the cash waqfs vis-a-vis the real estate waqfs. Bearing in mind that some major sultanic real estate waqfs could be maintained for centuries and many are still in service, it may be thought that the real estate waqfs should have much greater possibilities for survival. But the relevant question here is not the survival rate of some major sultanic waqfs, which could be extended even over a millennium, but the average rate of survival of all. In any case, unfortunately, we are not yet in a position to conduct comparative research because the survival rate of the real estate waqfs in any particular locality has not yet been studied. But the survival rate of the cash waqfs of Bursa has been calculated. Our research has revealed that slightly more than 20% of the Bursa cash waqfs survived for more than a century (Çizakça, 1995: 317-320). Thus, although probably less impressive than the real estate waqfs, the survival rate of the cash waqfs should not be underestimated. If we look at the problem of survival not from the perspective of cash versus real estate, but cash waqfs as a whole, we encounter a totally different picture. Thanks to recent research, we have been informed about the substantial decline in the relative importance of cash waqfs as a source of credit (Öztürk, 1995: 26; Şeyhun, 1992). Öztürk has shown that in the year 1908 the total capital of the cash waqfs was equal to 90,750,000 gruş and rose to 321,989,000 gruş in 1923 and to 11,111,423,000 gruş in 1943. He thus gives us the impression that the system was doing perfectly well, but Eldem informs us that in the same year the Ziraat Bankası, an agricultural bank, alone advanced 563,000,000 gruş as credit. The credit advanced by the Ottoman Bank, on the other hand, had reached a staggering 1,102 million gruş (Eldem, 1970: 234). In short, modern banks as suppliers of credit superseded the cash waqf system.
Apparently there were two distinct reasons behind this decline: economic and administrative. Let us first concentrate on the former. It has already been mentioned that the cash waqfs charged a fixed rate of “economic interest” which did not change over the long run. The rigidity of this rate was caused by conditions stipulated by the founders at the time of the establishment of these endowments. Once determined by their founders, these rates could not be changed in response to the changing economic conditions and any attempt to do so was considered to be against the law. While the rates charged by cash endowments thus remained fixed, there developed other sources of finance, which were not hampered by such limitations. The sarrafs, money changers, charged rates determined by the supply and demand for money. Consequently, there developed a capital market in which two different rates of interest prevailed. It was argued above that under these conditions, it would make sense to borrow money from cash waqfs, which supplied the relatively cheaper capital and then sell this to the sarrafs who would re-sell it with a mark-up to the public. It was further argued that the trustees of the cash waqfs were in an ideal position to perform such transactions and indeed, it was shown as evidence for the above argument that they were emerging as major borrowers of capital from the very endowments that they controlled. Even more definitive evidence supporting this idea has been found in the archives of the Chamber of Commerce of Marseille. The correspondence of French merchants residing in Istanbul inform us that, indeed, the market rate of interest prevailing in that city was substantially higher than the “economic interest” charged by the cash waqfs of Bursa. In one of the French documents, it is clearly stated by the two “deputés” Conston and Reimond that the situation in Istanbul differs substantially from that of Europe. 
They report that the sarrafs obtain capital at 12% to 13% interest, which they then lend to the members of “our” nation with at least a 20% interest without any regard to usury prohibitions. This approximate rate of 12-13% is roughly 2% above the rate at which the cash waqfs provided capital. The 2% difference therefore may represent a mark up charged by the trustees when they re-sold the capital to the sarrafs. That the sarrafs, indeed, borrowed capital from the cash waqfs has been proven also by an original Ottoman document. Moreover, the trustees themselves could also become sarrafs. In this case, the profit of the trustee/sarraf would increase up to 8% or more. In short, the trustee/sarraf would borrow capital cheaply from the cash waqf managed by himself and lend it at a higher rate to a third party. This process naturally closely resembles the essential character of conventional deposit banking and the sarrafs may be considered the original deposit bankers in the Ottoman Empire. Evolution from charitable foundations to banks has also been observed in Europe. All the seven banks of the 17th century Naples were engaged in charity and functioned much like the Ottoman cash waqfs granting loans upon pledge. The details of how these Italian charitable foundations evolved into the powerful public banks and the comparison of this process with the emergence of powerful Ottoman sarrafs need to be searched separately (Avallone, 1999: 111-115 and Kazgan, 1991). But the primary reason for the disproportionate financial powers of the two institutions must be sought in their organisational structure: whereas the capital of cash waqfs is constituted by the savings of a single person, that of the deposit banks is constituted by the savings of the masses. 
It is true, some cash waqfs did apply what we have called above “supply side capital accumulation” with one cash waqf donating part of its profits to another, but this was basically of a voluntary nature and quite unsystematic. Consequently, the huge discrepancy presented above concerning the relative financial powers of the two institutions should not surprise us. This discrepancy would become even more striking if we take into consideration the fact that the Ziraat Bank was only 20 years old when it had so obviously superseded the cash waqfs, an institution that has been in existence since at least the fifteenth century, as a source of credit. Turning our attention to administrative reasons for the decline of the cash waqfs, we must note a major development that affected the entire waqf system, not only cash endowments but also real estate waqfs. This was the centralization drive initiated by Abdulhamid I and continued rigorously by the following sultans, particularly Mahmud II. Although this process will be analysed in detail later, it should suffice here to note that cash waqfs also could not escape Mahmud’s iron grip. A directive promulgated on the nineteenth Cemaziyellevvel 1280/1863 made it clear that cash waqfs fell within the jurisdiction of the Evkaf-ı Humayun Nezareti, Ministry of the Imperial Endowments. Article 14 of the directive instructed the trustees that the annual return of endowments not assigned for a specific social service must be sent directly to the treasury and recorded in the registers rather than kept by the trustees. This Article is of interest not only because it indicates clearly that the cash endowments did not escape the centralization drive of Mahmud II, but also because it confirms the arguments made above pertaining to the tendency of the trustees to exploit the resources of the cash waqfs to their own advantage. 
It is self evident that the trustees did not just keep the money in their possession but lent it at a higher rate to the sarrafs or to the public. The demise of the cash waqfs under the Republic can be summarised as follows. As the Ottoman Empire was dying in Istanbul, cash waqfs contributed substantially to the newly established nationalist government in Ankara. The Law of Endowments dated 1935 had articles pertaining to the profitable administration of the cash waqfs. The death warrant was issued in 1954 when all the capital of these endowments was transferred to Vakıflar Bankası, the Turkish Bank of Endowments. The group A shares issued by the bank were purchased with endowed cash. These shares constituted 55% of the bank's capital and remained the property of the General Directorate of Endowment. Consequently, they could not be sold to third persons (Hatemi, 1979: 635). The shares in group B constituted 20% of the capital of the bank and were owned by the endowments managed by their own trustees. In 1967 another law introduced a rule of conversion, istibdal, and made it obligatory to convert all endowed cash into bank shares, thus destroying whatever was left of the judicial personality of the cash waqfs. Ironically, however, 1967 can also be considered the year of the re-birth of modern Turkish cash waqfs. Put differently, while the judicial personality of the Ottoman cash waqfs was being destroyed, new and exciting possibilities were being opened up for the Turkish cash waqfs. These developments will be presented below. Source: Murat Cizakca, A History of Philanthropic Foundations: The Islamic World From the Seventh Century to the Present. Republished with permission.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9619390368461609, "language": "en", "url": "https://learn.age-up.com/blog/the-looming-longevity-crisis/", "token_count": 1322, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.33984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:15c1cfc1-e940-460d-ba1d-da39f000b3aa>" }
Back when Social Security was created in 1935, the average life expectancy in the U.S. was only 61 years, and just 6% of Americans were age 65 or older. The good news? Today, 15% of us are 65+, and that number is expected to pass 22% by 2050. Not only are more of us living to see retirement, but retirements are getting much longer. In 2000, there were 50,281 centenarians—people who live to 100—in the United States. In 2014, that number was 72,197, and by 2050 there will be 378,000 Americans age 100 or over, according to the Pew Research Center. Americans are living much longer than we used to, but that’s only half of the story. We’re also having fewer children: women born in 1935 had an average of three children, and today that number is 1.9. The Census Bureau predicts that just 11 years from now, there will be more senior citizens in America than people under 18. So what’s the problem? It really is great news that people are living longer, but an aging population has some pretty severe side effects when it comes to retirement. In 1960, there were over five workers for every retiree receiving Social Security, but by 2040, the ratio will be close to two workers per retiree. The vast majority of Social Security’s funding comes from payroll taxes (88% in 2017), so the fewer workers there are paying in, the harder Social Security is to fund. The Social Security and Medicare Board of Trustees reports that the Social Security trust will be exhausted by 2034. After that, payroll taxes will only support 79% of promised benefits for 2034, and the problem grows worse from there. This doesn’t mean Social Security is doomed, but it does mean retirement planning can’t be put off indefinitely. The other problem: disappearing pensions Company pensions used to be a standard benefit in America, but since the advent of the 401(k), they’ve steadily declined in the private sector. 
That’s unfortunate, since 401(k)s shift two distinct risks from the employer to the individual: investment risk and longevity risk. As anyone who tried to retire in the mid-to-late 2000s knows, the stock market can be volatile. That’s fine if you have 20 or 30 years to wait for a rebound, but not so helpful if you need the money today. Pensions also provide a perfect hedge against longevity risk. In a given group, some people will die young, some will live an average lifespan, and some will live much longer. Pooled pension funds are like a form of insurance, where the risk of outliving one’s resources is distributed among the wider group, ensuring the money goes where it’s needed most. The rise of the 401(k) means individuals are responsible for their own retirement, and Americans aren’t saving nearly enough to account for decades of life after their incomes stop. Among families aged 56-61, the average savings is just $167,577, according to The Economic Policy Institute. And $167,577 is the average, which isn’t necessarily representative of the average family, since a small number of high net worth individuals skew the mean upwards. Many, many families approaching retirement age have little to no savings whatsoever, which is clear when you look at families in the middle, rather than the average. Among 56- to 61-year-olds, the median retirement savings is just $17,000, according to the same Economic Policy Institute report. TL;DR: We’re living longer, having fewer children, saving little, and don’t have pensions. That’s a dangerous combination. There is a solution… sort of Even though pensions are becoming an endangered species, there are options on the market that more or less replicate their benefits. The simplest solution would be to use a chunk of your retirement savings to buy a lifetime income annuity. These annuities guarantee lasting and dependable retirement income, and solve both the longevity risk and investment risk issues for individuals. 
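The gap between the $167,577 average and the $17,000 median is the classic mean-versus-median effect. A toy example with invented balances shows how a single large account drags the mean far above the typical household:

```python
# Invented retirement balances for seven hypothetical families.
savings = [0, 5_000, 10_000, 17_000, 25_000, 40_000, 1_200_000]

mean = sum(savings) / len(savings)           # pulled far upward by the one outlier
median = sorted(savings)[len(savings) // 2]  # 17,000: the "typical" family
```

Here the mean is over $185,000 while the median family has $17,000, which is why the median is the more honest summary of how prepared most households actually are.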
So if pensions are great and income annuities simulate most or all of their benefits, what’s the problem? Why don’t people just create personal pensions as a standard part of retirement? First, income annuities aren’t for everyone. If a 65-year-old woman has $50,000 saved for retirement, using it to buy a single premium income annuity (SPIA) would only net around $250 a month at current rates, which isn’t enough for even basic necessities. She’d also be left with nothing in case of a medical or other financial emergency. Clearly that’s not an ideal solution. But there are millions of people at or near retirement age for whom income annuities do make good financial sense. Still, few of them are buying, with annuities accounting for less than 10% of America’s retirement assets. That may be partially due to the desire to leave an inheritance to loved ones, but annuities are underused even among retirees with no heirs. A study by researchers at UCLA and Duke University investigates why annuities are less popular than many economists think they should be. In the study, survey subjects’ income, marital status, sensitivity to risk, and financial know-how were measured. Surprisingly, none were predictive of how willing someone was to purchase an annuity. The variable that was predictive? Fairness, or rather one’s sensitivity to it. With a standard income annuity, if a person dies immediately, the insurance company keeps the money. (In reality, the money from people who die early is used to pay for those who live a long time, but it’s the perception that matters.) It’s unfortunate that so many have an inherent aversion to sharing longevity risk, since it would be in most people’s best interests. That becomes apparent when you think of it in terms of insurance, rather than an investment. Few people will suffer a catastrophic house fire, but homeowners insurance is still a good idea. 
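The $250-a-month figure above can be sanity-checked with simple arithmetic. The 6% annual payout rate below is an assumption chosen to match the article's example, not a quoted market rate:

```python
def spia_monthly_income(premium, annual_payout_rate):
    """Monthly income from a single premium immediate annuity (SPIA),
    given an assumed annual payout rate on the premium."""
    return premium * annual_payout_rate / 12

income = spia_monthly_income(50_000, 0.06)  # $250 per month
```

At that assumed rate, $50,000 of premium yields $3,000 a year, or $250 a month, which is why a small nest egg annuitized on its own cannot cover basic living expenses.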
After spending time getting deep background on the issues, we started thinking about new solutions to the problem, and created AgeUp as part of the answer.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9447757005691528, "language": "en", "url": "https://pocketsense.com/cash-flow-statement-formula-dividends-paid-5066.html", "token_count": 1045, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.1796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e5eedd22-b8af-499a-ac7c-ae9d0a1bb314>" }
If you own shares in a publicly traded company, the chances are good that you are familiar with dividend payments. When a public company generates cash through its business operations, it typically allocates a portion of this revenue to its shareholders via dividend payments. When dividends are paid to shareholders, this cash transfer is often reported as an outflow on the company's cash flow statement. Using a simple formula, you can determine what proportion of outward cash flow is devoted to dividend payments. A cash flow statement allows individuals to better assess the extent to which dividends are being paid to shareholders and what percentage of outward cash flow is represented by these dividends.

Understanding The Fundamentals

It is important to remember that not all outbound cash flow is devoted to dividend payments. In fact, the dividends appearing as part of the outward cash flow typically represent payments made to holders of common stock, or stock that offers dividends on a discretionary basis to shareholders. Unlike preferred stocks, which offer consistent dividend payments, it is quite possible that dividends may not be paid at all to holders of common stock. However, given that common stock shareholders have ownership of the company proportional to the number of shares they hold, these individuals are also able to influence the direction and decisions of the company's board of directors to a certain extent. Likewise, interest paid on loans and bonds will also appear as cash outflow, making it that much more important for investors to understand how to properly read and assess the information available on cash flow statements.

Calculating Cash Flow and Dividends

In order to determine the proportion of outflow devoted to common stock dividend payments, you will first need to know the current dividend payments and the number of shares to which dividends are being paid.
So, if there is a $2 quarterly dividend on 2,000,000 outstanding shares, we would know that there is $4,000,000 of outflow in dividend payments per quarter. This information can be particularly helpful when you are weighing the risks and benefits of purchasing shares in a company. If, for example, a company is reaping significant stock market gains but devotes little outflow to common stockholders, it may not be worth your time to invest. However, if you notice significant cash outflow to stockholders in lean economic times, this may seem an unstable decision that would also cause you to think twice before investing. With this information in mind, it is easy to understand why a cash flow statement can act as an excellent prognosticator of a company's current values and future success. With that in mind, we strongly recommend that you take the time to review cash flow statements regularly as part of your evaluative research into companies that you may be interested in investing in.
Ryan Cockerham is a nationally recognized author specializing in all things innovation, business and creativity. His work has served the business, nonprofit and political community. Ryan's work has been featured at Zacks Investment Research, SFGate Home Guides, Bloomberg, HuffPost and more.
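The quarterly outflow in the $2-dividend example above reduces to a single multiplication. A minimal helper, using the article's own figures:

```python
def quarterly_dividend_outflow(dividend_per_share, shares_outstanding):
    """Cash paid out per quarter to common stockholders."""
    return dividend_per_share * shares_outstanding

outflow = quarterly_dividend_outflow(2.00, 2_000_000)  # $4,000,000 per quarter
```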
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9563266038894653, "language": "en", "url": "https://www.deputy.com/blog/evolution-of-payroll", "token_count": 811, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.043701171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ac30c539-9460-4e9f-bf64-1ef8414939a6>" }
We've come a long way in the race to digitize business operations across industries. There's no greater evidence of this than with payroll processing. Payroll has evolved from manual bookkeeping in the 1940s, through the introduction of computers for payroll processing in the 1960s and the founding of the American Payroll Association in the 1980s, to today, when payroll processing is operated in the cloud from smartphones and integrated with other business solutions. It's amazing to see how far technology has taken payroll processing. Here's a look back at how payroll moved from the dark ages to the 21st century. The 1940s and '50s were truly the dark ages for payroll. Tallying employee hours was done manually in ledgers and checks were handwritten with numbers and bank codes, hardly automated! The term "outsourcing payroll" actually meant that you were paying someone else to manually calculate and tally up payroll, instead of doing it yourself. It wasn't until General Motors established an "automation department" that automated payroll, or any business workflow for that matter, became a topic of interest. The 1960s were the decade where payroll processing got a major facelift. Even though IBM's first computer came out in 1953, it wasn't until 1962 that computer science became an actual area of study in the United States, and soon thereafter, an official requirement for businesses looking to stay competitive and automate manual processes. By the 1980s, dozens of payroll companies began to emerge. Payroll innovators were hungry to find the latest computer tech to make payroll processing simpler and more streamlined. At this time, payroll management was finally becoming easier to do, and so affordable that companies of all sizes could end their payroll headaches by outsourcing this complicated, yet critical, task.
Today, there are thousands of enterprise-level payroll processing solutions, like ADP, Xero, Intuit and PayChex, as well as solutions optimized for smaller businesses such as Gusto and Square – all of which integrate with Deputy. In fact, Deputy integrates with more than 15 payroll software providers and counting. By integrating Deputy with an existing payroll provider, business owners can:

- Integrate sales and labor costs into Deputy for a realistic view of business performance
- Optimize employee scheduling based on the highs and lows of projected business sales or traffic patterns
- Seamlessly track and monitor employee overtime, late clock-ins and tips
- Organize scheduling in real time based on employee availability and time-off requests
- Instantly share schedules with all employees via email, SMS or push notifications via the Deputy app
- Allow employees to clock in and out of work from Deputy Kiosk using facial detection technology, smartphones with GPS validation or even via text message
- Review and approve employee time cards straight from your mobile phone via the Deputy app
- Instantly process payroll with any payroll service provider

This kind of automation saves managers hours on end each month, and helps business owners and finance teams have a more accurate view of cash going out and cash coming in. For example, Derek Belnap, owner of 3 Cups Café in Utah, integrates Deputy with payroll processing provider Xero to obtain a more accurate view of his café's financial performance in real time, based on sales and payroll. Using our employee scheduling software, Derek can also create employee schedules based on internal forecasts of peak business times, all via his smartphone.
Interested in learning how you can make payroll processing easier and more efficient by integrating with Deputy? Try Deputy for free today at Deputy.com or call us at 1-855-6-DEPUTY (855-633-7889).
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9506137371063232, "language": "en", "url": "https://www.nerdwallet.com/article/finance/are-we-in-a-recession", "token_count": 1122, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0299072265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e92f5368-ae93-445f-9712-6a349cf4d4ee>" }
There’s a lot of uncertainty regarding the coronavirus pandemic. But one thing is clear: It’s inflicting considerable damage. More than 100,000 people in the United States have died, nearly 2 million have fallen ill and millions have stayed home. That’s sending ripple effects throughout the economy and leading many to wonder: Are we in a recession?

We look to a committee of the National Bureau of Economic Research, a nonprofit research organization, for the declaration of a recession. It typically requires gathering several months of data to make such an announcement. The NBER made its official designation in early June: Yes, the U.S. entered a recession in February 2020.

The NBER looks for declines in employment, industrial production and other economic activities when assessing whether a recession is taking place. These factors are reflected in current trends. Let’s dive into a few:

Unemployment
The unemployment rate began to climb in March and soared to 14.7% by April, according to data from the U.S. Department of Labor. For comparison, the unemployment rate peaked at 9.5% during the recession of 2007 to 2009. While the rate has started to fall, nearly 21 million people are still unemployed.

Reduced spending and manufacturing
Physical and mental barriers are leading people to consume and produce less. Many businesses have shut down physical locations, and companies are slowing production as demand for goods and services drops. And many are choosing not to spend even when they have the option. Consumer confidence waned in March and April due to market volatility, unemployment and uncertainty about the effects of the pandemic, according to a survey from The Conference Board, a nonprofit business think tank. Consumer confidence began to stabilize in May.

A recession is a significant decline in economic activity “lasting more than a few months,” according to the NBER’s definition.
While it’s still early (shelter-in-place orders and the rise in unemployment began in March), the decline is likely to continue for at least a couple more months. The White House’s Guidelines for Opening Up America Again don’t offer a deadline. States and regions must meet specific criteria before instituting the three-phase plan to loosen restrictions, which seemingly will be a prolonged process. Even as orders begin to lift, many predict it will take time for the economy to return to a sense of normalcy. “It’s not going to come roaring back. You can’t just turn the light switch off and then turn it back on without some problems. So we’re in for a long slog,” says Ryan Sweet, senior director of economic research at Moody’s Analytics, a financial intelligence company.

How long will it last?
Historically, recessions have lasted 17 1/2 months on average, according to the NBER. But this economic climate presents unique circumstances that make it difficult to draw a direct comparison to past events. “It’s not driven by economic factors. It’s driven by a health situation. So we’ve got to treat it differently and understand it differently. Under that set of circumstances, there’s no simple way of knowing when,” says Joel Naroff, president of Naroff Economics LLC, an economic consulting firm in Holland, Pennsylvania. The economy could rebound in the second half of 2020, predicts Lynn Reaser, chief economist at Point Loma Nazarene University in San Diego. However, the recession’s impact may linger long afterward. “It’s going to take years to recoup all the jobs that were lost and recoup all the lost output,” Sweet says.

Will it lead to a depression?
The country hasn’t entered a depression since the decade-long Great Depression that began in 1929. Persistent unemployment and uneasiness are raising fears that the present-day economy could be heading down that path. Many experts say it’s a possibility, albeit an unlikely one.
“What people need to understand about the Great Depression is that it was a series of rolling recessions,” Naroff says. “The risk here is opening the economy too soon. If you open before all is clear, and there’s a reignition of the virus and you have to start shutting things down again, basically what happens is that all of the actions taken will have been lost.”

The severity of the recession also depends on consumer behavior. If people are reluctant to spend once the economy reopens, that could spell trouble for recovery, Naroff says. Whether or not this recession becomes a depression, labels won’t matter much in the end. “To many people, this could feel like a depression because unemployment is going to be high for a long period of time,” Sweet says.

How to cope
Millions are struggling to manage their finances, but they aren’t without options. “What’s really important is that people understand the policy response and how that can help them if they’ve been impacted by the coronavirus,” Sweet says. Start by looking into assistance provided by federal and state governments. For example, the Coronavirus Aid, Relief, and Economic Security Act includes expanded unemployment benefits and relief for homeowners, small businesses and student loan borrowers.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9593233466148376, "language": "en", "url": "https://2012books.lardbucket.org/books/policy-and-theory-of-international-trade/s09-04-monopolistic-competition.html", "token_count": 694, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7b529d03-8882-44f9-ace8-b99300a28be1>" }
This book is licensed under a Creative Commons by-nc-sa 3.0 license: you can share it as long as you credit the author, don’t make money from it, and make it available to everyone else under the same terms. Per the publisher’s request, its name has been removed; more information is available on the project’s attribution page.

Monopolistic competition refers to a market structure that is a cross between the two extremes of perfect competition and monopoly. The model allows for the presence of increasing returns to scale in production and for differentiated (rather than homogeneous or identical) products. However, the model retains many features of perfect competition, such as the presence of many, many firms in the industry and the idea that free entry and exit of firms in response to profit would eliminate economic profit among the firms. As a result, the model offers a somewhat more realistic depiction of many common economic markets. The model best describes markets in which numerous firms supply products that are each slightly different from that supplied by its competitors.
Examples include automobiles, toothpaste, furnaces, restaurant meals, motion pictures, romance novels, wine, beer, cheese, shaving cream, and much more. The model is especially useful in explaining the motivation for intraindustry trade: trade between countries that occurs within the same industry rather than across industries; for example, when a country both exports and imports automobiles. In other words, the model can explain why some countries export and import automobiles simultaneously. This type of trade, although frequently measured, is not readily explained in the context of the Ricardian or Heckscher-Ohlin models of trade. In those models, a country might export wine and import cheese, but it would never export and import wine at the same time.

The model demonstrates not only that intraindustry trade may arise but also that national welfare can be improved as a result of international trade. One reason for the improvement in welfare is that individual firms produce larger quantities, which, because of economies of scale in production, leads to a reduction in unit production costs. This means there is an improvement in productive efficiency. The second reason welfare improves is that consumers are able to choose from a greater variety of available products with trade as opposed to autarky.

Jeopardy Questions. As in the popular television game show, you are given an answer to a question and you must respond with the question. For example, if the answer is “a tax on imports,” then the correct question is “What is a tariff?”
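The economies-of-scale mechanism behind the welfare gain can be illustrated with a minimal average-cost calculation. The fixed cost and marginal cost figures below are hypothetical, not from the text; the point is only that spreading a fixed cost over a larger output lowers unit cost.

```python
def average_cost(quantity, fixed_cost=100.0, marginal_cost=2.0):
    """Average (unit) cost with fixed cost F and constant marginal cost c:
    AC(q) = F/q + c, which falls as q grows (increasing returns to scale)."""
    return fixed_cost / quantity + marginal_cost

# When trade lets a firm serve a larger market, unit cost drops:
for q in (10, 50, 100):
    print(q, average_cost(q))  # 12.0, then 4.0, then 3.0
```

This is the "productive efficiency" channel: the same fixed cost of 100 spread over 100 units instead of 10 cuts unit cost from 12.0 to 3.0.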
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9203619360923767, "language": "en", "url": "https://economics.stackexchange.com/questions/4608/adjusting-gdp-for-environmental-and-resource-impacts/12744", "token_count": 199, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.021240234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8bac4cec-1e8b-426a-a9d8-b59f4c32f634>" }
It is widely recognised that GDP as conventionally measured does not reflect an economy’s impact on the environment and its consumption of natural resources. There have been various attempts to develop broader measures of economic performance via adjustments to conventional GDP (some of which also address non-environmental limitations of conventional GDP), for example: - Measure of Economic Welfare (Nordhaus & Tobin 1972) - “True NNP inclusive of natural resource stock diminution” (Hartwick 1990) - Gross Sustainable Development Product (GSDP) (Global Community Assessment Centre) - Genuine Progress Indicator (GPI) (eg Anielski 2001) - Green GDP (a term whose precise content appears to be disputed (see eg Boyd 2006)) Is this just a proliferation of alternative measures, or is it possible to discern progress or convergence towards a best or most useful measure that adjusts GDP for environmental and resource impacts, or perhaps different best measures for different purposes?
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9347768425941467, "language": "en", "url": "https://morioh.com/p/a269a1a8871b", "token_count": 569, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0439453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a51fbea6-f7f5-4826-976d-fb972e1f8edf>" }
Optimizing Complex Systems. The magic of combinatorial optimization.

Efficiency has been a critical concept ever since the industrial revolution. Increasing productivity and efficiency have been core assumptions and goals of the field of economics since day one. Increasing efficiency is a complex challenge, and there are numerous ways to approach it. Typically, efficiency can be approached at a unit level or at a system level.

Most producers are part of a larger system whose interactions create additional value. For example, every beef farmer out there is part of a larger system (the food supply chain) composed of slaughterhouses, meat packers, food suppliers, truckers, supermarkets and restaurants. This entire system interacts in numerous ways to create end value in the form of goods and services.

Driving efficiency at a unit level entails better technological innovation: faster trucks, better cold storage, better animal feed and so on. While unit-level gains driven by technological advancement can be tremendous, system-level optimization can drive even greater savings and therefore efficiencies. Driving system-level efficiency requires optimization.

The systems that drive the economy, and which producers are part of, are complex systems involving interactions between multiple agents. Optimizing complex systems to find the “ideal x” involves optimizing a large set of variables, and the possibilities that result from combining these variables can be immense. There might be 1,000,000 possible routes a food supplier can take across the state, each with different costs and speeds involved.
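The routing example can be made concrete with a brute-force sketch. The depot, stops, and pairwise costs below are hypothetical; real supply-chain routing uses far more stops (and thus heuristics rather than full enumeration, since the number of orderings grows factorially), but the combinatorial structure is the same.

```python
from itertools import permutations

# Hypothetical symmetric travel costs between a depot "D" and three stops.
costs = {
    ("D", "A"): 4, ("A", "D"): 4,
    ("D", "B"): 7, ("B", "D"): 7,
    ("D", "C"): 3, ("C", "D"): 3,
    ("A", "B"): 2, ("B", "A"): 2,
    ("A", "C"): 6, ("C", "A"): 6,
    ("B", "C"): 5, ("C", "B"): 5,
}

def route_cost(stops):
    """Cost of leaving the depot, visiting stops in order, and returning."""
    path = ("D",) + tuple(stops) + ("D",)
    return sum(costs[(a, b)] for a, b in zip(path, path[1:]))

# Enumerate every ordering of the stops and keep the cheapest.
best = min(permutations(["A", "B", "C"]), key=route_cost)
print(best, route_cost(best))  # ('A', 'B', 'C') 14
```

With only three stops there are 6 orderings to check; with 10 stops there are already 3,628,800, which is why system-level optimization is a genuinely hard problem.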
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9490239024162292, "language": "en", "url": "https://www.bcg.com/publications/2016/why-the-technology-economy-matters", "token_count": 2579, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.048828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b966a801-c849-4e10-8f98-51860265b20e>" }
This is the second in a series of articles on technology economics. Despite technology’s starring role in business and everyday life, many observers openly question whether it has really had much of an impact on the global economy. Their skepticism is misplaced. As we demonstrated in the first article in this series, technology plays a vital role in boosting company performance. (See “Why Technology Matters,” BCG article, September 2016.) In short, we’ve found that companies with high technology intensity have high gross margins. (Technology intensity is a proprietary metric that analyzes technology spending relative to a company’s and an industry’s revenues and to their operating expenses.) But if technology is so important, many economists ask, why hasn’t the digital revolution generated the hoped-for increases in traditional macroeconomic metrics such as GDP and productivity? For example, annual productivity growth in the US from 2007 through 2015 hovered at a sluggish 1.3% average rate, half the rate from 2000 to 2007. The US economy experienced three consecutive quarters of falling productivity, from the fourth quarter of 2015 through the second quarter of 2016, the longest slide since the late 1970s. Critics point to technology’s failure to deliver. The failure may be one of imagination rather than of technology itself, however. As we will show in this article, declines in technology investment are followed by startling drops in macroeconomic growth. In fact, you can see that the technology economy has close relationships with GDP, productivity, and other measures of economic health—if you look closely. For years, economists have cast doubts on the importance of technology to economic growth. The apparent powerlessness of new technologies to improve productivity has become known as the Solow paradox, named after Nobel Prize–winning economist Robert Solow. “You can see the computer age everywhere but in the productivity statistics,” Solow said in 1987. 
In his book The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War, economist Robert Gordon argued that the new technologies of today are not as world changing as were, for example, electrification, cars, and wireless communications during the second Industrial Revolution. Others argue that information technology could be at a stage of development in which its potential impact has not yet revealed itself, just as early-20th-century inventions, such as electric lighting, failed to immediately lift the slow productivity growth that prevailed after their introductions. John Fernald, a leading expert on productivity at the Federal Reserve Bank of San Francisco, has determined that the recent slowdown in productivity was not connected to a host of factors, including housing, educational attainment, capital intensity, and the Great Recession that started after 2007. Technology itself was the cause. As industries reorganized after the internet explosion that began in the mid-1990s, the potential for transformative gains from technology shrank dramatically. Fernald recently asserted that, in general, measurement errors are not to blame and free internet services have a negligible impact on the economy. Erik Brynjolfsson and Andrew McAfee, codirectors of the MIT Initiative on the Digital Economy, argue more optimistically that productivity increases associated with new technologies happen only after a long period of time, when technologies become powerful and cheap enough for their truly transformative powers to kick in. Others say that we’re only just now beginning to see the transformative potential from recent innovations such as big data, artificial intelligence, advanced robotics, nanotechnology, and biotechnology. 
Robert Atkinson, of the Information Technology & Innovation Foundation, argues that over the next few decades, the US may see productivity rise to perhaps 3.0% to 3.5% per year—as much as a percentage point higher than the relatively rapid pace of 1995 through 2007—once transformative technologies such as these come into wide use. In general, we agree with a more optimistic line of reasoning about technology, but we have reached a different conclusion. We think that in many cases, traditional measures of economic growth don’t take into account important benefits of technology and are less relevant to prosperity than they were in a mass-production world. For example, GDP, an important factor in the calculation of productivity, fails to capture many technology-generated improvements in living standards. These benefits include the greater convenience and better customer experience provided by digital services and the vast amount of information—such as online maps, search results, and social media—available for free and with zero marginal distribution cost. Measurement flaws such as these could partially explain why productivity growth has been so slow over the past few decades, at least according to current metrics. Rather than seeing technology as having a marginal effect on productivity, we have found a strong relationship between technology spending and economic growth as measured by productivity and GDP. For example, executives can predict with some accuracy the impact on the overall economy of a decline in technology spending. Whenever companies cut back on discretionary spending in order to shore up profits during a downturn, they slash their investments in technology. Soon afterward, GDP falls dramatically, and, within a few years, labor productivity across the economy falls. (Remember that technological innovation is an important component of productivity.) 
The drop in technology intensity that results from a decline in technology spending causes the labor force to shrink, which shows up in productivity up to three years later because productivity is a “stickier” measure. Exhibit 1 shows the relationship between technology intensity and GDP. (A similar pattern exists for productivity.) The global economy is showing other signs of this effect, as Mary Meeker, a general partner at Kleiner Perkins Caufield & Byers, recently highlighted in her influential 2016 Internet Trends report. Global GDP growth has been lower than the 20-year average in six of the past eight years. As GDP comes under pressure, global growth in the use of technologies such as the internet and smartphones has slowed. This downward cycle reduces new opportunities for productivity and GDP growth. One likely explanation for the past decade’s slowdown in productivity, as reflected in the official statistics, could be that economists and business leaders are not tracking the metrics that make the impact of technology most evident. It could be that to see the economic lift from technology, which centers on digital information, we need to look elsewhere than the traditional economy, which centers on the physical, Industrial Age world. Measures such as the price of cloud storage, data processing rates, broadband speed, and 21st-century skill development could be more relevant. That requires a shift in thinking about how we invest in technology and how we measure its macroeconomic effects. In addition to arguing that existing metrics have failed, we maintain that the slowdown in productivity also signals a failure to reach a critical mass of technology. Despite rapid growth in spending and technology’s significant impact, the level of technology intensity at the world’s companies fell steadily from 2005 through 2015. Yes, you read that right: despite record spending on technology, technology intensity is plummeting. 
While technology expenses are rising faster than the revenues that result from those investments, operating expenses are rising even faster. (The higher ratio of technology spending to revenues in the calculation of technology intensity is offset by a disproportionately lower ratio of technology spending to operating expenses. See the first article in this series for a discussion of how technology intensity is calculated.) In effect, companies are getting less and less for their significant investment in technology. This counterintuitive trend—companies are spending more but getting worse results—is the paradoxical result of some companies’ failure to spend enough on technology. Most companies spend about 5% of revenues on technology, which is not a staggering amount relative to other important expenses. In fact, the pendulum is once again swinging back toward a major slowdown in technology spending. Major banks, normally heavy spenders on IT, have announced 25% to 30% cuts in technology expenditures. IDC forecasts that global spending on IT is set to grow by only 2%, after growth of 5% to 6% over the past five years. The situation is akin to delivering a vaccine that works for only 10% of the population because the company didn’t test enough variations of the vaccine. The company saved money, but that choice had negative repercussions. What if, when things weren’t going well during the Industrial Revolution, companies had cut down on machines, automation, waterways, or electricity? Today, many companies are cutting back on a critical investment that could power the next wave of growth. In many cases, that investment could create huge leverage—lowering other expenses through automation, for example—much more quickly than technology spending rises. But that can happen only if companies manage their technology spending well. (We will explore this topic in greater detail in the next article in the series.) 
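The paradox described above can be illustrated numerically. BCG's actual technology-intensity metric is proprietary; the function below is only an illustrative proxy that averages the two ratios the article mentions (tech spend over revenue, and tech spend over operating expenses), with all figures hypothetical.

```python
def technology_intensity(tech_spend, revenue, opex):
    """Illustrative proxy for technology intensity: the average of tech
    spend as a share of revenue and as a share of operating expenses.
    Not the proprietary BCG formula, just the two ratios it combines."""
    return 0.5 * (tech_spend / revenue + tech_spend / opex)

# Hypothetical firm, year over year: tech spend grows 15%, revenue 10%,
# but operating expenses grow 20%, faster than both.
year1 = technology_intensity(tech_spend=5.00, revenue=100.0, opex=80.0)
year2 = technology_intensity(tech_spend=5.75, revenue=110.0, opex=96.0)
print(year1 > year2)  # True: intensity falls even though spending rose
```

This mirrors the article's point: record spending coexists with falling intensity whenever operating expenses outpace the returns on technology investment.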
If we’re looking in the wrong places and, paradoxically, not spending enough on technology, how can we gain a better understanding of the technology economy? We maintain that businesses can learn to think about economic growth in new ways, as well as develop new macroeconomic measures that highlight the impact of the technology economy. A more nuanced way to think about productivity involves focusing on technology’s ability to increase reach and generate leverage. For instance, the internet enables companies to reach millions of potential customers, magnifying the results of their investments. Social networking services such as Twitter and Facebook change the productivity of reach: the incremental cost of reaching 3 million instead of 1 million people is zero. In addition, automation allows companies to replace labor-intensive manual processes with algorithms. One day, executives will be able to measure the rise in productivity resulting from innovations such as self-driving vehicles and nanorobots. In the more immediate arena of IT-enabled health care, we are already starting to measure technology’s contribution to health care productivity. Better-trained physicians are able to make diagnoses more quickly and accurately. In other words, they are increasing their labor productivity, and this improvement—even if it doesn’t currently show up in official productivity statistics—leads to better health outcomes. For example, thanks in part to recent efforts to increase health care efficiency and institute value-based health care, inflation-adjusted Medicare spending per beneficiary has declined over the past few years, after years of rapid increases. To keep bending the cost curve downward and thereby improving the productivity of health care overall, we need to take a fresh look at increasing efficiency. 
The Dell Medical School at the University of Texas at Austin is at the forefront of such efforts to improve health care productivity and outcomes: its curriculum aims at training doctors to navigate a collaborative, data-driven, results-oriented world. Another area of productivity-related inquiry focuses on metrics of global labor costs. Thanks to the globalization of manufacturing and many other industries, companies have circled the globe looking for rapidly developing economies with low average wages. Now they are discovering that in terms of output per dollar of wages and other new measures, these low-wage countries may not have an advantage over a high-wage country with high levels of automation. (See The Shifting Economics of Global Manufacturing, BCG report, August 2014.) Among other factors that affect output, the economic impact of each dollar in wages could be far greater owing to technology. As a result, output per dollar of wages is a much more revealing metric for decision makers than a country’s average wages. In addition to productivity, executives need to watch a macroeconomic measure that shows “flows” in the technology economy: the technology balance of trade, or the technology services exported per dollar imported. India, for example, exports $8.86 in technology services per dollar imported, while the US exports only $0.84 in technology services per dollar imported. (See Exhibit 2.) Understanding flows such as these helps companies identify promising markets and do a better job of predicting economic growth in the technology economy. Ultimately, these new ways of thinking about and measuring economic growth point to the need for new ways to discern whether companies are successfully navigating the technology economy. The first article in this series describes critical company-level metrics that measure the state of the digital world. 
Executives will also need to create, measure, and track virtual macroeconomic measures, and do that just as carefully as they work with metrics about the physical world. And they must adapt to changes in these indicators in near-real time. But to truly succeed, senior leaders must understand where they stand in relation to competitors, and act on that knowledge. In the next article in this series, “How to Reach the Technology Economics Frontier,” we explore a new way to gauge success in the technology economy.
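The technology balance of trade introduced above is a simple ratio: technology services exported per dollar imported. The absolute flow figures below are hypothetical, chosen only to reproduce the two ratios the article cites ($8.86 for India, $0.84 for the US); real trade totals differ.

```python
def tech_balance_of_trade(exports_usd, imports_usd):
    """Technology services exported per dollar imported (unitless ratio)."""
    return exports_usd / imports_usd

# Hypothetical flows in $ billions, scaled to match the cited ratios.
flows_bn = {"India": (88.6, 10.0), "United States": (42.0, 50.0)}
ratios = {country: round(tech_balance_of_trade(exp, imp), 2)
          for country, (exp, imp) in flows_bn.items()}
print(ratios)  # {'India': 8.86, 'United States': 0.84}
```

A ratio above 1 marks a net exporter of technology services; tracking how it moves over time is one way to watch "flows" in the technology economy rather than stocks.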
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9512273669242859, "language": "en", "url": "https://www.cgiar.org/news-events/news/covid-19-and-resilience-innovations-in-food-supply-chains/", "token_count": 1121, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b9e9d8db-44d9-4ff7-9c2b-a5ef52780b86>" }
COVID-19 and resilience innovations in food supply chains

Food supply disruptions in developing countries during the COVID-19 pandemic have been varied and often severe, especially in labor-intensive segments of supply chains. Tom Reardon of Michigan State University and IFPRI’s Jo Swinnen summarize early experiences in both international and domestic supply chains across various types of firms and commodities. They review a range of innovations developed to keep supply chains running, many implemented at a surprisingly rapid pace, and make recommendations on how to facilitate continued innovation to speed the recovery and ensure better food supplies post-pandemic. —John McDermott, series co-editor and Director, CGIAR Research Program on Agriculture for Nutrition and Health (A4NH)
Within domestic chains, it is useful to distinguish between those relying on small and medium enterprises (SMEs) in logistics, trade, processing, and retailing; and those dominated by large-scale enterprises, including fast food chains, supermarkets, large processors, and big logistics firms. While there are obviously important differences across commodities and countries, available data suggest that domestic supply chains, especially those dominated by SMEs, are by far the most important for supplying food to consumers in developing countries. Rough estimates suggest that, on average for South Asia and Africa south of the Sahara, domestic chains account for between 75% and 90% of food consumed, of which the vast majority comes through SME- dominated chains and up to 20% through large scale enterprises. Global chains account roughly for 15% to 20% of food consumption in these regions, with a positive correlation between GDP and their share.1 Pandemic-related disruptions in supply chains are concentrated in their labor-intensive segments. In general, supply chains in rich countries have been more resilient because they are more capital- and knowledge-intensive. Notable exceptions are harvesting that depends on migrant labor; labor-dense processing such as in meat processing in the United States; and obviously restaurants and other food service sector firms. Still, there are important differences among FSCs in developing countries. Global FSCs have been more resilient because trade is mostly undertaken by large enterprises in coordinated and capital-intensive supply chains that can mostly adjust to disruptions geographically and temporally, and somewhat in product composition. 
While there is much concern about COVID-19 affecting trade in perishables, most extra-regional trade is organized through large capital-intensive firms.2 These large trading companies can reduce risk and adjust to shocks as they are more flexible in switching global sourcing and destination regions and in diversifying and shifting stocks to manage risk—as they already do to manage risks from climate shocks (Reardon and Zilberman 2018). Within domestic FSCs, COVID-19 and lockdowns have mixed effects. Large-scale companies are generally less labor intensive but rely more on hired labor (affected especially by lockdowns), while SMEs are more labor intensive, but use more family labor. Wholesaling and logistics operations, such as third party (3PLS) logistics firms in trucking and transport, which are very important for food transport in Africa south of the Sahara, are disrupted by mobility restrictions and wholesale market restrictions. These also affect farm input distribution in rural areas. These differences matter for processing, trade, and logistics, and also apply to the farm sector. Larger mechanized farms are less affected by pandemic restrictions, but those that depend on hired labor have felt an impact. Hired farm labor is relatively rare in Africa south of the Sahara, except for labor-intensive poultry and horticulture operations, compared to India, for example, where farms depend much more on hired labor (Reardon et al. 2020a). Supermarkets and large processors in developing countries depend largely on SME wholesalers, but the largest companies—such as Future Group, a leading supermarket chain in India—tend to have their own logistics and procurement units. This allows them more control and coordination to maximize their sourcing in the face of constraints. SMEs have to take what they can get. 
This blog post is part of a special series of analyses on the impacts of the COVID-19 pandemic on national and global food and nutrition security, poverty, and development. The blog series is edited by IFPRI director general Johan Swinnen and A4NH director John McDermott. See the full series here. Photo Credit: Minette Rimando/ILO
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9634454846382141, "language": "en", "url": "https://www.greenoptimistic.com/greenhouse-gases-ghg-vs-gdp-gross-domestic-product-20121009/", "token_count": 449, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07470703125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:32be2d55-afda-4f2b-b2e9-d4a12d51041c>" }
According to a recent study, greenhouse gas [GHG] emissions increase as the economy rises, but don't seem to fall as drastically when the economy falters. Richard York, of the University of Oregon, reviewed 48 years of data from more than 150 nations, statistics kept by the World Bank. According to York, "Economic decline … doesn't lead to as big a decline in emissions as a comparable amount of economic growth leads to growth in emissions." He found that for every 1% increase in gross domestic product [GDP], carbon dioxide [CO2] emissions, the main GHG, rose by an average of 0.73%. For every 1% decrease in GDP, though, there was less of a corresponding decrease in CO2 emissions, just 0.43%. These statistics are important to forecasters, who are helping world leaders come to some accord on how to attack the global warming phenomenon, since most studies simply assume that GHG emissions and GDP move up and down proportionally. "The difference might be because new infrastructure added during times of economic growth – new homes, roads or factories – is still used during recession. When economies decline, factories don't shut down immediately, people don't stop driving (although they may defer buying a new car)," said York, "and many new buildings still need heating or air-conditioning." Scientists agree that CO2 and other GHGs are among the main causes of global warming. The UN panel of climate scientists says that GHG-driven rises in global temperatures will lead to more extreme weather patterns, including floods, droughts, heatwaves, stronger storms, and rising sea levels. Current climate-change scenarios predict that the world economy could expand to as much as $550 million by 2100, which, according to studies, would raise global temperatures by as much as 11.5°F. Just how to effect a change, though, has been somewhat difficult. Nearly 200 nations, at the Copenhagen summit in 2009, failed to reach an agreement on how to move forward.
They are hoping to reach a global pact by 2015, which could take effect as early as 2020.
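York's asymmetric figures imply a ratchet: emissions gained during booms are not fully given back during busts. A minimal sketch, assuming the 0.73% and 0.43% responses apply multiplicatively over a one-percent boom-bust cycle (illustrative only, not York's model):

```python
# Toy illustration (not York's model): apply the asymmetric
# responses to a one-percent boom followed by a one-percent bust.
GROWTH_RESPONSE = 0.73   # % change in CO2 emissions per 1% GDP growth
DECLINE_RESPONSE = 0.43  # % change in CO2 emissions per 1% GDP decline

emissions = 100.0                        # arbitrary index value
emissions *= 1 + GROWTH_RESPONSE / 100   # GDP grows 1%
emissions *= 1 - DECLINE_RESPONSE / 100  # GDP falls 1%

net_change = emissions - 100.0           # ends roughly 0.3% above the start
```

Even though GDP ends the cycle where it began, the emissions index does not return to its starting level, which is the asymmetry forecasters would miss by assuming proportional movement.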
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9684052467346191, "language": "en", "url": "https://www.jrf.org.uk/data/workers-poverty", "token_count": 192, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.294921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c28a5ea8-edd9-4c3a-9e0d-fbada3a9f0ab>" }
Link to source: Households Below Average Income

The total number of workers in poverty has gone up over the last 20 years, from 2.3 million workers in 1996/97 to 4 million workers in 2017/18. Of these 4 million workers in poverty, 1.9 million are full-time employees, 1.4 million are part-time workers and 0.7 million are full-time self-employed workers. Just under half of workers in poverty in 2017/18 are full-time employees. Despite improvements in pay for those on the lowest wages, low pay remains endemic in the UK's economy. Once in a low-paid job, it is difficult for many workers to move to a better-paid one. Poverty and low pay do not always go together – the vast majority of low-paid workers live in households where the income of the people they live with (such as a partner or parents) means they are not in poverty.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.887576699256897, "language": "en", "url": "https://www.sintef.no/en/software/emps-multi-area-power-market-simulator/", "token_count": 488, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.03173828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ee3b4f1f-15a2-4dd7-95bb-836a667750a0>" }
The objective is to minimize the expected cost in the whole system subject to all constraints. In principle, this solution will coincide with the outcome in a well-functioning electricity market. The simulated system can e.g. be the Nordic system or Northern Europe. The basic time step in the EMPS model is one week, with a horizon of up to ten years. Within each week, the time resolution is 1 hour or longer. In the strategy evaluation, incremental water values (marginal costs for hydropower) are computed for each area using stochastic dynamic programming. A heuristic approach is used to treat the interaction between areas. In the simulation part of the model, total system costs are minimized week by week for each climate scenario (e.g. 1931–2012) in a linear problem formulation.

Hydropower: Each area in the model is an EOPS module. It is therefore possible to include a detailed representation of hydropower. In the simulation part, total hydropower production for each area is calculated. Thereafter, a rule-based reservoir drawdown model distributes production among all available plants within each area.

Other generation: Thermal power plants can be described individually by capacity, marginal cost (or fuel type and efficiency), and start-up costs (optional). Plant outages may be modelled by an Expected Incremental Cost method. Wind power and solar power have zero costs and stochastic generation.

Transmission: A capacity and an availability are specified for each controllable transport channel. Detailed power flow can also be applied, cf. Samlast/Samnett.

Consumption: For each area, demand can be specified by annual levels, a within-year weekly profile, and a within-week hourly profile. During simulation, the demand is affected by prices and temperatures.
Some tasks the EMPS model may perform:
- Forecasting of electricity prices and reservoir operation
- Long-term operational scheduling of hydropower
- Maintenance planning (transmission or production)
- Calculation of energy balances (supply, consumption and trade)
- Utilization of transmission lines and cables
- Analysis of overflow losses and the probability of curtailment
- Analysis of the interplay between intermittent generation, hydropower and thermal power
- Investment analysis; system development studies
- Calculation of CO2 emissions from power generation
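EMPS itself combines stochastic dynamic programming with a weekly linear program, but the core dispatch idea (serve demand from the cheapest available resources, with hydro valued at its water value) can be illustrated with a toy merit-order sketch. The plant names and numbers below are invented for illustration, not taken from the model:

```python
# Toy merit-order dispatch -- an illustration of weekly cost
# minimization, not the actual EMPS algorithm.
def dispatch(demand_mwh, plants):
    """plants: list of (name, capacity_mwh, marginal_cost) tuples."""
    schedule, cost, remaining = {}, 0.0, demand_mwh
    for name, cap, mc in sorted(plants, key=lambda p: p[2]):
        q = min(cap, remaining)       # dispatch the cheapest plant first
        schedule[name] = q
        cost += q * mc
        remaining -= q
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("curtailment: demand exceeds available capacity")
    return schedule, cost

# Invented example: hydro carries the lowest marginal cost (its water
# value), so it runs first; the expensive gas unit stays idle.
plants = [("hydro", 500, 12.0), ("gas", 300, 35.0), ("coal", 400, 25.0)]
schedule, cost = dispatch(900, plants)
```

In the real model this minimization is an LP solved per week and per climate scenario, with transmission and reservoir constraints coupling the areas.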
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9520505666732788, "language": "en", "url": "https://www.ukessays.com/essays/accounting/lean-accounting-lean-manufacturing-6643.php", "token_count": 2821, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.08984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9541b9fe-bb99-418a-bd19-c2cb4e2d25c0>" }
“The most noteworthy evolution of lean accounting in recent years has been a sharpening focus on value. Lean has always been centered on creating value for customers and eliminating non-value adding waste” (Asefeso, p 9). Lean accounting has been steadily making it possible for manufacturers to explicitly measure value in financial terms and to focus improvement efforts on increasing value. With many manufacturers now implementing lean, it becomes essential to discover what part lean accounting has played in the changes made. This paper will give a brief background of lean manufacturing and a general overview of what lean accounting is. I will also explore some problems and disadvantages of lean accounting drawn from various published articles.

Background of Lean Manufacturing

Lean is a philosophy that grew out of the Toyota Production System (TPS). TPS was created by Toyota's founder Sakichi Toyoda, Kiichiro Toyoda, and Taiichi Ohno. Much of TPS was also influenced by W. Edwards Deming's statistical process control (SPC) and Henry Ford's mass production lines. However, the Japanese were not impressed with Ford's approach because it was filled with over-production, large inventories, and much waiting. Toyota identified these weaknesses in Ford's production line and adapted it to create a more productive and reliable process. TPS and lean also use just-in-time inventory, in which only small amounts of material are ordered and very little inventory is left waiting in the production line. This is very different from Ford's production line, which usually bought high volumes of materials and carried high inventory levels to lower costs. After TPS proved to be successful for Toyota, many companies adapted their production lines to incorporate lean principles.
Lean management was first introduced in the United States in the early 1980s after a global study of the performance of automotive assembly plants. Essentially, the primary principle of lean is that it is a tool used in manufacturing to eliminate waste, improve quality, and reduce cost. Waste is eliminated by identifying non-value-added activity. The main objective is to supply perfect value to the customer through a perfect value product that has no waste. “Eliminating waste along entire value streams, instead of at isolated points, creates processes that need less human effort, less space, less capital, and less time to make products and services at far less costs and with much fewer defects, compared with traditional business systems” (“What is Lean?”). Companies may face certain challenges when applying lean to their production lines. First, lean should be applied to companies that have production lines that are routine, predictable, stable, and can be flow charted. Second, lean implementation may take years and can be very costly in large companies. Depending on how integrated the systems and how disciplined the production line is, it is quite possible that a lean implementation may fail. “There are several key lean manufacturing principles that need to be understood in order to implement lean. Failure to understand and apply these principles will most likely result in failure or a lack of commitment from everyone” (“Key Lean Manufacturing”). These principles are as follows: “1. Elimination of waste; 2. Continuous improvement; 3. Respect for humanity; 4. Levelized production; 5. Just-in-time production; and 6. Quality built-in” (“Key Lean Manufacturing”). Management may also be discouraged from adopting lean manufacturing right away because the lean implementation is a long-term investment.
Most CEOs make decisions that benefit the company in the short run, and may choose not to adopt lean because it may show unfavorable results on the financial statement during the early stages. Lean will cause a decrease in inventory levels, causing assets on the balance sheet to drop, which is not always favorable. However, these short-term negative results will eventually become long-run gains as the company benefits from lower inventory holding costs and improved processes.

Background of Lean Accounting

While most people associate lean with manufacturing processes, it is now increasingly important for companies to adopt lean throughout their other departments. An example of a support function that uses the lean concept is the accounting field. Since accounting is a support department, it should apply lean principles after the manufacturing department has incorporated lean. Accounting's main duty is to accurately measure and communicate financial activity, and adopting lean accounting after successfully implementing lean manufacturing allows for the accurate measurement of the new production system. “Lean accounting evolved from a concern that traditional accounting practices were inadequate and, in fact, a deterrent to the adoption of some of the necessary improvements to manufacturing operations. While manufacturing managers knew that investments in automation and the adoption of lean manufacturing practices were the right things to do, traditional accounting was often an obstacle to such improvements, yielding numbers that only supported investments when they could be justified by reductions in direct labor, with little benefit ascribed to any improvements to quality, flexibility or company throughput” (Asefeso, p 10). Lean accounting is the cornerstone of a completely different model of manufacturing management. By itself, lean accounting has limited value, but as the financial basis for the application of logistics, superior management, factory operations, marketing, pricing, and other vital business functions, lean accounting is very powerful. “A core principle of lean accounting is that the value stream is the only appropriate cost collection entity within the organization, as opposed to traditional accounting's use of cells, cost or profit centers or departments normally based on smaller, functional groupings of work activity” (Asefeso, p 12). The main idea behind lean is minimizing waste, thereby creating more value for customers with fewer resources.

Problems and Disadvantages of Lean Accounting

Lean accounting may reduce the manufacturing process to a few numbers, but it does not provide a lot of information. There are several flaws in using the lean accounting approach. “Speed gives you an advantage over the competition. No matter if you are first in a market or deliver a product faster, it will improve your competitiveness and hence your revenue. However, it is nearly impossible to determine this advantage quantitatively. How much does it get you to be in the market seven days earlier? One big thing in lean manufacturing is to reduce fluctuations. The more even your system works, the more profitable you will be. However, it is difficult to measure these fluctuations, even more difficult to determine the impact of an improvement on fluctuations, and hence nearly impossible to calculate the monetary benefit of reducing fluctuations. Yet another thing in lean is customer satisfaction, often described as value to the customer. What is the monetary damage if a delivery is delayed, if a product breaks, if service is slow, or if your people are unfriendly? It is nearly impossible to know. Even more difficult to determine is how improvement measures will actually influence the above.
How much does it cost you to provide a better service, how will this influence customer satisfaction, and what is your benefit from this?” (“The Problems of”). Using lean accounting can also lead to bad decisions such as where to put the money when profits are maximized and where to take the money out that has been saved. There are also several disadvantages of using lean accounting. “One disadvantage of lean accounting is that it requires a top-down, sometimes monumental cultural shift. Most manufacturing companies have cost accounting systems in place that measure production improvements in terms of short and medium-term cost reductions. However, lean accounting focuses on freeing up resources to increase the product or product line’s value to customers and make more money. Senior management must therefore change their thinking from one focused on the bottom line to one focused somewhere between revenues and profits. Without management’s full commitment, full implementation of an effective lean accounting system will stall” (Wright). “Accounting systems traditionally generate internal reports that owners and management – both senior and departmental – review and discuss. Lean accounting aims to translate the information into numbers that task-based employees in various departments can use. These accounting systems focus on compiling cost-based data. Since lean accounting focuses on value creation, companies often need to completely overhaul their accounting systems, collection and measurement procedures, controls and software. Any system overhaul can be daunting, but the scope of an accounting system overhaul can be particularly exhaustive” (Wright). “Lean accounting focuses on increasing revenues and profits by increasing the value of a company’s products and services. When lean accounting systems focus on value stream instead of cost, they may inadvertently omit costs or ignore issues related to specific costs. 
Until a company fully captures a product or product line's value stream, accountants may not be able to appropriately price products or determine each product's individual level of profitability” (Wright). “Effective lean thinking and lean accounting require input and involvement by all employees. Many employees in a traditional manufacturing or distribution environment are reactive, following the orders given them. Companies must therefore invest in training, developing and empowering all their employees to help them become proactive. This can be expensive and time consuming” (Wright). “Unless the accountants understand the way that lean works, in the worst case it seems to them that lean produces losses, not efficiencies. In a typical case, they cannot see the cost advantages. Those who were fighting to introduce lean into their companies reported over and over again that finding a way to reconcile accounting the way lean does it and standard cost accounting was proving to be much harder than it should be” (Woods). “Lean practitioners think of accounting in cash terms. Lean is against creating data and reports for their own sake. That would be considered another form of waste. In general, lean advocates have a jaundiced view of enterprise software and any general-purpose automation tools. The lean approach measures how well your value stream is working” (Woods). The difference between lean accounting and standard cost accounting can be explained in a simple weight loss analogy. “When dieting, standard cost accounting would advise you to weigh yourself once a week to see if you're losing weight. Lean accounting would measure your calorie intake and your exercise and then attempt to adjust them until you achieve the desired outcome. While this analogy is oversimplified, it does get to the core difference between lean and standard cost accounting. Lean accounting attempts to find measures that predict success. Standard cost accounting measures results after the fact” (Woods). “But even when the accounting types and the lean practitioners start to understand each other, problems remain. How can we reconcile the kind of data collection and accounting that lean demands and the standard cost accounting? Duplicated data collection and reporting is indeed a form of waste” (Woods). “While lean accounting is still a work-in-process, there is now an agreed body of knowledge that is becoming the standard approach to accounting, control, and measurement. These principles, practices, and tools of lean accounting have been implemented in a wide range of companies at various stages on the journey to lean transformation. These methods can be readily adjusted to meet your company's specific needs and they rigorously maintain adherence to GAAP and external reporting requirements and regulations. Lean accounting is itself lean, low-waste, and visual, and frees up finance and accounting people's time so they can become actively involved in lean change instead of being merely “bean counters.” Companies using lean accounting have better information for decision-making, have simple and timely reports that are clearly understood by everyone in the company, they understand the true financial impact of lean changes, they focus the business around the value created for the customers, and lean accounting actively drives the lean transformation. This helps the company to grow, to add more value for the customers, and to increase cash flow and value for the stockholders and owners” (Maskell and Baggaley, p 43). Asefeso, Ade. Lean Accounting, Second Edition. AA Global Sourcing Ltd, 2014. p 9, p10 and p12. “Key Lean Manufacturing Principles”. www.lean-manufacturing-junction.com. Accessed February 25, 2017. Maskell, Brian H.
and Baggaley, Bruce L. “Lean Accounting: What's It All About?”. Target Magazine. Association for Manufacturing Excellence, 2006. p 43. www.aicpa.org. Accessed February 25, 2017. “The Problems of Cost Accounting with Lean”. www.allaboutlean.com. Accessed February 27, 2017. “What is Lean?”. www.lean.org. Accessed February 25, 2017. Woods, Dan. “Lean Accounting's Fat Problem”. Published July 28, 2009. www.forbes.com. Accessed March 1, 2017. Wright, Tiffany C. “The Disadvantages of Lean Accounting”. www.smallbusiness.chron.com. Accessed March 1, 2017.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9502332210540771, "language": "en", "url": "https://yourbusiness.azcentral.com/figure-profit-margin-manufactured-product-1464.html", "token_count": 260, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.01275634765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ca39a754-da18-4123-a0dd-dcd8c8b35126>" }
When you're manufacturing a product, it's important to know how much money you are generating from it. One metric you can use is profit margin: the difference between your net sales and the cost of those sales. It can be represented as a dollar value or, as is often the case, as a percentage of sales.

1. Add up the net sales for the product, which would be the sum of total sales less discounts and returns.
2. Add up the total cost of your sales. This includes the cost of manufacturing the product, including raw materials, labor and depreciation on equipment; selling costs, including salaries, sales materials and marketing; and any other costs you incurred to make and sell the product.
3. Subtract your cost of sales from your net sales. This will give you your profit margin as a dollar value. For example, if the product has $70,000 in net sales and $30,000 in cost of sales, you would have a margin of $40,000.
4. Divide the dollar value of your profit margin by your net sales. Using the above example, you would divide the margin of $40,000 by net sales of $70,000 to arrive at a profit margin of about 57 percent.
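The steps above reduce to two lines of arithmetic; a quick sketch using the article's own numbers:

```python
def profit_margin(net_sales, cost_of_sales):
    """Return (dollar margin, margin as a percentage of net sales)."""
    margin = net_sales - cost_of_sales
    return margin, 100.0 * margin / net_sales

# The article's example: $70,000 net sales, $30,000 cost of sales.
margin, pct = profit_margin(70_000, 30_000)   # $40,000 and about 57%
```

Note that the exact percentage is 40,000 / 70,000 ≈ 57.1%, which the article rounds to 57 percent.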
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9115434885025024, "language": "en", "url": "https://academiaservices.net/21147753987/", "token_count": 501, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06591796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3cdb1683-3b54-4aa9-9845-d77406355a78>" }
Consider the market for A and B. The demand curve in market A is P_A = 64 − Q_A and in market B it is P_B = 56 − Q_B. The firm in market A is a regulated monopolist. It has the choice of two technologies. The cost function for technology 1 is C_1 = 720 + 16·Q_A + 0.5·(Q_B)^2; the marginal cost of B for this technology is MC_B1 = Q_B. The cost function for technology 2 is C_2 = 120 + 20·Q_A + 14·(Q_B)^2; the marginal cost of B for this technology is MC_B2 = 28·Q_B. Competitive supply in market B is perfectly elastic at P_B = 28.

What are the efficient prices for each technology? Which technology should be used?

Suppose that the regulator only regulates the price of A. It does so by setting P_A = AC_A, where AC_A is average fully distributed cost. The regulator has decided that the appropriate division of common fixed costs is to allocate them equally between markets A and B. Suppose the regulated firm can choose its technology. What technology does the regulated firm choose? Why? Is its choice efficient?
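As a hedged illustration of the first question only (not the full locked solution), efficient pricing sets P = MC in market A, and the firm in market B produces up to the point where its own marginal cost reaches the competitive price of 28:

```python
# Hedged sketch: marginal-cost ("efficient") pricing, evaluated numerically.
def efficient_a(mc_a, demand_intercept=64):
    """Market A: demand is P_A = 64 - Q_A, so P = MC gives Q = 64 - MC."""
    price = mc_a
    quantity = demand_intercept - price
    return price, quantity

def firm_output_b(mc_slope, competitive_price=28):
    """Market B: the firm supplies until MC_B = mc_slope * Q_B
    reaches the perfectly elastic competitive price of 28."""
    return competitive_price / mc_slope

p1, q1 = efficient_a(16)     # technology 1: MC_A = 16 -> (16, 48)
p2, q2 = efficient_a(20)     # technology 2: MC_A = 20 -> (20, 44)
qb1 = firm_output_b(1)       # MC_B1 = Q_B     -> firm supplies 28 units
qb2 = firm_output_b(28)      # MC_B2 = 28*Q_B  -> firm supplies 1 unit
```

Choosing between the technologies still requires comparing total costs (including the fixed costs of 720 and 120) at these quantities, which is left to the full answer.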
New orders are original solutions and precise to your writing instruction requirements. Place a New Order using the button below. WE GUARANTEE, THAT YOUR PAPER WILL BE WRITTEN FROM SCRATCH AND WITHIN A DEADLINE.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9049106240272522, "language": "en", "url": "https://brainmass.com/statistics/regression-analysis/33940", "token_count": 2407, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0908203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:25f960ec-9a38-4260-80cc-4bd89bc4e394>" }
Q1. The bad debt ratio for a financial institution is defined to be the dollar value of loans defaulted divided by the total dollar value of all loans made. Suppose a random sample of seven Ohio banks is selected and that the bad debt ratios (written as percentages) for these banks are 7 percent, 4 percent, 6 percent, 7 percent, 5 percent, 4 percent, and 9 percent. Assuming the bad debt ratios are approximately normally distributed, the MINITAB output of a 95 percent confidence interval for the mean bad debt ratio of all Ohio banks is as follows:

Variable   N   Mean    StDev   SE Mean   95.0% CI
d-ratio    7   6.000   1.826   0.690     (4.311, 7.689)

a. Using the sample mean and standard deviation on the MINITAB output, verify the calculation of the 95 percent confidence interval.
b. Calculate a 99 percent confidence interval for the mean bad debt ratio.
c. Banking officials claim the mean bad debt ratio for all banks in the Midwest region is 3.5 percent and that the mean bad debt ratio for Ohio banks is higher. Using the 95 percent confidence interval, can we be 95 percent confident that this claim is true? Using the 99 percent confidence interval, can we be 99 percent confident that this claim is true? Explain.

Q2. A production supervisor at a major chemical company wishes to determine whether a new catalyst, catalyst XA-100, increases the mean hourly yield of a chemical process beyond the current mean hourly yield, which is known to be roughly equal to, but no more than, 750 pounds per hour. To test the new catalyst, five trial runs using catalyst XA-100 are made. The resulting yields for the trial runs (in pounds per hour) are 801, 814, 784, 836, and 820. Assuming that all factors affecting yields of the process have been held as constant as possible during the test runs, it is reasonable to regard the five yields obtained using the new catalyst as a random sample from the population of all possible yields that would be obtained by using the new catalyst.
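Q1(a) above can be verified directly; the sketch below takes t(0.025, df = 6) ≈ 2.447 from a t table:

```python
import math

# Verify the 95% CI reported in Q1's MINITAB output.
ratios = [7, 4, 6, 7, 5, 4, 9]
n = len(ratios)
mean = sum(ratios) / n                                          # 6.000
sd = math.sqrt(sum((x - mean) ** 2 for x in ratios) / (n - 1))  # 1.826
se = sd / math.sqrt(n)                                          # 0.690
t_crit = 2.447                   # t(0.025, df = 6), from a t table
lo, hi = mean - t_crit * se, mean + t_crit * se                 # (4.311, 7.689)
```

The endpoints match the MINITAB output to three decimals; part (b) repeats the same calculation with t(0.005, df = 6) in place of 2.447.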
Furthermore, we will assume that this population is approximately normally distributed.

a. Using the Excel descriptive statistics output given below, find a 95 percent confidence interval for the mean of all possible yields obtained using catalyst XA-100.
b. Based on the confidence interval, can we be 95 percent confident that the mean yield using catalyst XA-100 exceeds 750 pounds per hour? Explain.

Mean                       811
Standard Error             8.786353
Standard Deviation         19.64688
Sample Variance            386
Confidence Level (95.0%)   24.39488

Part X: For each of the following situations, indicate whether an error has occurred and, if so, indicate what kind of error (Type I or Type II) has occurred.
a. We do not reject H0 and H0 is true.
b. We reject H0 and H0 is true.
c. We do not reject H0 and H0 is false.
d. We reject H0 and H0 is false.

Part Y: What is the level of significance alpha? Specifically, state what you understand by an alpha value of 0.05 and how it is related to a Type I error.

Q4. Consolidated Power, a large electric power utility, has just built a modern nuclear power plant. This plant discharges waste water that is allowed to flow into the Atlantic Ocean. The Environmental Protection Agency (EPA) has ordered that the waste water may not be excessively warm so that thermal pollution of the marine environment near the plant can be avoided. Because of this order, the waste water is allowed to cool in specially constructed ponds and is then released into the ocean. This cooling system works properly if the mean temperature of waste water discharged is 60°F or cooler. Consolidated Power is required to monitor the temperature of the waste water. A sample of 100 temperature readings will be obtained each day, and if the sample results cast a substantial amount of doubt on the hypothesis that the cooling system is working properly (the mean temperature of waste water discharged is 60°F or cooler), then the plant must be shut down and appropriate actions must be taken to correct the problem.
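Q2's Excel figures can be reproduced in a few lines, assuming t(0.025, df = 4) ≈ 2.776 from a t table:

```python
import math
import statistics

# Reproduce the Excel descriptive statistics for Q2.
yields = [801, 814, 784, 836, 820]
mean = statistics.mean(yields)            # 811
sd = statistics.stdev(yields)             # 19.64688
se = sd / math.sqrt(len(yields))          # 8.786353
margin = 2.776 * se                       # ~24.39, the "Confidence Level (95.0%)"
lo, hi = mean - margin, mean + margin     # roughly (786.6, 835.4)
```

Since the lower endpoint (about 786.6) is well above 750, the interval supports the claim in part (b) that the mean yield with catalyst XA-100 exceeds 750 pounds per hour.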
a. Consolidated Power wishes to set up a hypothesis test so that the power plant will be shut down when the null hypothesis is rejected. Set up the null and alternative hypotheses that should be used.
b. In the context of this situation, interpret making a Type I error; interpret making a Type II error.
c. Suppose Consolidated Power decides to use a level of significance alpha = 0.05, and suppose a random sample of 100 temperature readings is obtained. For each of the following sample results, determine whether the power plant should be shut down and the cooling system repaired:
   1. Sample Mean = 60.482 and Sample Standard Deviation = 2
   2. Sample Mean = 60.262 and Sample Standard Deviation = 2
   3. Sample Mean = 60.618 and Sample Standard Deviation = 2
You should show the 5-step STOH for each sample result.

Q5. Advertising research indicates that when a television program is involving (such as the 2002 Super Bowl between the St. Louis Rams and New England Patriots, which was very exciting), individuals exposed to commercials tend to have difficulty recalling the names of the products advertised. Therefore, in order for companies to make the best use of their advertising dollars, it is important to show their most original and memorable commercials during involving programs. In an article in the Journal of Advertising Research, Soldow and Principe (1981) studied the effect of program content on the response to commercials. Program content, the factor studied, has three levels: more involving programs, less involving programs, and no program (that is, commercials only). These levels are the treatments. To compare these treatments, Soldow and Principe employed a completely randomized experimental design.
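For Q4(c) above, with n = 100 the large-sample test statistic is z = (x̄ − 60)/(s/√n), compared against the one-sided critical value z(0.05) = 1.645; a sketch:

```python
import math

# Q4(c): large-sample z test of H0: mu <= 60 vs Ha: mu > 60.
def z_stat(xbar, s=2.0, n=100, mu0=60.0):
    return (xbar - mu0) / (s / math.sqrt(n))

Z_CRIT = 1.645  # one-sided critical value at alpha = 0.05, from a z table

results = {xbar: z_stat(xbar) for xbar in (60.482, 60.262, 60.618)}
# z = 2.41, 1.31 and 3.09 respectively.
shut_down = {xbar: z > Z_CRIT for xbar, z in results.items()}
```

Only the first and third sample results reject H0, so the plant is shut down in cases 1 and 3 but not case 2.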
For each program content level, 29 subjects were randomly selected and exposed to commercials in that program content level as follows: (1) 29 randomly selected subjects were exposed to commercials shown in more involving programs, (2) 29 randomly selected subjects were exposed to commercials shown in less involving programs, and (3) 29 randomly selected subjects watched commercials only (note: this is called the control group). Then a brand recall score (measured on a continuous scale) was obtained for each subject. The 29 brand recall scores for each program content level are assumed to be a sample randomly selected from the population of all brand recall scores for that program content level. The mean brand recall scores for these three groups were as follows: [table of group means not reproduced in this excerpt]. Furthermore, a one-way ANOVA of the data shows that SST = 21.40 and SSE = 85.56.

a. Identify the value of n, the total number of observations, and k, the number of treatments.
b. Calculate MST using MST = SST/(k-1).
c. Calculate MSE using MSE = SSE/(n-k).
d. Calculate F = MST/MSE.
e. Define the null and alternative hypotheses using the treatment means M1, M2, and M3 to represent each group. Then test for statistically significant differences between these treatment means. Set alpha = .05. Use the F-table to obtain the critical value of F. You should show the 5 steps in the STOH.
f. If you found a difference due to the treatments, between which groups do you think this treatment effect is most likely? Note that you do not have to perform formal tests to provide this answer.

Q6. An accountant wishes to predict direct labor cost (y) on the basis of the batch size (x) of a product produced in a job shop. Using labor cost and batch size data for 12 production runs, the following Excel Output of a Simple Linear Regression Analysis of the Direct Labor Cost Data was obtained. The scatter plot of this data is also shown.
Regression statistics:
  Multiple R           0.99963578
  R Square             0.999271693
  Adjusted R Square    0.999198862
  Standard Error       8.641541

ANOVA:
                 df    SS             MS          F              Significance F
  Regression      1    1024593 (f)    1024593     13720.47 (k)   5.04436E-17 (m)
  Residual       10    746.7624 (g)   74.67624
  Total          11    1025340 (h)

                 Coefficients    Standard Error    t Stat         P-value
  Intercept      18 (a)          4.67658           3.953211 (c)   0.00271 (e)
  BatchSize(X)   10 (b)          0.08662           117.13 (d)     5.04436E-17 (e)

For your aid, the different values in the ANOVA table are explained below using the superscript notation: a: b0, b: b1, c: t for testing H0: b0 = 0, d: t for testing H0: b1 = 0, e: p-values for t statistics, f: Explained variation, g: SSE = Unexplained variation, h: Total variation, k: F(model) statistic, m: p-value for F(model)

Answer the following questions based on the information provided above:

a. Write the regression equation for LaborCost (y) and BatchSize (x). Note that your equation has to identify the point estimates for b0 and b1 in the equation: y = b0 + b1x.
b. Identify the t statistic and the p-value for this t statistic for testing the significance of the slope of the regression line. Using this, determine whether the null hypothesis H0: b1 = 0 can be rejected.
c. What do you conclude about the relationship between LaborCost (y) and BatchSize (x)? Use the different test statistics provided in the data to support your case.
d. Interpret the meanings of b0 and b1. Does the interpretation of b0 make practical sense for this case? Think carefully about what the value of x will be when y = b0.
e. Estimate the value of LaborCost for a batch size of 10. Use your regression equation and show all your steps.

Q7. Use the following data for the given situation: International Machinery, Inc., produces a tractor and wishes to use quarterly tractor sales data observed in the last four years to predict quarterly tractor sales next year. All the data for answering problems (a) through (c) has been provided to you. You do not have to compute any data for parts (a) through (c).

a.
What type of seasonal variation do you see in the sales data? Is there no seasonal variation, constant seasonal variation, increasing seasonal variation, or decreasing seasonal variation? State your reasons. Find and identify the four seasonal factors for quarters 1, 2, 3, and 4.

b. What type of trend is indicated by the plot of the deseasonalized data?

c. What is the equation of the estimated trend that has been calculated using the deseasonalized data?

d. Compute a point forecast of tractor sales (based on trend and seasonal factors) for each of the quarters next year. You should show all your steps for each quarter forecast. (Hint: Note that you will use the equation from (c). This will provide you with the deseasonalized forecast. You then have to adjust it for the seasonal factor applicable to that quarter.)

Problems on Confidence Intervals, Statistical Test of Hypothesis, ANOVA, Regression and Forecasting have been answered
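Several of the requested quantities can be computed, and the Excel output sanity-checked, from the numbers quoted above alone. Below is a sketch in Python. Assumptions are flagged in the comments: the sample size n = 5 for the catalyst problem is inferred from the standard deviation/standard error ratio, the t and z critical values (2.776 and 1.645) are standard table values rather than values stated in the problems, and the regression coefficients (18 and 10) are used exactly as displayed even though they appear to be rounded in the output.

```python
import math

# Catalyst XA-100 CI: Excel's "Confidence Level(95.0%)" is the margin of error,
# t(0.025, n-1) * SE. The output implies n = (SD / SE)^2.
se, sd = 8.786353, 19.64688
n = round((sd / se) ** 2)              # -> 5
t_crit = 2.776                         # t-table value for df = 4 (assumption)
margin = t_crit * se                   # ~24.39, matching the Excel output
# The 95% CI is (xbar - margin, xbar + margin); xbar is not shown in this excerpt.

# Q4: right-tailed test of H0: mu <= 60 vs Ha: mu > 60 with n = 100, alpha = 0.05.
def shut_down(xbar, s=2.0, n=100, z_crit=1.645):
    z = (xbar - 60) / (s / math.sqrt(n))
    return z > z_crit                  # reject H0 -> shut the plant down

decisions = [shut_down(x) for x in (60.482, 60.262, 60.618)]
# -> [True, False, True]

# Q5: one-way ANOVA, k = 3 treatments, 29 subjects each (n = 87).
n_tot, k = 87, 3
mst = 21.40 / (k - 1)                  # 10.70
mse = 85.56 / (n_tot - k)              # error df = n - k = 84
f_stat = mst / mse                     # ~10.5, versus F(0.05; 2, 84) of about 3.11

# Q6: point estimate of labor cost for a batch size of 10: y = b0 + b1*x.
b0, b1 = 18, 10                        # coefficients as displayed (rounded)
y_hat = b0 + b1 * 10                   # -> 118
```

For Q4, the first and third sample means lead to rejecting H0 (shut the plant down) while 60.262 does not; for Q5, an F statistic of roughly 10.5 far exceeds the critical value, so we reject the hypothesis that the treatment means are all equal.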
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9239341616630554, "language": "en", "url": "https://farmlandaccess.org/federal-conservation-programs/", "token_count": 1001, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.037841796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c8303e29-76a7-4008-a1fc-99147832b08c>" }
There are many federal government programs offering financial and technical assistance to farmers, and many of these are available to encourage beginning farmers and sustainable practices. These federal government programs can help farmers address land access issues in a variety of ways. The United States Department of Agriculture (USDA), not surprisingly, is the main provider of financial and technical assistance to farmers. The USDA Farm Service Agency (FSA) and the USDA Natural Resources Conservation Service (NRCS) are the key agencies under the USDA that provide farmers with technical and financial assistance via farm programs. Financial resources available to farmers who meet a farm program's eligibility requirements include loans, cost share to install conservation practices, and rental or other types of direct payments to protect natural resources on agricultural, farm, and forest land.

Photo Credit: Lois Miller

The Wahl family has been raising sheep and cattle in Oregon since 1874. Their 2,000-acre ranching operation includes timber-producing forests, ponds, riparian buffer vegetation, and wetland habitats. Conservation is a family tradition for the Wahls, who believe in protecting the resources of the ranching operation for future generations. The Wahls have enrolled in various federal USDA conservation programs over the years, including the Conservation Reserve Enhancement Program, the Wildlife Habitat Incentive Program, and the Environmental Quality Incentives Program. As the family continues sorting out how to pass the ranching operation to the fifth generation, conservation conversations will be a key part of the transition. Read more about the Wahl Ranch here.

Much of the assistance provided to farmers through USDA agencies is funded through the Farm Bill. The Farm Bill is a large, complex piece of legislation addressing agriculture and a host of other areas, and is passed by Congress every four to five years.
Each year, the Farm Bill provides hundreds of millions of dollars in assistance to eligible farmers. Although the programs funded may change from one Farm Bill to the next, the kinds of assistance – cost-share, loans, etc. – generally stay the same. Importantly, recent Farm Bills have recognized the unmet needs of new, beginning, and/or socially disadvantaged farmers and ranchers and have targeted resources to those groups. Farm Bill programs can be complicated to navigate. Fortunately, there are existing resources to help farmers and food producers understand and access these programs. For example, the National Sustainable Agriculture Coalition (NSAC) is a well-known organization based in Washington, D.C., with longstanding expertise in helping farmers access farm programs. NSAC focuses on helping sustainable and diversified farm operations, and also provides information related to farm programs for beginning and minority farmers. NSAC has developed a guide for farmers that explains key Farm Bill programs, called the Grassroots Guide to Federal Farm and Food Programs. In addition, NSAC also provides this helpful chart of food- and farm-related programs, which summarizes who is eligible to apply or sign up for each program. Additional farm program explainer resources are listed at the bottom of this page. Another way to get help with farm programs is to get to know the USDA staff at your local USDA Service Center. The Farm Service Agency (FSA) and Natural Resources Conservation Service (NRCS) staff these Service Centers and help farmers, food producers, and rural businesses understand USDA farm programs, including eligibility, program requirements, and how to sign up or enroll in a program. USDA provides a directory of local Service Centers by state and county. To access the directory, click here. 
In determining whether to apply for enrollment in a USDA program, important considerations include 1) whether a program furthers your own farming goals and 2) whether you will be able to meet program requirements. For example, some programs require a producer to maintain conservation practices installed on farmland using federal funding for a specific period of years. Additionally, farmers should keep in mind that the farm program application process takes time, energy, and patience. However, the resources available below and your local USDA office should be able to help you create a successful application and access resources to help your farm operation grow and thrive. It’s not an attorney’s job to make decisions for farmers or to set farm transfer goals. Instead, attorneys can provide information about pros and cons of different options, advice about what is common versus unusual, fair versus unfair, etc. Attorneys can help farmers understand the universe of possible farm transfer goals and help narrow down individual options so that farmers can make final decisions. The Center for Agriculture and Food Systems is an initiative of Vermont Law School, and this toolkit provides general legal information for educational purposes only. It is not meant to substitute, and should not be relied upon, for legal advice. Each farmer’s circumstances are unique, state laws vary, and the information contained herein is specific to the time of publication. Accordingly, for legal advice, please consult an attorney licensed in your state.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9683901071548462, "language": "en", "url": "https://geoffconsidine.com/2019/09/15/rethinking-earnings-and-consumption-through-life/", "token_count": 761, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.083984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e06394f1-0f1a-449e-a59a-a1777d6a19a4>" }
In the United States, many people are re-evaluating the traditional model of working and saving through life. The traditional model is that you get an education and then work full-time until you exit the workforce entirely and live on a pension, Social Security, and/or savings. In recent years, the Financial Independence / Retire Early (FIRE) movement has encouraged discussion of alternative ways to think about work and earnings. At the same time, economic research has started to address ways in which policies and regulations have started to change incentives in how people work (or not). Specifically, working later in life, whether part-time or full-time, is much more financially attractive than in the past. This is a result of changes in Social Security and the transition from traditional pensions to 401(k)s and IRAs. Transitioning from full-time work to part-time work and staying in the workforce longer is a scenario more people should consider. Traditional pensions (so-called defined benefit or DB plans) provide a specific amount of retirement income, typically based on the number of years that you have worked and an average of your highest-income years. Working longer typically increases the retirement benefit because you have more years of service but reduces the number of years you will be in retirement (obviously). If, for example, you have been employed for 30 years, adding one more year adds 1/30th to your retirement income, but you have lost 1/N of your expected lifetime retirement income, where N is the number of years that you expect to live in retirement. With 401(k)s and IRAs and similar defined contribution (DC) plans, the economics are very different. Delaying retirement reduces your number of years in retirement, but increases your wealth because your portfolio has more years to grow. 
You do not lose 1/N of your expected lifetime retirement income: because you have delayed retirement, you will have the option to draw a higher level of income in retirement, and/or you will have more wealth to leave to your kids or to causes that you support. Traditional pensions create incentives to work until a full retirement age and then to retire, while self-directed retirement plans open up all sorts of new ways to shape earnings and savings. Consider, for example, the increasingly common scenario in which someone saves aggressively in their younger years and then downshifts to work fewer hours for some number of years. By doing so, they delay claiming Social Security and drawing from retirement savings. Saving more in your younger years gives your money more time to grow until you need it. Working part-time rather than abruptly ceasing all paid work reduces the amount of retirement savings that you would need to accumulate to support a traditional retirement. In addition, there is a disproportionate benefit to claiming Social Security later. Another benefit of part-time work in later years is that you maintain your skills and other human capital. Continuing to work part-time in a field in which you have some expertise makes it far more likely that you will be able to scale up the amount that you work if circumstances require. Someone who has consulted in their field for even ten hours a week is much more employable, one imagines, than someone who fully retired a decade ago.
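The defined-benefit arithmetic in the preceding paragraphs can be made concrete with a deliberately stylized sketch. All numbers here are hypothetical, and the model ignores discounting, wage growth, and survivor benefits; it only illustrates the directional incentive described above.

```python
# Stylized DB pension: annual benefit is proportional to years of service,
# paid over a fixed remaining-lifetime horizon.
def db_lifetime_income(years_worked, years_retired, unit=1.0):
    annual_benefit = years_worked * unit
    return annual_benefit * years_retired

base = db_lifetime_income(30, 20)    # retire after 30 years, collect for 20 -> 600.0
extra = db_lifetime_income(31, 19)   # work one more year, collect for 19 -> 589.0
# Lifetime DB income falls: the 1/30 benefit bump does not offset
# losing 1/20 of the payout years.

# A defined-contribution account has no such penalty: delaying retirement
# simply leaves the balance invested for another year (hypothetical 5% return).
def dc_balance(balance, years_delayed, annual_return=0.05):
    return balance * (1 + annual_return) ** years_delayed

grown = dc_balance(100_000, 1)       # ~105000
```

Under these toy assumptions, one extra working year actually lowers total DB payouts (589 versus 600 units) while unambiguously raising the DC balance, which is the incentive asymmetry the article describes.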
The future of paid work is quite different than the past and I expect to see much greater variety of options for employment.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9640468955039978, "language": "en", "url": "https://infothatmatter.com/2020/06/13/what-is-gig-economy-and-why-is-it-booming/", "token_count": 1015, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.045166015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:345af06d-6297-48eb-ad7d-1b1cd9606813>" }
Ever been confused about whether you are a freelancer or a gig worker? Have you been working an independent online job for some time but still don't know the difference between a freelancer and a gig worker? Don't worry. This is one of the most common questions put to us in the past few days. We have decided to bring this dispute between the two terms to an end today. Read on to learn about:

What is the gig economy?
Freelancer vs. gig worker
Factors behind the rise in short-term jobs
Pros and cons of short-term jobs

What is the Gig Economy?

According to a report by 'Flexing It' published by The Economic Times, 72% of all gig projects in India were commissioned by large corporate and professional services firms in 2018-19. This is a significant number because startups were the initial adopters of the flexible gig economy. Increasingly, though, large corporate and professional service providers are driving the demand. But before we dive deeper into the more detailed aspects of the gig economy, let us understand the basics.

The word 'gig' was earlier limited to the musical lexicon, where it referred to a paid performance engagement by a musician. Lately, however, the term has become more expansive in the meaning it represents and is now widely used in a career-related context.

A gig economy refers to a type of market system in which temporary positions are common and organizations hire workers for a short-term commitment. The term is used as slang for jobs that last for a short, specified duration. These hired workers get paid at a mutually agreed-upon rate and aren't offered permanent positions.
A freelancer is an independent worker who runs his/her own business. They are responsible for everything from marketing and billing to the actual work. In a way, they are both the front-line worker and the CEO of their business. Freelancers set their pay rates, apply for jobs, and can engage in projects ranging from months to years. Some freelancers may be associated with an organization for more than three years on a freelance basis.

You may be interested in: Soft Skills and Websites to Boost your Freelancing Career

Gig workers, although independent, are not the sole owners of their business. They are usually people hired through mediator apps like Ola, Uber, and Urban Clap. They are not responsible for marketing and billing, which is handled by the mediating platform. They also do not decide the pay rates. Unlike a freelancer, a gig worker is often employed in what is known as micro-tasks or piecemeal work.

Recommended for you: Earn Money via Micro Jobs
Pros and Cons of Short-Term Jobs - Flexibility: People no longer need to stay stuck in a 9-5 work cycle and can adjust their work hours to their needs. - Growth opportunity: Constraints like the location of an individual do not impact the growth prospect in online short term jobs - Zero Job Benefits: Unlike a permanent employee, members of the gig economy do not get additional job benefits like insurance, maternity leaves, and allowances. - No Oversight: Since the gig economy consists of independent workers, there is a chance of potential misuse of that freedom. The assault of passengers by ride-sharing cab drivers is an example of this. We hope to have settled the doubt regarding the gig economy jargon through this post. If there are more questions on any career-related topics, please feel free to drop a comment. Subscribe to our newsletter to get path-breaking career insights and join us in the information revolution.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9181250929832458, "language": "en", "url": "https://insights.dice.com/2012/06/22/how-ebay-turned-to-alternative-fuels-for-its-data-needs/", "token_count": 400, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.208984375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:deb9a680-9d94-4e72-99c2-7283a9cf1662>" }
Big Data consumes Big Energy: the massive data centers that store and help analyze mountains of consumer and business information require a significant amount of electricity. That costs money; in the case of massive server farms run by some of the world's largest companies, lots of money. In a bid to lower its data centers' carbon footprint, Facebook launched the Open Compute Project, in which engineers figure out how to make facilities such as the social network's Prineville, Oregon data center more efficient. Meanwhile, Apple has made a commitment to powering its data centers, including its massive one in North Carolina, with a high-percentage mix of renewable energy. But online auction site eBay has come up with a particularly novel solution for powering a new data center in Utah: fuel cells powered by biogas, or the gaseous byproduct of organic waste decomposition. The fuel cells are built by Bloom Energy, which crafts them from ceramic material coated with proprietary inks, materials the company suggests are more environmentally sustainable than the acids, precious metals, and other materials used in other fuel cells. Like batteries, fuel cells create electricity from chemical reactions; unlike batteries, they need oxygen and a refreshed source of fuel. The new eBay data center will feature thirty Bloom Energy servers onsite, each capable of supplying 1.75 million kilowatt-hours (kWh) of electricity per year (6 megawatts in total), and will go online by mid-2013. In case the fuel cells go offline for some reason, eBay will rely on the conventional grid to power the system; that same grid also powers the data center already onsite. eBay's data centers need to crunch an epic amount of information from 102 million active users, plus PayPal and StubHub. The company already uses alternative energy, including Bloom Energy fuel cells and solar arrays, to provide electricity for its San Jose headquarters and other data centers.

Image: Bloom Energy
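As a quick consistency check on the figures quoted, here is a sketch; it assumes the 6 MW figure refers to continuous output, which the article does not state explicitly.

```python
# 30 fuel cells at 1.75 million kWh/year apiece...
cells = 30
kwh_per_cell_per_year = 1.75e6
total_kwh = cells * kwh_per_cell_per_year          # 52.5 million kWh/year

# ...is almost exactly what 6 MW of continuous output delivers in a year.
hours_per_year = 365 * 24                          # 8760
continuous_mw = total_kwh / hours_per_year / 1000  # kWh/h -> kW -> MW, ~5.99
```

The two numbers in the article agree to within a hundredth of a megawatt, so the per-cell and total figures are mutually consistent.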
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9647747278213501, "language": "en", "url": "https://www.dailysignal.com/2018/08/01/this-case-presents-perfect-opportunity-for-courts-to-push-back-on-federal-agencies/", "token_count": 856, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.462890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2f634cb7-94b1-44f9-a835-ce8c8ea44ea4>" }
Does an administrative agency have the power to rewrite an act of Congress? The answer to that question in the headline ought to be a resounding no. Yet, by deferring to administrative agencies over the meaning of federal law, the federal courts have for decades empowered the executive branch do exactly this. Agencies now rewrite the law with regularity. The problem is so-called Chevron deference—a doctrine that was meant to keep courts out of the detailed implementation of federal law. Courts decided to defer to administrative agencies when the law called on them to apply their specialized expertise—especially scientific expertise—to set various standards. Unfortunately, this practice has gotten out of hand. The courts have allowed agencies to dictate the meaning of federal law and even allowed agencies to change their mind about what a federal law means. An example is the case of the Federal Communications Commission’s regulation of the internet at issue in Berninger v. Federal Communications Commission, which is currently pending before the Supreme Court on a petition for writ of certiorari. This is a case of an agency saying the law means one thing on one day, and the complete opposite thing on another day. Indeed, the agency has changed its mind at least three times about the meaning of this one law. This must stop, and Berninger just might be the case for the court to put an end to this foolishness. To get an idea of the shenanigans of the FCC in this case, you need to go back in time to 2005 to a Supreme Court case titled National Cable & Telecommunications Association v. Brand X Internet Services, 545 U.S. 967 (2005). The issue in that case was whether broadband internet providers should be regulated as telephone companies (heavily regulated utilities) or information service providers (much lighter regulation under the law). 
The FCC opted for the lighter version of regulation, interpreting the Communications Act of 1934 and the Telecommunications Act of 1996. The court ruled that this was a permissible interpretation of the federal law and that the decision of the FCC was entitled to deference under Chevron. Ten years later, the FCC changed its mind and decided that the same law interpreted in 2005 now meant that the FCC had the authority to regulate broadband internet companies as if they were telephone companies. The District of Columbia Circuit Court of Appeals ruled that the FCC’s new decision was also entitled to deference under Chevron. The following year, with a change in personnel, the FCC changed its mind yet again ruling that broadband internet companies were really just information service providers and not subject to heavy regulation by the FCC. The law that Congress wrote did not change during this time—only the interpretation of the law by the FCC. Justice Antonin Scalia was fond of saying: “Words have meaning. And their meaning doesn’t change.” But that is not the case if an administrative agency is allowed to change its mind on the meaning of a statute on a whim. The words of the statute lose all meaning if a court must permit the agency, and only the agency, to interpret and reinterpret the words Congress wrote into the law. At that point it is the agency, not Congress, that is writing the law. Even worse, it is the agency, not the courts, interpreting the law. The agency becomes a law unto itself, answerable to nobody. Chevron deference was meant to cure the problem of an activist judiciary crusading to implement its own vision of appropriate regulation. It has led to the greater problem, however, of agencies rewriting the law (through “interpretation”) to pursue their own activist agendas never authorized by Congress. It is time for the court to put an end to this violation of separation of powers. If the law is clear, then require the agency to enforce it. 
But if the law is not clear, send it back to Congress and let the elected representatives make it clear. Chevron deference makes sense when Congress is asking an agency to use scientific expertise to set appropriate limits for air pollutants or exposure to dangerous chemicals. It violates the Constitution, however, when it is used to allow agencies to change their minds on what a law means.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9561119675636292, "language": "en", "url": "https://www.deepseanews.com/2008/07/rising-fuel-costs-hurt-marine-research/", "token_count": 278, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1962890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:48840bff-0bdd-417c-a3ad-94dfe5516452>" }
“Many of the research projects launched as part of the International Polar Year (IPY), which runs from March 2007 to March 2009, are under threat because of the steep rise in marine-fuel costs. Hundreds of Arctic and Antarctic scientists face uncertainty as polar science programmes worldwide are curtailed, postponed or cancelled. The price of a barrel of oil has more than doubled since March 2007, from US$60 to $140 now. High energy costs are a problem for research in most fields, but logistically complicated research operations in remote polar regions are more affected than, say, big physics experiments. “We have reached a point where the collapse of some of our activities is looming on the horizon,” says Karin Lochte, director of the Alfred Wegener Institute for Polar and Marine Research (AWI) in Bremerhaven, Germany, which operates the research icebreaker Polarstern, Europe’s largest scientific vessel. Icebreakers are usually fuelled by marine diesel oil (MDO), a cleaner and more expensive fuel than the heavy oil used by normal cargo ships. The average price for MDO has increased fivefold since 2003, from $250 to $1,300 per metric tonne (equivalent to around 1,200 litres of diesel). Since January, the price has increased by almost $550 per tonne (see graph).”
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9301590919494629, "language": "en", "url": "https://coinauctionshelp.com/Coin_Help_Blog/coin-grading/", "token_count": 552, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.068359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3db0570b-f799-46c7-8d93-97ccf59ea171>" }
COIN GRADING TERMS DEFINED

While looking at coin price guides you will see symbols for grading like FR, PR, AG, G, VG, F, VF, EF, AU and MS. All of these acronyms stand for certain words or phrases that describe the condition of coins. They encompass a universal system that is used by price guides, dealers and experts to identify and figure the values of coins based on their grade. Not everyone will agree on the grade of a particular coin, and one must watch out for unscrupulous sellers that over-grade or leave out other problems a coin may have, like scratches, corrosion, cleaning, etc. (I will discuss problems a coin may have later.)

The best way to start grading is to examine your coin to see if it has a full date. If it doesn't, then it may grade FR or PR, but if the date is full then it may grade G-4. Then look at the over-all design of the coin (you may want to look at a few examples on an Internet coin site or in a grading book for help) and begin adding points and grades based on how many more features you can identify on the coin in question. The more features the coin has evident, the higher the grade. Below is a guide to the many grading words and phrases, including their acronym symbols and number grades. Most, universally, use the symbols in the right-hand column.

Coin Grading Acronyms:
Basal (Basal State): A flat piece of metal with no features whatsoever.
Poor 1 = PR1
Fair 2 = FR2
About Good 3 = AG3
Good 4 = G4
Very Good 8-11 = VG8
Fine 12-19 = F12
Very Fine 20-29 = VF20
Very Fine 30-39 = VF30
Extremely Fine 40-49 = EF40 or XF40
Almost Uncirculated 50-59 = AU50-AU58
Mint State 60-70 = MS60-MS70

So as not to confuse the reader, PR can also mean Proof, but it will then be followed by a number 60 or higher. One example is the 1895 Morgan Dollar, which can have a grade of PR60, meaning Proof 60. Also, PF can be used to designate a proof coin. It is obvious that grading is very subjective and differs with each coin type.
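The scale above maps cleanly onto a small lookup table. Here is a sketch; the numeric boundaries follow the list above, with Good covering 4-7 since Very Good begins at 8, and the PR ambiguity noted above is resolved by treating PR with a number of 60 or higher as Proof.

```python
# Grade-number ranges from the scale above, ascending.
GRADE_RANGES = [
    (1, 1, "Poor"),
    (2, 2, "Fair"),
    (3, 3, "About Good"),
    (4, 7, "Good"),
    (8, 11, "Very Good"),
    (12, 19, "Fine"),
    (20, 39, "Very Fine"),
    (40, 49, "Extremely Fine"),
    (50, 59, "Almost Uncirculated"),
    (60, 70, "Mint State"),
]

def describe(grade):
    """Map a grade string like 'VF30' or 'G4' to its descriptive name."""
    number = int("".join(ch for ch in grade if ch.isdigit()))
    prefix = "".join(ch for ch in grade if ch.isalpha()).upper()
    if prefix == "PR" and number >= 60:    # PR60+ denotes a Proof coin
        return "Proof"
    for low, high, name in GRADE_RANGES:
        if low <= number <= high:
            return name
    raise ValueError("grade number out of range: " + grade)
```

For example, describe('VF30') returns 'Very Fine', while describe('PR1') still returns 'Poor' because the Proof reading only applies at 60 and above.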
Grading comes with experience, and I mean years of experience. It is always best to consult professional grading publications and trusted dealers for reference, or to submit your coin to a third-party grading company like PCGS, NGC, ICG, or ANACS.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9586173892021179, "language": "en", "url": "https://e-cryptonews.com/everything-you-should-know-about-cryptocurrency-mining/", "token_count": 2344, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.060302734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:caa6f0bd-273a-41e2-b33c-3775ac8952b7>" }
Everything You Should Know About Cryptocurrency Mining

Over the past few months, crypto market capitalization has multiplied and the prices of many cryptocurrencies have reached new highs. As a result, more and more people have started considering mining as an option to benefit from this rally. But what is mining, and what is needed for it? Only video cards or laptops? Let's try to figure this out.

Simply put, mining is a process where a machine performs certain tasks to receive cryptocurrency as a reward for its work. This mining concept is also known as the Proof of Work algorithm. But what are these tasks? Basically, it is a brute-force search: the miner's equipment repeatedly tries candidate solutions to a mathematical problem, looking for an answer that falls within a certain target range. If the miner finds the answer, then a new block will be created and added to the blockchain. A block reward is sent to the first miner who found this block. This reward consists of a small amount of newly created coins and the fees for the transactions that the miner decided to add to their block. Then the process is repeated: miners start looking for a new block and add new transactions.

There is high competition between miners. The more computing power a miner has, the more chances that their hardware will be able to find the necessary solution and receive a reward. For the same reason, a lot of miners unite in so-called pools. In mining pools, the reward is distributed among all participants depending on their contribution to the process of solving the problem.

Not all coins are available for mining, as not all use the Proof of Work algorithm for consensus. Coins that support mining may have different hashing algorithms and mining difficulty. Therefore, it is necessary to select the appropriate equipment for the efficient mining of a certain coin.

What are the mining methods?

There are several main mining methods: CPU, GPU, ASIC, and cloud mining. Here is a brief look at each of them.
CPU mining involves using PC processors. In the early days of crypto, this was a fairly popular mining method, but it is now practically obsolete due to its extremely low efficiency. The CPU method has been replaced by GPU mining.

Miners predominantly use GPUs because of their efficiency in terms of hash speed relative to rig price. Most often, GPU miners use several video cards at once, creating so-called mining farms, but it is also possible to use an ordinary video card installed in a PC. There is no significant difference between mining on a home computer and on a farm; the difference lies only in the number of GPU devices and the size of the profit miners can earn. Most GPU miners try to use the latest generation of video cards to stay efficient, and they predominantly switch to new devices when new versions become available.

An ASIC is a piece of hardware that is specifically designed for crypto mining. ASICs are much more efficient than GPUs and CPUs, but they are also the most expensive. There is some negative attitude toward ASIC miners in networks dominated by GPU miners, because ASICs can introduce an imbalance into the network. That is why some coins cannot be mined with ASICs. At the same time, an ASIC is the only effective option in networks with extremely high mining difficulty and hash rate, for example, the Bitcoin network.

All the above-mentioned mining methods imply that miners buy and deploy mining equipment on their own. But there is a way to borrow computing power instead: cloud mining. With cloud mining, miners rent equipment for a certain period and pay another company for it. Generally, companies that offer cloud mining have huge facilities with multiple mining rigs at their disposal, so it can suit both miners with big ambitions and those who just want to rent a rig for a test.
All mining earnings (minus the electricity and maintenance costs) are credited to the miner's wallet. Cloud mining can be suitable for those who do not want to dive into the hardware-related side of mining and prefer to use ready-made equipment. However, it is important to double-check how reliable the cloud mining company is.

What do you need to start mining?

First of all, you need to decide what mining equipment to use. It must be prepared in advance by installing the necessary mining software for a specific coin. In addition, the software helps miners track important parameters such as hardware hashrate, temperature, fan speed, the average hashrate in the particular cryptocurrency network, and so on.

Miners also have to select a cryptocurrency wallet for receiving mining rewards. If you need to exchange the mined coins for another cryptocurrency or fiat, for example to convert LTC to BTC, then you should also look for a crypto exchange with multiple trading options.

If you are not going to mine crypto on a large production scale, then it is worth considering joining a pool, where miners receive a reward depending on their contribution. Otherwise, you will compete with other miners and pools alone, which may be ineffective in some cryptocurrency networks.

Besides, miners should take care of the space for storing mining rigs. A mining farm consumes a lot of electricity and can produce a lot of heat and noise. Therefore, in the case of large farms, miners might need fans and other cooling equipment as well.

And last but not least is knowledge. If you want to become a miner, you need to figure out how to set up the necessary equipment and calculate which coin to mine depending on the equipment and electricity costs. Also, to stay effective, it is important to constantly monitor cryptocurrency prices and mining difficulty to find the most optimal way to earn crypto.

How do you decide what to mine and whether it is profitable?
Hardware price is not the only factor a miner should take into account when calculating mining profitability. Mining equipment operates 24/7, and that consumes a significant amount of electrical power. Not only the cost of electricity is important, but also the electricity consumption per mining rig. One of the determining factors in the cost of electricity is the country where the miner is located. For example, the largest Bitcoin miners are located in China close to hydroelectric power plants in order to have access to cheap electricity and optimize their costs.

To calculate mining profitability, special calculators are mainly used that take into account a number of parameters. Some of these metrics are:

● Mining equipment specifics (mining type, hashing power, power usage)
● Electricity costs (depending on the country and the mining location)
● Cryptocurrency network features (hashing algorithm, mining difficulty, block reward, block time, etc.)
● Cryptocurrency price and its volatility

Such calculations may seem complicated at first glance, but it is still worth diving into them for efficient mining. Thanks to these calculations, one can understand how profitable the existing equipment is and in which cryptocurrency networks it can perform best.

There is no such thing as a right or wrong cryptocurrency to mine. Everything rests on efficiency and the miner's ambitions. For example, if you like Litecoin, want to support the network, and it is profitable for you, then you can just start mining and not delve into the specifics of other cryptocurrencies. However, if you want to get the most out of your hardware, then it is worth keeping a close eye on the cryptocurrency market. The prices of cryptocurrencies are constantly changing, and if something is profitable now, that does not mean it will be profitable in the future.

How to choose a mining pool?

While solo mining gives you the opportunity to claim the entire block reward, this is not always profitable.
In most cryptocurrency networks, mining pools dominate, since there is a greater chance of earning a reward as part of a pool. Mining pools consist of thousands of miners trying to earn block rewards together, while a solo miner is one against all. Pools are especially popular with small and medium-sized miners, as they allow for steadier and more consistent mining rewards.

Here are some things to look out for when choosing a pool:

● Reputation: before joining a pool, find out what its current members think of it
● Pool hashrate: compare the total network hashrate with the pool's hashrate in that network to understand how often the pool can find a block and receive a reward
● Pool fees: when a block is discovered, many pools charge a commission, but some do not
● Uptime efficiency: check that the pool's uptime is 99.5% or higher and that it has backup servers in case of an outage
● Pool threshold: using low-efficiency equipment may not be feasible in some mining pools
● Payout method: familiarize yourself with the main payment methods and the one the pool itself uses
● Location: make sure the pool's servers are near you so you can quickly get information about the situation in the network and the pool

Finding a pool that is perfect in all respects is almost impossible, so it is always a compromise. Many large mining pools even have their own support teams for technical issues, so if you have any questions, you can address them directly to the pool's representatives.

What are the advantages and disadvantages of mining?

One of the main advantages of mining is that the cryptocurrency market is still in its early stages of development, and the earlier miners join a network, the more they benefit from it. Therefore, many miners join a network not only for its current profitability but also because they are confident that the cryptocurrency will be more valuable in the future.
Let's take Bitcoin as an example. Due to halvings, the number of new coins per block is decreasing, which increases the scarcity of the asset. Those who mined bitcoin in 2010 on a laptop and received 50 BTC per block benefited significantly from the current price of the main cryptocurrency. Now miners are launching huge mining farms to receive a reward of 6.25 BTC. The next 10 years will show whether that is justified.

One of the main disadvantages follows from this. Given that mining certain cryptocurrencies is becoming more difficult and costly, the entry threshold for miners rises every year. Therefore, if a miner does not constantly adapt to changes in the industry, they will become less and less efficient. But a miner who can stay in positive territory becomes their own boss. Miners are left to themselves, and only they decide in which direction to move. So if you are looking for financial independence, mining can be a great way to achieve that goal.

Another disadvantage worth recalling is society's current attitude toward mining. According to some experts, cryptocurrency mining requires too much energy, and this affects the climate. However, the environmental issue of mining is related to the energy sources people mainly use, not to mining itself. With a gradual transition to renewable energy sources such as solar, wind, and hydropower, this issue will be sidelined.

Mining is a very flexible process in which everyone can find a suitable path. The answer to the question of what to mine depends only on the miner's ambitions and capabilities. The cryptocurrency market is diverse enough for efficient mining both by small miners with a few video cards and by large players with industrial capacities. The main thing is to study the subject in advance and assess your capabilities.
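The profitability arithmetic described earlier (your share of the network hashrate, block rewards, and electricity costs) can be sketched in a few lines. All of the numbers below are hypothetical placeholders rather than real network figures; a real calculator would pull live difficulty, price, and fee data, and would also account for pool fees and hardware depreciation.

```python
def daily_mining_profit(
    miner_hashrate,      # your rig's hashrate (hashes/s)
    network_hashrate,    # total network hashrate (hashes/s)
    block_reward,        # coins paid per block
    blocks_per_day,      # blocks the network produces per day
    coin_price,          # fiat price per coin
    power_kw,            # rig power draw in kilowatts
    electricity_price,   # fiat cost per kWh
):
    """Expected daily profit, ignoring pool fees and hardware depreciation."""
    # Your expected share of blocks is proportional to your share of hashrate.
    share = miner_hashrate / network_hashrate
    revenue = share * blocks_per_day * block_reward * coin_price
    cost = power_kw * 24 * electricity_price
    return revenue - cost

# Hypothetical example: a rig holding 0.1% of the network's total hashrate.
profit = daily_mining_profit(
    miner_hashrate=1e9,
    network_hashrate=1e12,
    block_reward=6.25,
    blocks_per_day=144,
    coin_price=10_000,
    power_kw=1.5,
    electricity_price=0.10,
)
print(round(profit, 2))  # prints 8996.4
```

Even this toy version makes the trade-offs visible: halve the coin price or double the electricity price and the result shifts immediately, which is why miners re-run this calculation as market conditions change.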
{ "dump": "CC-MAIN-2021-17", "language_score": 0.938494861125946, "language": "en", "url": "https://inflationdata.com/articles/2008/08/18/inflation-vs-consumer-price-index-do-you-know-the-difference", "token_count": 629, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.030029296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:983ab1ac-05c3-49ca-a715-8f1f28fd3a96>" }
Many people are confused by the difference between inflation and the Consumer Price Index. The Consumer Price Index is, as its name implies, an index, or "a number used to measure change."

The Consumer Price Index (CPI-U)

The government chose an arbitrary date to be the base year and set that equal to 100. Currently that date is 1984 (or, more accurately, the average of the years 1982-1984); previously the base year was 1967. (They change the base year every once in a while so you don't notice that there has been over 2000% inflation since the start. See Cumulative Inflation Since 1913.)

Every month the Bureau of Labor Statistics (BLS) surveys prices around the country for a basket of products and publishes the results as a number. Let us assume for the sake of simplicity that the basket consists of one item and that this item cost $1.00 in 1984. The BLS would then have published the index in 1984 at 100. If today that same item costs $1.85, the index would stand at 185.0. A group of items works the same way: if you have 100 items, each would account for 1% of the total index.

By itself, the index does not tell us the current inflation rate. We must do some calculations using that index to find the percentage of increase or decrease in the level of prices.

So how does inflation or deflation relate to the CPI?

"Price inflation" is the percentage increase in the price of the basket of products over a specific period of time. "Price deflation" is, of course, the percentage decrease in the price of the basket of products over a specific period of time. For convenience, price inflation has been shortened in common usage to simply "inflation," and similarly price deflation has been shortened to "deflation." (*Interestingly, this is not Webster's definition of inflation… More)

In order to calculate the percentage of inflation or deflation, we have to use the Consumer Price Index as a starting point.
Assume you wanted to calculate the inflation rate from July 2000 until July 2008. You need to know the CPI for the starting and ending dates: the CPI index was 172.8 in July 2000 and 219.964 in July 2008. (Note that the BLS moved to three-decimal-place accuracy in between.)

The formula is (end - start) / start, so we have (219.964 - 172.8) / 172.8 = 0.2729. That has to be converted to a percent, so we multiply it by 100 to get 27.29% inflation.

Normally, the inflation rate is calculated on an annual basis, for example from July 2007 until July 2008. That gives you the amount of inflation in one year, which is typically called "the inflation rate."

From this example we can see how the Consumer Price Index (CPI) is used to calculate the actual inflation rate.
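The formula above is simple enough to turn into code. Here is a minimal sketch using the CPI values from the example:

```python
def inflation_rate(start_cpi, end_cpi):
    """Percentage change in the price level between two CPI readings."""
    return (end_cpi - start_cpi) / start_cpi * 100

# July 2000 CPI = 172.8, July 2008 CPI = 219.964 (from the example above)
rate = inflation_rate(172.8, 219.964)
print(f"{rate:.2f}%")  # prints 27.29%
```

The same function works for any pair of dates as long as both readings use the same base year; for the usual annual inflation rate, you would plug in two CPI values exactly twelve months apart.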
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9408769011497498, "language": "en", "url": "https://marketrealist.com/2015/09/us-electricity-generation-fell-hard-september-18-week/", "token_count": 315, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01031494140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:97f6a7f7-a2ee-4537-87a1-ff93421f90cb>" }
The Edison Electric Institute (EEI) publishes electricity generation data weekly. The current report is for the week ended September 18.

Electricity generation in the United States dropped to 77.3 million MWh (megawatt-hours) for the week ended September 18, a 9.0% drop from the previous week's 84.9 million MWh. However, the week's electricity generation was higher than the 74.7 million MWh reported during the corresponding week in 2014.

Why is this indicator important?

More than 90% of the coal produced in the United States is used for electricity generation. The power utility segment is coal's largest end user. As a result, coal and utility investors should watch electricity generation trends. Electricity storage is expensive, so most produced electricity is consumed right away. Thus, electricity generation mirrors consumption.

What does this mean for coal producers?

Thermal coal is used mainly for electricity generation. Everything else being equal, a drop in electricity generation is negative for coal producers (KOL) such as Peabody Energy (BTU) and Cloud Peak Energy (CLD). In addition, coal is losing market share to natural gas in the current low natural gas price environment.

Weekly generation levels are subject to seasonal deviations. The impact on utilities (XLU) such as NextEra Energy (NEE) and Southern Company (SO) depends on the regional breakdown of electricity generation. We'll take a look at this in the next part of this series.
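The week-over-week drop cited in the report can be reproduced with a one-line percentage-change calculation (figures in millions of MWh, taken from the article):

```python
def percent_change(previous, current):
    """Week-over-week percentage change between two readings."""
    return (current - previous) / previous * 100

change = percent_change(84.9, 77.3)  # millions of MWh, from the EEI report
print(f"{change:.1f}%")  # prints -9.0%
```

This is the same arithmetic investors apply to any weekly series; comparing against the year-ago week (74.7 million MWh) rather than the prior week filters out some of the seasonal deviations the article mentions.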
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9661744832992554, "language": "en", "url": "https://mymommyneedsthat.com/children-money-management/", "token_count": 1792, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0281982421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f496465e-770d-4bd5-8f03-e5c37d913b41>" }
Teach your children about money and financial responsibility in a fun and interactive way, and raise money-smart kids who'll understand the value of money from a young age. Children's books about money are a great way to do that, and we're here to show you how.

In this progressively digital world we live in, where credit cards, online banking, and online shopping are only increasing in popularity, the concept of money may be somewhat hard for kids to comprehend. Of course, parents are already trying to teach their kids about healthy eating and the importance of physical activity, and while that's all well and good, there are many other important life skills to be learned, and financial education is one of them.

But money is a complex topic for young children, right? So what if we tell you that it doesn't have to be? Indeed, teaching your kids about money can be done in a fun and comprehensive way with the help of some engaging children's books about money, and we're here to introduce you to the best ones out there!

Before we begin, let's say a few words on why learning about money from a very young age can be valuable and beneficial to your child later in life when it comes to financial responsibility.

Why Is It Important to Teach Kids About Money From a Young Age?

We all want to see our children become financially secure adults leading successful lives. To increase the chances of this happening, they need to learn the value of money from a young age so they make good financial decisions later in life.

Firstly, teaching kids about money, especially saving money, makes them aware of other important skills like being patient and having self-control. These life skills develop in early and late childhood, so learning how to be patient and exercise self-control while learning about money during these years can have incredibly positive effects and many benefits in adulthood.
Secondly, depending on the story, children's books about money can teach them a great deal about how to set goals and accomplish them. For example, they will be able to understand the steps they need to take to buy their favorite toy: how much money they should save each week, what chores to do to earn an extra buck, how much time it will take, and more.

Lastly, they will not only learn what it means to be responsible with money, but they will acquire a number of soft skills as well. Kids who have some financial knowledge from early on can grow up to be responsible adults without the stress and anxiety most adults face today in regard to money.

Since there are no lessons about money or financial responsibility in primary schools, and since many parents don't really teach their kids money management until they are old enough to start working, books about money with fun stories and interesting illustrations are here to help! Let's introduce you to the best ones.

Walk-It Willow (Editor's Choice)

Walk-It Willow is one of the amazing children's storybooks that can be found at Clever Tykes, and this one is designed to teach kids the importance of organisation and being enterprising in a really fun way. Through Willow's endeavors, the book addresses money and money management in a fun way. It's our favorite one so far, and it's one of the best children's books about money that can be found online.

Walk-It Willow is the story of Willow, a young girl who loves her dog Stomp. She enjoys walking him every day and realizes that she can turn this activity (and her love for dogs in general) into a great business: her own dog-walking service. Along the way she faces challenges and comes to important realizations: the importance of hard work, money management, organization, problem-solving, communication, and, of course, the rewards that come when you work hard to make your goal happen.

What mistakes will Willow make?
How will she make them right? What will happen to her beloved canine Stomp? Your child will learn all of this and more by reading this fun and engaging storybook. The story is amusing, exciting, and easy to comprehend; the detailed and fun illustrations are top-notch; and the message is important and on point. All of this combined makes it our favorite book to recommend. We guarantee that both you and your kid(s) will love it!

The Berenstain Bears' Trouble with Money

With its 300 books and 260 million copies sold, the Berenstain Bears series is very famous and popular with kids, and this book is no exception. Kids enjoy learning from those bears, and in this story specifically, they'll learn about the importance of being responsible when it comes to money.

In Trouble with Money, both Mama and Papa Bear want to explain to their kids that money doesn't grow on trees. The young siblings then try to make money by starting a couple of "businesses," like opening a lemonade stand and starting their own walking service. A truly great story about earning and saving, and it comes with 50+ stickers too!

Bunny Money

Bunny Money is an adorable story about two bunnies who want to spend their savings on a gift for their grandma's birthday. However, as time passes, their savings start to decrease as Max and Ruby spend them little by little while searching for a present. Will they manage to keep enough of their savings to buy a gift for their grandmother?

This children's book is a fun tale, and it's one of the 40 books featuring the bunnies Max and Ruby written by Rosemary Wells. The story is heavily illustrated and fun to read; your kids will be so entertained that they won't notice they're learning about money management along the way!

In this book, the main character is a boy named Pete, who has the habit of saving some of the money he gets from his allowance. However, sometimes he spends too much of his savings too fast.
That being said, once he realizes this, he thinks things over and starts saving his allowance once more while strategizing how to spend it in the future. This children's book about money is recommended for kids from 5 to 8 years old. The author of the book, Harriet Ziefert, has written more than 200 books for children. This one is perfect if you want to plant the idea of saving in your kid's mind from early on. If your kid already puts some money aside, this book will make him or her think about how and when to spend it.

One Cent, Two Cents, Old Cent, New Cent

As the title says, this book is all about money. In an interesting way, of course, your child will get acquainted with various types of currency and objects used for trading that were used long ago. The story is about the history of money and mentions different forms of currency used in different cultures, some of them being shells, leather, coins, and so on.

One Cent, Two Cents, Old Cent, New Cent, with its great rhymes, also explains how in bygone eras certain temples were used as banks, and then it fast-forwards to modern banks and explains the basic concepts related to paying and earning interest, in an engaging and interesting way, of course.

This is another great illustrated book designed to teach kids about money. In this storybook, the main protagonist, Lily, goes shopping with her father. While at the shop, she learns all about the wants and needs of consumers, as the title suggests. This very cute story is recommended for kids between 5 and 8 years old. It's a guide of sorts that will help them understand how the market works and why exactly money is important, but it's so interesting your kids will barely notice they're learning. The illustrations are engaging, making it easy for the parent to spark a discussion once reading time is over.
Little Critter: Just Saving My Money

In Little Critter: Just Saving My Money, the author tells the story of a child who wants to buy a skateboard, only for his father to explain that he needs to earn and save money first in order to buy something later. Little Critter takes that lesson to heart and starts working different jobs and doing various chores to make his goal happen, like feeding the dog or selling lemonade. While he works hard to earn money, Little Critter starts to understand the value of money and money management. This is a great book if you want to teach your child more about entrepreneurship as well as earning and saving.
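The goal-setting arithmetic these stories introduce (save a little each week until you can afford the item) can be sketched as a toy calculation. The amounts below are made up for illustration, in the spirit of Little Critter's skateboard:

```python
import math

def weeks_to_goal(price, already_saved, saved_per_week):
    """How many whole weeks of saving are needed to afford an item."""
    remaining = price - already_saved
    if remaining <= 0:
        return 0  # the goal is already reached
    return math.ceil(remaining / saved_per_week)

# A $20 skateboard, $5 already in the piggy bank, $2.50 saved per week:
print(weeks_to_goal(20, 5, 2.50))  # prints 6
```

Walking through a calculation like this with a child makes the patience lesson concrete: changing the weekly amount immediately changes how long the wait is.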
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9575964212417603, "language": "en", "url": "https://thehutchreport.com/the-great-divide-between-cause-and-effect/", "token_count": 988, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.44140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f1d3f830-78b2-41b0-a3aa-70245c60f662>" }
Cause and effect is the principle of causality: one event or action is the direct result of another, or the cause is partly responsible for the effect and the effect is partly dependent on the cause. We often look to correlations in order to identify and resolve cause-and-effect relationships, and there are many of them.

Is obesity (the effect) directly related to the consumption of fast food (the cause)? Or is obesity related to the fact that people with limited disposable income can only afford to eat at fast food establishments? Or is obesity the result of poor education that leads to poorly paying jobs, which result in limited disposable income, which pushes people toward affordable fast food outlets?

There is a further complication: "correlation does not imply causation." Just because two trends seem to fluctuate in tandem doesn't prove that they are meaningfully related to one another. As an example, we can look at the correlation between per capita chicken consumption and total US crude oil imports.

Correlation is what we fall back on when we have limited information at our disposal. The less information we have, the more we are forced to rely on correlations. Conversely, the more information we have, the more transparent things become and the better we can see the actual causal relationships.

As humans, we generate and evaluate explanations spontaneously. In fact, doing so is fundamental to our sense of understanding. We don't like uncertainty and ambiguity. From an early age we respond to uncertainty by spontaneously generating plausible explanations. In our rush for an explanation, we tend to produce fewer hypotheses and search less thoroughly for information. We are more likely to form judgments from first impressions and to fail to account sufficiently for situational variables.
This happens very often among economists, and it "may" explain why they are so often wrong in their conclusions. As an example, central banks believed that accommodative monetary policies would encourage banks to extend credit to borrowers. Available information regarding lending decisions before and after negative interest rate policy (NIRP), however, indicates that banks did not increase their marginal propensity to lend. Instead, the suppression of rates by central banks narrowed banks' net interest margins and thereby discouraged credit expansion. Loan growth in Europe and Japan has remained weak and, despite the significant rally in global equity markets, bank stocks did not fare better after the arrival of NIRP. This example is itself vastly oversimplified, as a number of other factors may have played a part in reaching this conclusion.

So if this is really the case, where we as individuals tend to jump to conclusions, spontaneously generate plausible explanations, or find correlations where there are none, how can we be certain that our leaders, bankers, managers, the media, and so on are not doing the same thing? The general public is given little to no insight into the detailed thought processes behind many governmental decisions. How do we know our officials have considered all the angles and come to the best decision possible? All we are given is their decision and a political sound bite designed to provide the appearance of an explanation. We buy into these explanations because they give us a sense of certainty.

If we look at current events, we see that we are now experiencing an unprecedented level of income inequality in the country. But what is the cause of this effect? It forces us back into a vicious cycle of thought where we are once again prone to jumping to conclusions and settling for explanations based on limited information.
To better understand the complexity of these issues, you can try coming to your own conclusion with the Five Whys technique. The Five Whys is an iterative interrogative technique used to explore the cause-and-effect relationships underlying a particular problem. As an example, we have taken the recent riots and brainstormed through the exercise. This doesn't mean we have come to the proper conclusion or exhausted all the whys, but it shows how quickly finding causality can become a complex issue.

Why? – People are frustrated and are lashing out
Why? – They lack opportunities, equal opportunities and income / they are drowning in debt / injustice
Why? – Available jobs pay low salaries / expenses are increasing / fewer job opportunities / people living beyond their means / inequalities within the justice system
Why? – Increased productivity through technology has led to layoffs / poor levels of education
Why? – Management compensation is linked to increased shareholder value / decrease costs and increase profits any way possible / broken education system

If anything, this should persuade you to look deeper into our current state of affairs, question everything you hear, and not assume that the explanations you are being fed are any more accurate than what you could conclude on your own. The divide between cause and effect is greater than you can imagine.
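The "correlation does not imply causation" point made earlier (the chicken-and-oil example) can also be demonstrated numerically. The sketch below uses two made-up series that both simply trend upward over ten years; they correlate strongly even though neither causes the other, which is exactly the trap of judging from limited information:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two unrelated but upward-trending series (hypothetical data, not real
# chicken-consumption or oil-import figures):
chicken = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
oil = [x * x for x in chicken]
r = pearson(chicken, oil)
print(round(r, 3))  # prints 0.975, a strong correlation with no causal link
```

Any two series that merely share a trend will score high on this measure, which is why a high correlation coefficient on its own is weak evidence of a causal relationship.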
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9231519103050232, "language": "en", "url": "https://www.energimyndigheten.se/en/cooperation/eu-and-europe/", "token_count": 692, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.154296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dc280a94-ed15-4d90-b8a8-d7d0bd0a481f>" }
The European Union

EU Climate and Energy Objectives

Climate change, increasing import dependency, security of supply, and higher energy prices are challenges that all EU countries are facing. At the same time, the member states' energy interdependence is increasing. The common EU energy objectives build on agreements between member states.

In 2007 the EU adopted targets for 2020, which were enacted through the climate and energy package in 2009. The package is meant to get Europe back on track towards a sustainable future in a low-carbon and energy-efficient economy. To that end, the EU has committed itself to the following by 2020:

- reduce greenhouse gas emissions by 30 per cent within the framework of a global climate agreement, or by 20 per cent in the absence of an international agreement
- increase the proportion of renewable energy in the energy mix to 20 per cent
- increase the proportion of renewable fuels to 10 per cent
- increase the efficiency of energy use by 20 per cent

In October 2014 the EU also adopted common objectives for 2030. A decision was taken on a 2030 policy framework which aims to make the European Union's economy and energy system more competitive, secure, and sustainable. The EU has committed itself to the following by 2030:

- reduce EU domestic greenhouse gas emissions by at least 40 per cent below the 1990 level
- increase the share of renewable energy to at least 27 per cent of the EU's energy consumption; the target is binding at EU level
- increase the efficiency of energy use, with an indicative target of 27 per cent to be reviewed in 2020 with a 30 per cent target in mind
- reform and strengthen the EU ETS

In addition, a reliable and transparent governance system will be developed to help ensure that the EU meets its energy policy goals.

Swedish Climate and Energy Objectives

The EU policies provide a platform for Swedish climate and energy policy, and Sweden will make its contribution to achieving the Union's targets.
In 2009, the Parliament approved a comprehensive climate and energy policy which sets a number of targets for Sweden:

- a 40 per cent reduction in greenhouse gases compared to 1990
- at least a 50 per cent share of renewable energy in the energy mix
- at least a 10 per cent share of renewable energy in the transport sector
- 20 per cent more efficient use of energy compared to 2008

Long-term priorities and vision beyond 2020:

- By 2030, Sweden should have a vehicle stock that is independent of fossil fuels.
- Sweden's electricity production today is essentially based on only two sources: hydropower and nuclear power. To reduce vulnerability and increase security of electricity supply, a third pillar that reduces dependence on nuclear power and hydropower should be developed. To achieve this, cogeneration, wind power, and other renewable power production must together account for a significant proportion of electricity production.
- A vision that, by 2050, Sweden will have a sustainable and resource-efficient energy supply and no net emissions of greenhouse gases into the atmosphere.

The Swedish Energy Agency works to further the European Union's and Sweden's energy policy objectives

The Swedish Energy Agency represents Sweden in a number of committees for the preparation and implementation of EU directives on energy, for example the Ecodesign Directive and the Renewable Energy Directive. The Agency also works with information and communication on EU energy-related policies and EU-funded programmes, such as Horizon 2020 and the Regional Development Fund. This work aims to support and increase the participation of Swedish actors in these programmes.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9587209224700928, "language": "en", "url": "https://www.foreclosure-support.com/bankruptcy-sales.php", "token_count": 1883, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.072265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0dc70290-f701-4591-bda6-475d660ec96b>" }
What is bankruptcy?

Bankruptcy is a legal process through which individuals or organizations seek relief when they are unable to repay their debts. It is typically initiated by a debtor or creditor and overseen by the court. You might consider declaring bankruptcy when facing foreclosure on your home, but even when it seems like the best option, it has drawbacks such as damage to your credit score and to your ability to get a loan in the future. Most bankruptcies fall under Chapter 7, Chapter 11, or Chapter 13 of the Bankruptcy Code.

The Bankruptcy Process

Below are the basic steps of the bankruptcy process:

- Pre-Bankruptcy Counseling: Credit counseling is usually required of those looking to file bankruptcy, so this is often the first step in the process.
- Filing Bankruptcy: After obtaining counseling, individuals (or companies) file for bankruptcy. Most individuals file under either Chapter 7 or Chapter 13 (explained in more detail later). Having an attorney prepare the petition for bankruptcy is highly recommended.
- Trustee Appointed: A judge will typically appoint a trustee to oversee the case.
- Automatic Stay: Once bankruptcy is filed, an automatic stay takes effect to protect the filer from creditors.
- Meeting with Creditors: Approximately a month or two after filing for bankruptcy, you (and your lawyer) will meet with the creditors and examine all pertinent records and documents. Non-exempt assets are often turned over to the creditors at this time to help reduce your debt.

During this process, the person filing bankruptcy tends to lose his or her home if the property has equity. On the other hand, if there is no equity in the property, the homeowner can most likely keep it as long as mortgage loan payments stay up to date. If payments lapse, however, the lender can still foreclose on the property.
The property may be exempt, in which case the homeowner may keep it even if it has equity. Ultimately, the outcome depends primarily on bankruptcy laws, which vary by state. The type of bankruptcy also plays a key part in whether or not the home is acquired by creditors. When properties are lost in the bankruptcy process, potential homebuyers and investors can often purchase them below market value. As a result, bankruptcy homes are often considered by those looking for discount properties.

Why Does Bankruptcy Happen?

Bankruptcy happens for many reasons. Some of these reasons are:

Job Loss

Job loss could mean being laid off, resigning, or having an appointment terminated. Losing your job can be devastating for many people, especially those who lose their jobs without any compensation or benefits. Being without a job and without savings is one of the primary causes of bankruptcy, and paying your bills with a credit card can make the situation worse.

Loss of Property

Natural disasters like flooding or earthquakes can lead to loss of property and force the owner into bankruptcy when the property is not insured. Along with the property itself, some people may also lose valuable items that are not easy to replace.

Divorce or Separation

Another major cause of bankruptcy is marital dissolution, which can place financial strain on both parties. Legal fees, child support, alimony, and division of assets are all part of the process, and the financial burden involved can result in bankruptcy.

Medical Bills

According to a 2019 American Journal of Public Health publication, 66.5 percent of bankruptcies in the United States are tied to medical issues such as the inability to pay large medical bills or lost work time. Health insurance can help, but not in cases of job loss or extremely high bills.
Medical expenses can run into thousands of dollars, which can wipe out home equity, education funds, or retirement funds within a short period. At that point, the only option left might be to declare bankruptcy.

Excessive or Poor Use of Credit

When not used carefully, installment debt, car loans, credit card bills, and other loan payments can lead to financial problems serious enough that the borrower cannot keep up with any payment plan. When that happens, bankruptcy becomes inevitable.

Chapter 7 vs. Chapter 13

When filing for bankruptcy, you have two options as an individual: Chapter 7 and Chapter 13. Under Chapter 7, most of your assets are sold off to pay your creditors. Under Chapter 13, you are given a specific period to repay your debt, but you can keep your assets.

In Chapter 7 bankruptcy (liquidation bankruptcy), public benefits are exempt. Assets such as clothing, pensions, work tools or equipment, some part of your automobile and home equity, and Social Security are also exempt from liquidation. Assets such as property other than your primary residence, additional automobiles, boats, investment accounts, bank accounts, and valuable items will be liquidated. With Chapter 7, most remaining debt is discharged and requires no repayment, although debts like student loans, taxes, and child support must still be paid. People with low incomes generally prefer this form of bankruptcy since they have few assets.

In Chapter 13 bankruptcy (reorganization bankruptcy), you can keep your assets, but you must agree to repay your debt over a period of three to five years. You make your payments to a trustee, who then forwards them to your creditors. People with substantial property choose this form of bankruptcy to keep their holdings intact or to stave off seizure or foreclosure for a specific period.
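As a rough numeric illustration of how a Chapter 13 plan spreads repayment over three to five years, here is a hypothetical sketch (not legal or financial advice; the debt amount and trustee fee rate below are assumed for illustration only):

```python
# Hypothetical Chapter 13 repayment sketch: spread debt over a
# 36- or 60-month plan, grossed up for a percentage-based trustee fee.
def monthly_plan_payment(debt, months, trustee_fee_rate=0.10):
    """Monthly payment needed to retire `debt` over `months`,
    including an illustrative trustee fee on each dollar paid."""
    total = debt * (1 + trustee_fee_rate)
    return total / months

payment_3yr = monthly_plan_payment(30_000, 36)
payment_5yr = monthly_plan_payment(30_000, 60)
print(f"3-year plan: ${payment_3yr:,.2f}/month")  # ~$916.67
print(f"5-year plan: ${payment_5yr:,.2f}/month")  # ~$550.00
```

Actual plan payments depend on disposable income, secured debts, and district-specific trustee fees, so real figures will differ.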
Bankruptcy and Credit Score

Bankruptcy can have a serious effect on your credit score, which is why declaring bankruptcy should not be your first option. Alternatively, you could work out a manageable payment plan with your creditors. Depending on the type of bankruptcy you choose, a bankruptcy stays on credit reports for 7 or 10 years. As a result, getting a mortgage, car loan, or credit card in the future becomes difficult. Your insurance rates might also rise, and getting a new job or renting an apartment might become an issue.

How to Buy a Bankrupt House for Sale

If you are a potential homebuyer or investor looking for cheap properties, you may wish to consider looking for bankrupt houses for sale. These properties can often be purchased well below market value (even after taking trustee or negotiation fees into account), making them great investment opportunities. How, exactly, do you buy a home that was lost due to bankruptcy? Start your search with real estate listing websites and then follow the same steps as when purchasing any other property or foreclosure.

Options for Those Facing Bankruptcy

Before filing bankruptcy, it is highly recommended that you meet with an attorney to learn more about your options and the U.S. Bankruptcy Code. An attorney can help you better understand the bankruptcy process, which bankruptcy options fit your circumstances, and what alternatives to bankruptcy exist. Attorneys can also help you understand whether your property can be saved through the bankruptcy process, along with a wide variety of other pertinent matters, such as whether your student loans can be included and how to set up a repayment plan that works for you.

The Discharge in Bankruptcy

The discharge in bankruptcy occurs approximately three to four months after filing.
A discharged bankruptcy simply means that the individual is no longer legally responsible for the discharged debt.

Chapter 7. Liquidation Under the Bankruptcy Code

Chapter 7 bankruptcy, also known as liquidation bankruptcy, is what most people mean when they say they are filing personal bankruptcy. In this type of bankruptcy, the trustee sells off unprotected (non-exempt) assets to repay the creditors. Debt that cannot be paid off from the assets is generally discharged.

Chapter 11. Reorganization Under the Bankruptcy Code

Chapter 11 bankruptcy is more complex than Chapter 7 or Chapter 13. More often than not, a business rather than an individual will file Chapter 11. In this type of bankruptcy, the debtor attempts to work out a reorganization of its debt in an effort to keep all assets.

Chapter 13. Individual Debt Adjustment

Chapter 13 bankruptcy is also commonly used by individuals and, unlike Chapter 7, does not require liquidation. In Chapter 13 bankruptcy, the individual can usually keep all assets, and a repayment plan is created that must comply with bankruptcy law. What part of someone's estate may be included in the restructuring depends on state law; homes are often included.

Learn More about Bankruptcy

The information above is merely a brief introduction to bankruptcy; visit the U.S. Bankruptcy Court website to learn more about bankruptcy rules, obtain necessary forms, and find the answers to all of your bankruptcy-related questions.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9442904591560364, "language": "en", "url": "http://energyskeptic.com/2015/robert-mcnally-u-s-congressional-hearing-testimony-on-energy/", "token_count": 8884, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.3671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:abaa2735-62a4-43ce-9300-873a6fa47a7c>" }
[I think it is interesting to know what Congress hears about energy from experts, and what the official U.S. energy policies are. It is frustrating that Energy Return on Investment (EROI) is never discussed, even by intelligent analysts like McNally. Nor is the enormous ecological harm of biofuels discussed at hearings: their stripping of topsoil, depletion of aquifers, dependence on natural-gas-based fertilizer and oil, destruction of rainforests to grow palm oil, negative EROI, and the myriad reasons why cellulosic biofuels are unlikely to be developed (i.e. "Peak Soil"). Well, what else can be expected of a scientifically illiterate congress and public? With so many leaders crowing energy independence, the train is picking up speed as it heads for the ecological brick wall, not slowing down.

Alice Friedemann, www.energyskeptic.com, author of "When Trucks Stop Running: Energy and the Future of Transportation", 2015, Springer]
“Energy,” as Nobel chemist Richard Smalley noted in 2003, “is the single most important factor that impacts the prosperity of any society.” Fossil-based energy or hydrocarbons–oil, gas, and coal–are far superior to other primary energy sources because they are dense, highly concentrated, abundant, and comparatively easy to transport and store. That is the case now, and it is expected to be the case in the coming decades. The latest EIA International Energy Outlook forecasts that world energy consumption will rise by 53 percent by 2035 and fossil fuels ’ share of total energy consumption will rise from 74 to 79%. Patience about the time it takes to transform energy systems The pace of energy transformations depends on both the availability of economical stores of energy and the development of devices that can turn those energy stores into “work” such as light, heat, and mobility. Major energy transitions take a very long time, measured in decades if not generations. The respected energy expert Vaclav Smil in 2008 “ Moore’s Curse and the Great Energy Delusion, The American Magazine: “Energy transitions” encompass the time that elapses between an introduction of a new primary energy source oil, nuclear electricity, wind captured by large turbines) and its rise to claiming a substantial share 20 percent to 30 percent) of the overall market, or even to becoming the single largest contributor or an absolute leader (with more than 50%) in national or global energy supply. The term also refers to gradual diffusion of new prime movers, devices that replaced animal and human muscles by converting primary energies into mechanical power that is used to rotate massive turbogenerators producing electricity or to propel fleets of vehicles, ships, and airplanes. There is one thing all energy transitions have in common: they are prolonged affairs that take decades to accomplish and the greater the scale of prevailing uses and conversions the longer the substitutions will take. 
The second part of this statement seems to be a truism, but it is ignored as often as the first part: otherwise, we would not have all those unrealized predicted milestones for new energy sources.

The main reason why it would take many decades to transform our energy system is that our energy system is colossal. Developed countries have made, and continue to make, enormous investments in fossil energy production, transportation, refining, distribution, and consumption systems and devices that could not quickly be replaced in any reasonable scenario, even if an alternative energy source were available. Whether one regards our society's massive investment in and dependence on hydrocarbons as an addiction or a blessing, it is here to stay for many more decades.

Humility and restraint about predicting, much less attaining, arbitrary and aggressive energy targets

The historical record is littered with overly optimistic or scary predictions and policy targets, by experts and non-experts alike. While energy surprises can be humbling for analysts, too often leaders and observers ignore technology, geology, and economics and either predict or prescribe unachievable targets. They range from periodic cries of imminent peak oil, through confident predictions in the 1950s that nuclear energy would be "too cheap to meter," to President Nixon's declaration that the US would be energy independent by 1980. Widespread adoption of electric cars or deployment of renewable energy technologies has a long and sad history of failure going back over a century. Just six years ago, Congress passed a law mandating 36 billion gallons of biofuels consumption by 2022 that EIA analysts say cannot be met given economic and scientific realities. In July 2008, former Vice President Al Gore called for the US to commit to producing our entire electricity supply from renewable sources within 10 years.
Though he described the goal as "achievable" and "affordable," not one energy expert I am aware of would agree this is even remotely possible. At best, arbitrary and aggressive targets can mislead the public about the complexities and uncertainties involved in energy market transformations; at worst, when such targets are married to costly mandates or subsidies, they can become expensive policy errors. I would respectfully recommend policy makers refrain from basing policy on arbitrary, unrealistic targets, much less basing mandates or subsidies on them. Energy transformations are more akin to a multi-decade exodus than a multi-year moonshot, and pretending otherwise misleads citizens and distracts from serious debate about real circumstances and solutions.

Senate Hearing. June 23, 2015. American Energy Exports: Opportunities for U.S. Allies and U.S. National Security. Subcommittee on Multilateral International Development, Multilateral Institutions, and International Economic, Energy, and Environmental Policy

Oil and natural gas are the lifeblood of modern civilization. Their abundance and affordability are prerequisites for thriving economic growth, high living standards, and ample employment. They are also an essential requirement for our national security. U.S. foreign policy has historically benefited from our strong position as a producer and exporter of energy. While we were known as the "Arsenal of Democracy" during World War II, we were equally an "Arsenal of Energy," supplying nearly six out of seven barrels consumed by the Allies.[1] Even after net crude imports began rising steadily after the war, our control of spare production capacity enabled us to supply our allies and prevent economically damaging price spikes that would have resulted from oil supply disruptions associated with Middle East conflicts in 1956 and 1967.
But after the energy, geopolitical, and economic convulsions of the 1970s, our confidence in our domestic abundance and control shifted to apprehension about dependence and vulnerability. For the past forty years our foreign and national security policy planning has prioritized preparing against supply interruptions and price spikes, protecting Middle East oil fields from hostile control, and protecting the supply lines between the region and global markets. In this respect, the tremendous and unexpected boom in domestic oil and gas production in recent years is an enormous blessing for our country. In the last ten years, our net oil imports fell from 12.5 mb/d to 5 mb/d (in the first quarter of 2015), or from 60% to 24% of supply.[2] For the first time since the 1950s, most official projections see U.S. net energy imports, which include all fuels, declining and eventually ending.[3] Our newfound abundance does not mean we can ignore the Middle East, which holds nearly half of the world's proven oil reserves and supplies one-third of global production. That region will remain a source of potential price and supply shocks, and its stability will therefore remain a vital national interest. But our domestic boom does confer enormous benefits and requires that we change our thinking about energy. It is important to realize that we need not export large quantities of gas to benefit from a foreign policy standpoint. Just having the option to buy from the US strengthens the bargaining power of our allies when they negotiate long-term contract prices with suppliers like Russia. Last December, Lithuania opened a costly LNG import terminal, an example of an ally willing to pay a security premium for a diversified source of supply. Lithuania's new terminal forced Gazprom to drop its prices to Lithuania, reportedly by 20%.
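The import-share figures above can be reproduced from the EIA supply data cited in the testimony's footnotes. A quick sketch, using the footnoted total product supplied of roughly 20.8 mb/d as the denominator for both years (an approximation, since 2015 supply differed somewhat):

```python
# Reproduce the testimony's import-dependence shares from EIA figures.
def import_share(net_imports_mbd, supply_mbd):
    """Net imports as a percentage of total product supplied."""
    return 100 * net_imports_mbd / supply_mbd

print(round(import_share(12.5, 20.8)))  # -> 60  (2005)
print(round(import_share(5.0, 20.8)))   # -> 24  (Q1 2015)
```

Both results match the 60% and 24% shares quoted in the testimony.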
While much attention is paid to the spectacular turnaround in our oil supply and imports, it is worth remembering that our need for imported liquefied natural gas (LNG) underwent a similar and surprising transition. Between 2002 and 2007 our LNG imports more than tripled, and officials were expecting another doubling. We were building terminals to import from suppliers like Qatar and Russia. But after the shale gas revolution increased proven reserves by 77%, from 200 trillion cubic feet (Tcf) in 2004 to 354 Tcf last year, we are now on track to become a net natural gas exporter by 2017, according to EIA.

[1] A History of the Petroleum Administration for War, 1946, p. 1.
[2] June 2015 Short Term Energy Outlook, Table 4a. http://www.eia.gov/forecasts/steo/pdf/steo_full.pdf For historical data, see EIA. In 2005, total product supplied was 20.8 mb/d and net imports were 12.5 mb/d.
[3] http://www.eia.gov/pressroom/presentations/gruenspecht_06092015.pdf, slide 2

McNally, B. July 19, 2011. Outlook for US Biofuels. 2011 Agricultural Symposium, Federal Reserve Bank, Kansas City, Missouri

My outlook for biofuels is, in a word, stark. First, corn ethanol's political power in Washington has peaked and is now in surprisingly rapid decline. Future policy support is blocked, and past policy supports are being scaled back. No one expected such a dramatic turnabout, the speed and extent of which is startling. Corn ethanol will be lucky to hold on to a 15 billion gallon per year (bgy) blending mandate, and other, "advanced" biofuel mandates are likely to be reduced by future Congresses or EPA. This shift in policy support for corn ethanol is not yet fully factored into commodity market analysts' and energy investors' expectations. Second, Washington is unlikely to help ethanol surmount the main public policy impediment to greater biofuels blending, i.e. the 10%-of-gasoline "blend wall." Washington's new power constellation and fiscal austerity imperative will limit the future regulatory or fiscal support needed to push ethanol into intermediate blends (e.g. E15) or E85. In the absence of high public support, future growth in ethanol will require technical breakthroughs that dramatically lower costs and allow for production at commercial scale. Finally, when ethanol is blended at levels below the blend wall, prices will depend on ethanol's suitability as a substitute for gasoline, which in turn depends on oil prices. Oil prices are likely to see greater cyclical swings as OPEC is not investing in enough capacity to retain an adequate supply buffer with which to dampen volatility. Greater oil price swings will reduce certainty and bedevil investment in conventional and bio-based energy.

When OPEC supplanted the United States 40 years ago as the dominant force in global oil markets, oil prices rose and imports soared, and energy security became a top policy priority. To promote the growth of a domestic transportation fuel supply, Washington exempted ethanol from part of the federal motor-fuel taxes, placed a tariff protection on imports, mandated government fleet purchases, and extended loans and loan guarantees for ethanol plant investment and federal R&D. Later, policymakers added pro-ethanol incentives in federal fuel economy rules and provided a volatility waiver to the formula in the oxygenated and reformulated fuels programs. Although President Reagan pared back some support for ethanol, Republican ethanol champions such as Senators Dole, Lugar, and Grassley, as well as longtime Senate Energy Committee Chairman Pete Domenici, protected the blending credit, and the tariff protection survived and was increased. Ethanol has historically enjoyed strong voting blocks in the House and Senate, and the importance of Iowa's role in the presidential nomination process is not lost on aspiring presidential candidates.
In the 1990s another rationale for ethanol blending emerged: environmental protection. The 1990 Clean Air Act Amendments (CAAA) mandated oxygenates in gasoline to reduce carbon monoxide emissions resulting from gasoline combustion. And as ethanol's chief competitor in the oxygenate market, MTBE, was phased out due to concerns over water contamination, ethanol benefited further.

In the last decade, both energy security and environmental rationales for ethanol blending combined to create a third, and by far the biggest, political wave of support for ethanol. Terrorist attacks and oil price gyrations renewed national alarm about energy security, and the reduction of greenhouse gas emissions became the holy grail of the environmental movement. By offering benefits and political support to both causes, ethanol supporters succeeded, via the 2005 and 2007 energy policy acts, in achieving a new and powerful policy support for ethanol: a large and direct blending mandate. Specifically, in 2007 Congress ordered that the US blend 15 bgy of ethanol into gasoline by 2015, which translates into a conversion of some 40% of the US corn crop into 10% of the gasoline pool. And the nation must consume another 21 bgy of advanced (cellulosic, not corn starch-based) ethanol by 2022. From an energy policy and political perspective, the ethanol mandate is probably the single most impactful energy policy Washington has implemented in the last 11 years.

From a financial market perspective, it is no secret that neither Wall Street nor the oil industry is terribly fond of ethanol on its merits. But market participants have come to believe ethanol is a winner in Washington. As Senator Feinstein observed: "Ethanol is the only industry that benefits from a triple crown of government intervention: its use is mandated by law, it is protected by tariffs, and companies are paid by the federal government to use it." Investment in ethanol production and actual blending soared.
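The mandate's footprint (roughly 40% of the corn crop yielding roughly 10% of the gasoline pool) can be sanity-checked with back-of-the-envelope arithmetic. The gasoline pool size, ethanol yield per bushel, and corn crop size below are assumed round numbers for illustration, not figures from the speech:

```python
# Rough check on the 15 bgy RFS mandate's footprint.
MANDATE_BGY = 15.0              # mandated ethanol, billion gallons/year
GASOLINE_POOL_BGY = 140.0       # assumed US gasoline demand, billion gal/yr
ETHANOL_YIELD_GAL_PER_BU = 2.8  # assumed ethanol yield per bushel of corn
CORN_CROP_BBU = 13.0            # assumed US corn crop, billion bushels

gasoline_share = 100 * MANDATE_BGY / GASOLINE_POOL_BGY
corn_share = 100 * (MANDATE_BGY / ETHANOL_YIELD_GAL_PER_BU) / CORN_CROP_BBU
print(f"share of gasoline pool: {gasoline_share:.0f}%")  # ~11%
print(f"share of corn crop:     {corn_share:.0f}%")      # ~41%
```

Under these assumptions the mandate comes out close to the "10% of gasoline, 40% of corn" characterization in the text.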
Commodity analysts and traders began to assume a greater part of future liquid fuel demand would be met by biofuels. And oil companies began to acquire ethanol facilities and started to view corn fields as upstream energy assets.

As we turn to the near past and present, it is striking to watch how ethanol's fortunes have fallen so hard and so fast in Washington. The change was completely unexpected and is still underway, and market participants have been slow to realize it. I must admit, as one who has been noting the turnaround in ethanol's fortunes over recent years, the collapse in recent weeks has been breathtaking.

With the benefit of hindsight, signs of the trend shift emerged in 2008, when agricultural commodity prices soared as ethanol was ramping up in response to the 2007 RFS. Of course, other factors were also at work in the commodity price boom. But there had been no prior official analysis by EIA or anyone else of the impact of the RFS on grain prices. Unusually for such a major energy policy initiative, Washington mandated first but analyzed and debated later. Now well underway, the food-versus-fuel debate will rage for years. But in Washington perception matters as much as reality, and the perception was and is that biofuels mandates contributed to rising food prices.

The second shift came in 2009, when the always-tenuous alliance between the environmental community and the ethanol community began to sour. While green groups appreciated corn ethanol's utility in reducing carbon monoxide, they were irked by exemptions from tough rules limiting vapor pressure. Nor did they like the fossil fuel consumption, land-use impacts, and life-cycle carbon emissions associated with higher ethanol blending. But as long as cap-and-trade was on the table in the late-Bush and early-Obama administrations, Greens held their noses and allied with ethanol.
Greens did lay some traps in the path of potential corn ethanol growth by insisting in the 2007 RFS that biofuels blending above 15 bgy come from more efficient, less carbon-emitting sources than corn, such as cellulosic ethanol. But in the last two years, the Great Recession and Republican gains in the 2010 election have taken cap-and-trade off the table, and as a result the falling out has gathered steam. Now that the chief rationale for the ethanol-green alliance has fallen away, tensions are laid bare and the gloves are coming off. Green groups are stepping up opposition to ethanol on grounds that it emits high amounts of carbon on a life-cycle basis and that blending credits are an expensive way to cut carbon emissions. The Congressional Budget Office estimated blending credits cost about $750 per ton of CO2-equivalent reduction.[2]

The third, and I would argue most important, challenge corn ethanol faced was the emergence of fiscal austerity and the need to tighten fiscal policy, which is now the primary focus of the Republican-controlled House and also the top priority of the Senate and White House. And given the size of our fiscal imbalances and the election outlooks of most observers, it is fair to assume Washington's budget-cutting imperative won't be going away soon. Even those without a strong anti-ethanol bias found it hard to justify continuing a blending credit for a product whose demand is mandated. Environmental groups joined with their usual foes on letters to Congress opposing E15. Long envied, courted, and respected, ethanol now finds itself vulnerable, low-hanging fruit, facing an "unholy coalition" (environmentalists, fiscal conservatives, the oil and food industries, and small engine manufacturers) able and willing to block its growth and take back its prior gains.
The first tangible signs that corn ethanol was in trouble in Washington came during the E15 debate in 2010, when Congress and the White House failed to direct EPA to grant ethanol the sweeping waiver for E15 it desired. Then the Tea Party and Republican House came to town. Turning first to E15, the House voted twice to deny federal funding for E15 blending pumps and storage tanks, by 262-158 and 283-128, and voted 285-136 to block E15 waiver implementation. Then the $6bn-per-year blending credit moved to the center of the bulls-eye. In June, the Senate voted 73-27 for a Coburn/Feinstein proposal to end the blending credit immediately rather than wait for end-year expiration. A strong reversal from the 1990s, when it was the anti-ethanol forces that typically lost Senate votes with counts in the 20s.

The most recent indication of how far corn ethanol's star has fallen came during President Obama's recent news conference (actually the first Twitter town hall). He raised eyebrows calling corn ethanol producers "probably the least efficient producers [compared with cellulosic]" and saying "it's important for even those folks in farm states who traditionally have been strong supporters of ethanol to examine are we, in fact, going after the cutting-edge biodiesel and ethanol approaches that allow, for example, Brazil to run about a third of its transportation system on biofuels. Now, they get it from sugar cane and it's a more efficient conversion process than corn-based ethanol. And so us doing more basic research in finding better ways to do the same concept I think is the right way to go." The President reportedly has put the blending credit on the table to help offset a continuation of the payroll tax cut.

Adding further support to the negative outlook for ethanol, official energy analysts making long-term projections of fuel mix are becoming more cautious about biofuels growth.
Whereas International Energy Agency (IEA) projections had ethanol accounting for almost half of gasoline demand growth in the last five years, IEA now projects the fuel will account for less than a quarter of demand growth in the next five, despite higher projected oil prices,[3] due to higher corn prices and greater uncertainty around mandates.[4] IEA sees global biofuels rising from 1.8 mb/d to 2.3 mb/d by 2016, displacing some 5.3% of gasoline and 1.5% of diesel by 2016 on an energy-content basis.[5] IEA does not expect cellulosic biofuels to achieve widespread cost competitiveness with conventional gasoline until 2030, despite aggressive mandates. (Footnotes: EIA, March 24, 2011, http://www.eia.gov/pressroom/presentations.cfm, slide 4; IEA projects advanced biofuels will rise from 20 kb/d now to 100-130 kb/d in 2016.) Even DOE's forecasting arm, the Energy Information Administration, projects the US will fail to meet advanced biofuels targets by 2022.

Discussion about weakening the RFS has already started in Washington. Senator Inhofe (R-OK) and Representative Issa (R-CA) have introduced the Fuel Feedstock Freedom Act, which would allow states to withdraw from the RFS. However, state opt-outs are likely to be logistically difficult if not unworkable. Eventually either Congress or EPA will probably reduce the mandate to prevent it from colliding with the blend wall and raising gasoline prices. The ethanol lobby saw the blend wall danger and first tried to surmount it by getting EPA approval for "intermediate" blends above 10%, such as 15% ethanol (E15). Ethanol forces are trying to secure federal funding and indemnification for intermediate-blend infrastructure and consumer acceptance. While EPA (grudgingly, I suspect) granted partial approval for E15 blends, they did so in the full knowledge that very little is likely to be sold due to large remaining infrastructure compatibility, cost, and liability concerns, as spelled out in a recent GAO report.[9]
Even ethanol-laden companies like Marathon and Valero said they would not offer E15. While ethanol forces took heart when Senator McCain's bill against ethanol pump funding failed 40-59, it is far from certain that Congress will be in the mood to grant ethanol additional funds or legal protection to enable E15 growth.

Grains and oil converge

From a commodity market perspective, it is noteworthy that grain and fuel prices are becoming more correlated and volatility is going up. Wallace Tyner noted the rapid explosion in ethanol's market share has established a high and positive correlation between crude oil and corn that did not previously exist. Below the blend wall, the price of crude will drive ethanol prices; above the blend wall, the price of corn will drive ethanol prices.

There are also important linkages between the RFS and higher grain price volatility. As the RFS mandate rises, it will introduce a price-insensitive source of demand for corn. That in turn will impart greater price volatility back onto agricultural markets. Two academics recently estimated that at times when the RFS is driving ethanol demand instead of high oil prices relative to corn, inherent volatility in US grain markets will rise by about 25%, and volatility of US coarse grain prices in response to supply-side shocks in energy markets will rise by almost one-half.

A word about biodiesel and wind energy

Biodiesel history has mirrored that of corn ethanol. The inventor of the diesel engine, Rudolf Diesel, actively considered agricultural feedstocks as a fuel. But petroleum distillate established a dominant position, though the oil price hikes of the 1970s renewed interest in homegrown alternatives. Commercial production of biodiesel began in the 1990s, but only increased sharply after 2004, when a $1 blending/production credit was implemented.
In 2005, supplemental credits for the "renewable diesel tax credit" ("renewable" diesel does not use alcohol in conversion) and the "small agri-biodiesel production credit" also went into effect. Biodiesel production was around 30 million gallons before 2005, but by 2008 was over 700 million gallons per year, with a large portion exported (though the EU has since imposed an import tariff that has hurt US exports).

Biodiesel remains expensive compared with petroleum distillate. Biodiesel economics feature a high correlation between soybean oil and conventional diesel prices, since it takes a gallon of soybean oil to produce a gallon of soy-based biodiesel. In addition, soy-based biodiesel has a slightly lower energy content than conventional diesel. Bruce Babcock, of Iowa State University, has noted biodiesel marginal costs are $2 per gallon higher than diesel, requiring a $1.00 credit and a $1.00 RIN price. This makes most analysts cautious about the outlook for biodiesel growth. IEA projects biofuel-based distillate will account for only 4% of diesel demand growth in the next five years, compared with having taken 9% over the last five. EIA expects US biodiesel use to rise from 0.1% of total liquids supply, or 0.6% of diesel fuel consumption, in 2010 to 0.6% of total supply and 3.0% of diesel demand by 2035.

The $1 per gallon biodiesel blending credit does not attract as much support or opposition as the ethanol blending credit. Because biodiesel blending, and therefore subsidy costs, have been lower, it has avoided the attention of the budget cutters, so far. But being small has its downsides too: Washington has frequently let the biodiesel credit expire with barely a whimper. When the credit last expired in 2010, the industry estimated production fell 42 percent and nearly 9,000 jobs were lost. Production fell despite a retroactive and rising RFS mandate, and exports were hurt by an EU import tariff.
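Babcock's point about the biodiesel cost gap reduces to simple arithmetic. The three dollar figures below come from the text above; everything else is just bookkeeping:

```python
# Soy biodiesel's ~$2/gal marginal-cost gap over petroleum diesel must be
# bridged by the $1 blending credit plus a ~$1 RIN (Renewable Identification
# Number) price for blending to break even.
cost_gap = 2.00        # biodiesel marginal cost above diesel, $/gal
blender_credit = 1.00  # federal blending credit, $/gal
rin_price = 1.00       # RIN price needed, $/gal

uncovered = cost_gap - (blender_credit + rin_price)
print(f"Gap left after credit + RIN: ${uncovered:.2f}/gal")
```

With the full credit and a $1 RIN the gap closes exactly; if the credit lapses, as it did in 2010, either the RIN price must double or blending becomes uneconomic, which is consistent with the 42 percent production drop cited above.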
As for wind, challenges to large-scale commercialization are fairly well understood. They include intermittency, austerity, distance from load centers, political opposition, and low natural gas prices. However, I am skeptical that $4 per MMBtu natural gas will endure for too long, given questions about the economics and politics of shale gas production as well as strong political opposition to new nuclear and coal build-out. But ultimately wind cannot scale unless large cost and technological barriers, not least storage and transmission, are broken and public opposition on footprint grounds is overcome.

- Babcock, Bruce, The State of Biofuels Today, Iowa State University, April 2011
- Babcock, Bruce A., Mandates, Tax Credits, and Tariffs: Does the U.S. Biofuels Industry Need Them All? CARD Policy Brief, Iowa State University, March 2010
- Babcock, Bruce and Carriquiry, Miguel, A Billion Gallons of Biodiesel: Who Benefits?, Iowa Ag Review Online, Winter 2008, http://www.card.iastate.edu/iowa_ag_review/winter_08/article3.aspx
- Congressional Budget Office, Using Biofuel Tax Credits to Achieve Energy and Environmental Policy Goals, July 2010
- Congressional Research Service, Intermediate-level Blends of Ethanol in Gasoline, and the Ethanol "Blend Wall," January 28, 2010
- General Accounting Office, Biofuels: Challenges to the Transportation, Sale, and Use of Intermediate Ethanol Blends, June 2011
- Glozer, Ken G., Corn Ethanol: Who Pays? Who Benefits? Hoover Institution Press, 2011
- Hertel, Thomas W., and Beckman, Jayson, Commodity Price Volatility in the Biofuel Era: An Examination of the Linkage Between Energy and Agricultural Markets, July 2010
- International Energy Agency, Medium Term Oil and Gas Market Report, June 2011
- Tyner, Wallace E., The Integration of Energy and Agricultural Markets, presented at the 27th International Association of Agricultural Economists Conference, Beijing, China, August 16-22, 2009
- Tyner, W., Dooley, F., Hurt, C., and Quear, J., Ethanol Pricing Issues for 2008, Industrial Fuels and Power, 2008

Serial No. 112-89. December 16, 2011. Changing energy markets and U.S. National Security. House of Representatives. 69 pages.

Robert McNally, President of the Rapidan Group, on Changing Energy Markets and US National Security.

Oil is the only major energy commodity we import and lies at the center of our national security concerns. Our energy security is and will remain strongly linked to trends and developments in the global oil market, not just our import share. We are and will remain vulnerable to price shocks caused by tightening global supply-demand fundamentals and geopolitical disruptions anywhere in the global oil market. And the strategic importance of the Persian Gulf region and its enormous, low-cost hydrocarbon reserves is likely to grow in the coming decades as Asia taps them to fuel growth. Our geopolitical and homeland security interests will remain closely bound to the security of the Persian Gulf region, the sea-lanes to and from it, and the ability to prevent Gulf countries from spending their windfalls on threats to US and global security.

It must not be overlooked that the world urgently needs new production just to offset declining production in mature fields. The global oil industry needs to find an amount equal to two-thirds of existing conventional production, or 47 mb/d, in coming decades just to offset declines in mature fields.
This is in addition to the new oil needed to meet demand growth in Asia and the Middle East. (Ethanol accounts for about 10% of gasoline, and EIA projects all biofuels will rise from 4% of liquids supply in 2009 to 11% by 2035.) While higher US and hemispheric production can and should help fill the gap, OPEC and the Persian Gulf producers hold the bulk of the world's low-cost, proved reserves (70% and 55%, respectively).

Foreign policy makers should take into account three global energy market changes that will pose large challenges to our energy and economic security.

The first is voracious growth in demand for energy, as well as for other natural resources, particularly from densely populated, fast-growing Asia, especially China and India. Achieving modern living standards in developing countries is impossible without consuming large amounts of dense, storable, reliable, and affordable energy. By these measures, fossil fuels are and will remain far superior to alternatives, especially in transportation. Unfortunately, no large-scale, commercially viable alternatives to oil exist or are visible on the horizon. The US and other developed countries have made massive investments in oil fields, pipelines, terminals, refineries, tanks and dispensing stations in past decades, and rising Chinese, Indian and other Asian and Middle Eastern economies are starting to do the same.

Second, China and India are going to become tremendously dependent on flows of oil from the Middle East. The International Energy Agency projects China's oil import dependence will rise from 54% in 2010 to 84% in 2035, and India's will rise from 73% to 92% over the same period. The lion's share of these imports will come from the Middle East. This is going to make China and India extremely concerned about protecting their access to Gulf supplies and sea-lanes, which is already a strategic concern for the United States.
Third, oil prices are going to gyrate more wildly than in the past as Saudi Arabia and OPEC's ability to prevent price spikes erodes due to reduced spare capacity. This transition is overlooked but just as important as the first two noted above. The world oil market is leaving the relatively stable OPEC era and entering a new "Swing Era" in which large price swings rather than cartel production changes will balance global oil supply and demand. The Swing Era portends much higher oil price volatility, investment uncertainty in conventional and alternative energy and transportation technologies, and lower consensus estimates of global GDP growth. Ironically, Western governments and investors will miss OPEC, or at least the relative price stability OPEC tried to provide.

In summary, soaring Asian energy demand, sharply increasing Asian dependence on the Persian Gulf, and wild oil price gyrations pose major challenges to US energy security and foreign policy.

What is the future role of OPEC? What happens to price stability?

The changing role of OPEC, with its implications for oil price stability, is the most important, and so far overlooked, feature of global energy markets. It will have enormous consequences for US economic and foreign policy, especially in our bilateral relations with Saudi Arabia, as noted further below. In short, soaring global demand and constrained supply growth is causing OPEC to lose its spare capacity cushion and therefore its ability to stabilize oil prices. While intuitively OPEC losing control may seem like a good thing, it actually means global oil prices, and therefore our pump prices, are going to swing much more wildly in the future, at times high enough to contribute to recessions as they did in 2008.

As a commodity, oil exhibits what economists call a very low price elasticity of demand. In plain English, this means supply and demand are very slow to respond to price shifts.
Oil is a must-have commodity with no exact substitutes; when pump prices rise, most consumers have little choice in the near term but to pay more rather than buy less. And on the supply side, it takes years to develop new resources, even when the price incentive to do so rises sharply.

Since the beginning of the modern oil market, producers have tried to mitigate the tendency of oil prices to swing wildly. Standard Oil, the Texas Railroad Commission and the "Seven Sisters" (major western oil companies) succeeded at stabilizing prices by controlling supply, most importantly by holding spare production capacity back from the market and using it to balance swings in supply and demand. The 1967 Arab oil embargo did not lead to a major oil disruption or price spike, partly because the United States had spare capacity in reserve and increased production to make up for lost Arab producer exports. The 1973 Arab oil embargo did lead to an oil price spike, mainly because the year before – in March 1972 to be exact – the United States ran out of spare capacity. OPEC took over control of the global oil market from the US and the Seven Sisters in the early 1970s.

Since the mid-1980s, OPEC's main tool to stabilize prices has been holding and using spare production capacity. If demand jumped unexpectedly or if supplies were suddenly disrupted, OPEC producers with spare capacity, especially Saudi Arabia, would release more oil, reducing the need for prices to swing in order to balance supply and demand. But the years 2005-2008 marked the first time spare capacity ran out in peacetime since 1972. As in 1972, the reason was demand racing faster than production. But today, no new cartel waited in the wings to satisfy global crude appetites. In 2008, market balance was achieved by sharply rising oil prices along with the financial crisis.
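The low price elasticity described above can be illustrated with a toy calculation. The elasticity value of -0.05 is a hypothetical assumption chosen for illustration, not a figure from this testimony:

```python
# With price elasticity e, %change in quantity demanded ≈ e * %change in price.
# So the price move needed to destroy a given share of demand is (%dQ) / e —
# and when e is tiny, that price move is enormous.
elasticity = -0.05      # assumed short-run price elasticity of oil demand
supply_shock = -0.05    # a 5% supply loss must be met by a 5% demand cut

required_price_change = supply_shock / elasticity
print(f"Price must rise ~{required_price_change:.0%} to clear the market")
```

With these numbers, absorbing a mere 5% supply loss requires prices to roughly double, which is the mechanism behind the "Swing Era" price spikes the testimony describes.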
While many in Washington, Paris, Riyadh, and Beijing publicly blamed speculators, energy experts and economists pointed instead to strong demand for a price-inelastic commodity running up against a finite supply.

Going forward, OPEC will still be able to influence how and when oil prices bottom. It can and will likely still take oil off the market to keep prices from falling or to raise them, as it did in late 2008 and 2009. But OPEC's ability – really, Saudi Arabia's ability – to prevent damaging price spikes has eroded. Therefore a replay of 2005-2008 is more a question of when than if. Global GDP growth remains oil intensive. When it picks up (and there are many macroeconomic risks currently, so the timing is uncertain), net non-OPEC supply growth is not expected to rise fast enough to meet incremental demand, requiring OPEC producers to increase production. OPEC is not investing enough in total production capacity to meet demand growth and still maintain the 4-5 mb/d spare capacity buffer needed to assure market participants it can respond to disruptions or tighter-than-expected fundamentals by adding supply. Saudi Arabia, the main spare capacity holder, says it will hold only 1.5 to 2.0 mb/d of spare capacity, and most other OPEC countries hold little if any back in spare.

As OPEC falters, the price mechanism will return to balance the market through demand destruction, enforcing the iron law that consumption cannot exceed production. Even if our import dependence declines, we will still be vulnerable to price gyrations that are very harmful for consumers and producers and will bedevil economic and foreign policymaking.

What role do/should energy markets play in U.S. national security policy? In U.S. defense posturing?

Even if our import dependence falls, the US will still have a vital national security interest in the Persian Gulf region. Instability or disruptions in the Gulf will be felt quickly and directly at the pump in the US.
Gulf producers will earn billions of dollars in revenue, and the US has an interest in seeing that those dollars do not finance terrorism or other threats to our security. And the US will need to ensure no country can use oil as a weapon or threaten vital trade routes and chokepoints. While the US must find ways to share the costs, burdens, and responsibilities for protecting the global energy commons, our interest in preventing a regional or external hegemon from dominating the Persian Gulf will remain as vital in the next thirty years as it was in the past.

The Carter Doctrine and its Reagan corollary must remain cornerstones of our energy security doctrines. The Carter Doctrine states: "An attempt by any outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States of America, and such an assault will be repelled by any means necessary, including military force." And its Reagan corollary extends the policy to include hegemonic threats to our Gulf allies by hostile regional powers, like Iran.

It will be especially important to repair and strengthen the fraying US relationship with Saudi Arabia. The relationship will likely loosen somewhat as Saudi Arabia and other Gulf producers see future sales growth and profits in Asia instead of the western hemisphere. But something bigger is at stake: the grand bargain whereby the US provides Saudi Arabia protection from regional and global adversaries in return for Riyadh ensuring stable oil supplies and prices. This grand bargain has served our national and economic interests, and mitigated occasional wars and disruptions in the region. At present, each side is less certain the other can uphold its end of the bargain. If, as noted above, Saudi Arabia can no longer prevent oil price spikes from damaging the economy, it becomes less important in global affairs and US foreign policy.
And if the US can no longer protect Saudi Arabia from a nuclear, belligerent Iran, then Riyadh's interest in cooperating with us in many areas, including counter-terrorism and regional security, could decline.

Vulnerability of current and future energy markets to terrorism

Terrorists understand the vulnerability of energy infrastructure. One consequence of low spare capacity is that any disruption, even of a relatively small size, can lead to an oil price spike. We saw this earlier this year in Libya, when the world lost about 1.7 mb/d of supply, equal to about half of total OPEC spare capacity. Prices jumped about $15 per barrel, helping to push gasoline prices here up to $4.00 per gallon and thereby hurting family budgets and economic growth.

What role does energy play in China's foreign policy? What can be done to check China's energy development in the western hemisphere?

China's leaders are preoccupied with finding resources to supply its voracious growth, including energy resources. As its oil imports increase rapidly, China has followed an energy strategy similar to our policies over recent decades. As the US did forty years ago, China is reacting to the prospect of high and rising dependence on imports by building strategic stocks and implementing fuel economy and other efficiency standards. China is also fostering the growth of globally competitive energy companies and diversifying its sources of energy. And it is developing political relationships and strategic capabilities to protect its investment and supply lines. China's energy security policies could pose major indirect threats to our national security if Beijing concludes it can and should ignore our national security interests when engaging with foreign producers. This is of concern with Sudan, Venezuela, and especially Iran.

The Energy Information Administration (EIA) estimates US shale gas production has increased twelve-fold over the last decade, now amounting to 25% of total production.
EIA projects shale gas will rise to 47% of total production by 2035. Whereas a few years ago we faced the prospect of importing increasing amounts of liquefied natural gas (LNG), we are now permitting export facilities. This new supply holds the potential to revitalize our chemical industry and economically depressed regions of our country, use more natural gas in electricity generation, and possibly fuel natural gas vehicles (though the cost of converting car and truck fleets and fueling infrastructure to natural gas would be very high and the transition would be long, making it impractical except in some centrally-fueled commercial fleets).

Even if we didn't import a drop from the Middle East, our vital national interest there would remain. The Middle East and the Persian Gulf are and will remain the world's most important energy region. As of 2009 the region held 56 percent of global proven oil reserves, nearly all of those in the Persian Gulf. With a higher market share and higher prices, Middle Eastern oil producers are going to earn trillions and trillions of dollars in revenues. We must remain engaged in that region partly to ensure that windfall is not spent to threaten us or our allies. Another interest is to make sure that China and India's soaring dependence on Middle East oil flows, mentioned earlier, does not lead to strategic competition or conflict. The International Energy Agency sees China's import dependence headed over 84 percent and India's over 92 percent by 2035.

U.S. foreign policy can and should aim to share the costs, burdens and responsibilities of protecting the Gulf and sea lanes with other friendly and capable importers. Such cooperation exists to some extent already, such as with multinational anti-piracy patrols. But for the foreseeable future only the United States can play the role of guaranteeing the stability of the Persian Gulf.
There are certain trends associated with crop production both nationally and globally. Let us take a closer look at these trends, including fodder production.

World cereal production in 1999 is forecast at 1,870 million tons (including milled rice). While on the supply side the estimates are becoming firmer, the demand-related issues have yet to be determined. Global cereal utilization in 1999/2000 is forecast to rise only slightly, just less than one percent. Overall, the growth in direct food consumption of cereals is expected to keep pace with population increase.

Nigeria, for instance, had a total cereal production of 18 million tonnes, representing only 1% of world cereal production, as reported by the Food and Agriculture Organization of the United Nations (FAO) Yearbook of 2002 and presented in Table 1.1 and Figure 1.1.

Table 1.1 Cereal production in Nigeria, 1990-2000, in '000 metric tonnes. Source: FAO Yearbook 2002.

In sub-Saharan Africa, 1999 was another disappointing year in terms of agricultural output, as overall agricultural production lagged behind population growth rates for the third consecutive year. Output increased by 2.1 percent in 1999, after increasing by 0.4 and 2.3 percent in 1997 and 1998, respectively. In Nigeria, production growth slowed from more than 4 percent in 1998 to slightly less than 3 percent.

The preliminary estimates for 2000 suggest no improvement in the sluggish performance of the last few years, and overall agricultural production appears to have expanded by only 0.5 percent (Source: FAO Yearbook 2002).

The steady rise in imports and decline in exports of cereal crops in Nigeria from 1990 to 2000 is evidenced in Table 1.2 and Figure 1.2. These are direct indications that food crop production in the country is lagging behind the demand for food.
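Nigeria's roughly 1% share quoted above follows directly from the two production figures in the text; a minimal check:

```python
# Both figures are taken from the paragraphs above (FAO data).
nigeria_cereal_mt = 18.0    # Nigerian cereal production, million tonnes
world_cereal_mt = 1870.0    # 1999 world cereal production forecast, million tonnes

share = nigeria_cereal_mt / world_cereal_mt
print(f"Nigeria's share of world cereal output: {share:.2%}")  # ~0.96%, i.e. about 1%
```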
The rapid increase in the Nigerian population (3.5% annually), which is considered among the highest in the world, has necessitated massive food imports to feed the teeming population. Agricultural production within the same period recorded a modest growth rate of 1.5%, but the growth is mostly associated with cassava production, which is currently enjoying a boom.

This scenario of massive importation of cereals and sharp decline in exports could be attributed not only to growth in population and stagnation of internal production levels but also to other equally important factors, both natural and socio-economic:

Natural factors, in the form of drought and flooding, affected the major crop-producing areas of the country within the reported period. Drought affected the northern savanna zone, where the bulk of the country's cereals is produced, leading to shortages of major foodstuffs that necessitated massive importation of cereals to cover the shortfall.

The growth of the poultry industry in the country also led to increased demand for cereals to be used in the production of feeds; this triggered massive importation of maize to be processed into poultry feeds.

The shift in government policy did not accord food production the priority it deserved in terms of adequate funding and supply of the inputs needed to sustain current production levels. Fiscal and monetary policies also affected food production, especially cereals, by encouraging the importation of maize to be processed into poultry feeds.

Lack of stability in farm prices and the poor marketing system for major cereal crops discouraged farmers from investing to produce more. These and many more socio-economic factors have contributed to the present scenario of massive food importation by Nigeria.
2) Cassava Production

Nigerian cassava production is by far the largest in the world: a third more than production in Brazil and almost double the production of Indonesia and Thailand. Cassava production in other African countries (the Democratic Republic of Congo, Ghana, Madagascar, Mozambique, Tanzania and Uganda) appears small in comparison to Nigeria's substantial output.

The Food and Agriculture Organization of the United Nations (FAO) in Rome (FAO, 2004a) estimated 2002 cassava production in Nigeria to be approximately 34 million tonnes. The trend for cassava production reported by the Central Bank of Nigeria mirrored the FAO data until 1996; thereafter it rises to the highest estimate of production, at 37 million tonnes in 2000 (FMANR, 1997; Central Bank of Nigeria). The third series, provided by the Projects Coordinating Unit (PCU, 2003), had the most conservative estimate of production, at 28 million tonnes in 2002. PCU data collates state-level data provided by the ADP offices in each state.

Comparing the output of various crops in Nigeria, cassava production ranks first, followed by yam production at 27 million tonnes in 2002, sorghum at 7 million tonnes, millet at 6 million tonnes and rice at 5 million tonnes (FAO, 2004a).

Expansion of cassava production has been relatively steady since 1980, with an additional push between 1988 and 1992 owing to the release of improved IITA varieties.

By zone, the North Central zone produced over 7 million tonnes of cassava a year between 1999 and 2002, while the South South produced over 6 million tonnes a year. The North West and North East are small by comparison, at 2 and 0.14 million tonnes respectively (Table 1.3).
Table 1.3 Cassava production by Nigerian geopolitical zones (tonnes)

| Region | 2000 | 2001 | 2002 |
|---|---|---|---|
| South West | 4 993 380 | 5 663 614 | 5 883 805 |
| South South | 6 268 114 | 6 533 944 | 6 321 674 |
| South East | 5 384 130 | 5 542 412 | 5 846 310 |
| North West | 2 435 211 | 2 395 543 | 2 340 000 |
| North Central | 7 116 920 | 7 243 970 | 7 405 640 |
| North East | 165 344 | 141 533 | 140 520 |
| Total | 26 363 099 | 27 521 016 | 27 938 049 |

On a per capita basis, North Central is the highest-producing region at 720 kg per person in 2002, followed by the South East (560 kg), South South (470 kg), South West (340 kg), North West (100 kg) and North East (10 kg). National per capita production of cassava is 320 kg per person.

Benue and Kogi states in the North Central zone are the largest producers of cassava in the country, while Cross River, Akwa Ibom, Rivers and Delta states dominate cassava production in the South South. Ogun, Ondo and Oyo dominate in the South West, and Enugu and Imo dominate production in the South East. Kaduna state alone in the North West is comparable in output to many of the states in the southern regions, at almost 2 million tonnes a year. Production in the North East is currently very small.

Table 1.4 Ranking of Nigeria in world production of some field crops in 2005

| Commodity | Nigeria | World | Ranking in the world |
|---|---|---|---|
| Cassava | 41,565,000 MT | 208,559,340 MT | 1 |
| Yams | 34,000,000 MT | 44,276,130 MT | 1 |
| Cowpeas | 2,815,000 MT | 22,880,290 MT | 1 |
| Melon seeds | 451,000 MT | 691,605 MT | 1 |
| Taro | 5,068,000 MT | 11,538,705 MT | 1 |
| Citrus fruits | 3,545,841 MT | 6,999,186 MT | 1 |
| Green maize | 4,779,000 MT | 9,216,770 MT | 2 |
| Millet | 6,282,000 MT | 30,522,860 MT | 2 |
| Sorghum | 8,028,000 MT | 59,153,380 MT | 2 |
| Okra | 730,000 MT | 5,357,927 MT | 2 |
| Groundnuts in shell | 3,478,000 MT | 37,763,330 MT | 3 |
| Sweet potatoes | 3,205,000 MT | 123,271,111 MT | 3 |
| Papaya | 834,040 MT | 6,666,540 MT | 3 |
| Cashew nuts | 594,000 MT | 2,864,270 MT | 4 |
| Cocoa beans | 366,000 MT | 3,924,770 MT | 4 |
| Ginger | 110,000 MT | 1,270,400 MT | 4 |
| Vegetables | 4,285,000 MT | 261,732,740 MT | 5 |
| Pineapple | 976,920 MT | 17,692,310 MT | 6 |
| Sesame seed | 100,000 MT | 3,322,080 MT | 6 |

Source: FAO Yearbook 2005

According to the FAO Yearbook 2005, Nigeria accounts for more than 77% of world yam production and occupied first position in the world production of cassava, taro, citrus fruits, melon seeds and cowpeas. During the same year under review, Nigeria ranked second in the world production of millet, sorghum, okra and green maize, and came third in groundnuts, sweet potatoes and papaya.

The tremendous rise in the status of food crop production in the country from the year 2002 upwards could be attributed to a recent shift in government policy that favours massive internal food production programmes, with the hope of attaining sustainable food security and meeting the country's Millennium Development Goals.
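The regional figures in Table 1.3 can be summarised programmatically; a short sketch using the 2002 column (figures taken directly from the table):

```python
# 2002 cassava output by geopolitical zone, in tonnes (Table 1.3)
output_2002 = {
    "South West": 5_883_805,
    "South South": 6_321_674,
    "South East": 5_846_310,
    "North West": 2_340_000,
    "North Central": 7_405_640,
    "North East": 140_520,
}

# Note: summing the rows gives 27,937,949 t, 100 t less than the table's
# printed Total of 27,938,049 t — a small inconsistency in the source data.
total = sum(output_2002.values())
shares = {zone: qty / total for zone, qty in output_2002.items()}
top_zone = max(shares, key=shares.get)

print(f"Total 2002 output: {total:,} t")
print(f"Largest producer: {top_zone} ({shares[top_zone]:.1%})")  # North Central, ~26.5%
```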
Bonds. Basic concepts

In this review article, we will briefly describe the basic concepts a trader should know about bonds.

Read in this article:
- what bonds are;
- what forms and types of bonds are;
- bond properties;
- pros and cons of bond trading.

What are bonds?

Bonds are debt securities. When an investor buys them, he lends money. For example, the Sberbank bonds (code: SberB B12R) are issued for 3 years with a nominal yield of 7.60%. It means that an investor lends money to Sberbank for 3 years at an annual interest rate of 7.60%. Thus, bonds are a financial instrument with a fixed income, which is why an investor knows beforehand when the debt will be repaid and what his income will be.

By trading volume, bonds are the biggest investment instrument in the world. They seem to be more complex than stocks due to the large number of forms and parameters.

There are three main bond properties:
- repayment – an investor counts upon return of the debt;
- maturity – an investor lends money to the bond issuer for a limited period. There are non-maturing bonds, but they are rather exceptions to the rule;
- payment of interest – an investor plans to receive interest payments for the loan.

All bonds have:
- Face value, since we speak about a loan. The face value is specified in the bond programme, and an investor receives it at the bond's maturity. The traded price changes in the process of trading depending on demand and supply, and then it is not the face value any more but the current value. Unlike stocks, the current value is quoted as a percentage of the face value. For example, the face value of the Sberbank bonds SberB B12R is RUB 1,000. As of December 23, their current value was 102.76%, which means it was more than RUB 1,000.
- Coupon rate – a share of the face value, which the issuer pays as interest.
- Maturity date – the date of the latest payment and repayment of the face value.
- Payment frequency – the schedule of payments under the obligations.
- Currency – the currency in which the bond pays.
- Options availability – whether the bond can be settled before maturity. If a bond is non-callable, neither the investor nor the issuer can settle it early. The issuer can redeem a bond with a call option early; the investor can redeem a bond with a put option early. The early-repayment conditions are known from the start, since they are specified in the bond programme.

What forms and types of bonds exist?

There are coupon and discount bonds. Buying a discount bond, an investor earns the difference between the face value and the purchase price and receives no other payments. Buying a coupon bond, an investor receives regular payments – coupons. Coupons may be fixed, in which case the investor knows in advance when and how much he will receive. However, an issuer sometimes issues bonds for 10–20 years. It cannot promise a fixed yield for such a long period, so it fixes the coupons for the first several years and may change them later. The bond programme describes exactly how the issuer can change the coupons. The issuer's bond programme and financial reports can be found on its official web-site.

Depending on the issuer, bonds can be:

State bonds, issued by national governments. In Russia they are called OFZ, or federal loan bonds, and are issued by the Ministry of Finance of the RF; detailed information about all issues can be found on its official web-site. In the United States they are called Treasury bonds and are issued by the US Department of the Treasury; information about them can be found on a dedicated web-site, managed by a division of the Treasury, where you can also buy them.
Investors often treat the state bond yield as the minimum, risk-free yield among financial instruments, because state securities are backed by state assets and by the state's ability to print money. In a growing economy, the yield of long-term state bonds exceeds that of short-term bonds. When the situation reverses, investors grow nervous and expect a crisis or an economic slowdown; American analysts have overlaid the weekly 10-year bond yield chart with a chart of financial crises.

Municipal bonds are issued by cities and regions. Russia also has sub-federal debt, issued by the subjects of the RF. The probability of bankruptcy for these issuers is higher than for the state but lower than for corporations.

Corporate bonds are issued by private companies. Their yield is, as a rule, higher than that of state or municipal bonds, but so are the risks: private companies go bankrupt more often than cities and states.

What is a eurobond?

Eurobonds are issued when issuers diversify their debt by denominating it in different currencies. Despite the name, eurobonds are not necessarily issued in EUR: they can be issued in any currency other than the national one and are placed outside the home country.

Bond yield can be measured in several ways, because there are coupon and discount bonds:

Current yield is the ratio of the coupon income to the market price. It can be calculated for coupon bonds only, because discount bonds have no current income other than the discount.

Yield to maturity is what an investor receives if he holds the bond until maturity and reinvests all the coupons. This yield is largely theoretical, since an investor may neither reinvest the coupons nor wait until repayment.

Realized yield is what an investor receives if he sells the bond in the secondary market before maturity. It is the actual yield.
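These yield definitions can be made concrete with a short script. The sketch below is illustrative only: it assumes a plain annual-coupon bond with numbers loosely mimicking the SberB B12R example from the beginning of the article, and it ignores coupon schedules, day counts and accrued interest, which real calculators handle.

```python
# A rough sketch of a yield-to-maturity (YTM) calculation: find the discount
# rate at which the bond's discounted cash flows equal its market price.
# The bond below (1,000 face value, 7.6% annual coupon, 3 years, bought at
# 102.76% of face) only mimics the SberB B12R example.

def bond_price(face, coupon_rate, ytm, years):
    """Present value of an annual-coupon bond at a given yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + ytm) ** years

def yield_to_maturity(price, face, coupon_rate, years, lo=0.0, hi=1.0):
    """Bisection: the model price is a decreasing function of the yield."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, mid, years) > price:
            lo = mid          # model price too high -> the yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2

ytm = yield_to_maturity(price=1027.6, face=1000, coupon_rate=0.076, years=3)
print(f"{ytm:.2%}")  # below the 7.6% coupon rate, because we paid above face
```

Buying above face value pulls the yield to maturity below the coupon rate, which is why the quoted price and the yield always have to be read together.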
Estimating bond yield by hand takes time, which is why online calculators exist – for example, on the Moscow Exchange web-site.

Interrelation of yield and price

Bond yield and price are inversely related: if the price grows, the yield falls, and vice versa. The dependence can be explained mathematically. When an issuer issues a bond, it fixes the face value and the coupon in RUB; in percentage terms, however, the coupon changes with the current price as the economic situation changes. Example:

- the bond's face value is RUB 1,000;
- the coupon is RUB 100, or 10%.

Suppose the current price rises to 105%, or RUB 1,050, while the coupon stays at RUB 100. Measured against the new price, the coupon falls in percentage terms to about 9.5%:

9.5% = RUB 100 / RUB 1,050 * 100%

The same inverse relation exists between bond yields and interest rates in the economy. If the key rate is cut, money in the country becomes cheaper, deposit rates fall, and investors start looking for a more attractive yield in the stock market. The inflow of money to the exchange raises demand and prices: when rates fall, prices rise. In such times it is profitable to buy long-term bonds. If the change in the base rate is hard to forecast, it makes sense to pay more attention to the issuer's reliability.

What bonds should you invest in?

As of December 2019, there are corporate bonds on the Moscow Exchange with a 15% annual yield. Such a yield, however, comes with a high level of risk: the most profitable bonds also carry the highest risk. Portfolio diversification helps to minimize it. An investor needs to assess the risks of bonds as financial instruments in order not to lose money.
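The arithmetic of the RUB 1,000 / RUB 100 example above can be checked in a few lines (the face value and coupon are the figures from the text; the extra prices are added only to show the inverse relation):

```python
# Current yield: a ruble-fixed coupon measured against the current price.
# Figures follow the example in the text: face 1,000 RUB, coupon 100 RUB (10%).

face = 1000.0
coupon = 100.0                 # fixed in rubles when the bond is issued

def current_yield(price):
    """Coupon as a percentage of the current market price."""
    return coupon / price * 100

print(current_yield(1000.0))            # 10.0 -- at face value
print(round(current_yield(1050.0), 1))  # 9.5  -- price up to 105%, yield down

# The inverse relation: a higher price always means a lower current yield.
prices = [950.0, 1000.0, 1050.0]
yields = [current_yield(p) for p in prices]
assert yields == sorted(yields, reverse=True)
```

Any price above face pushes the current yield below the coupon rate, which is exactly the 9.5% effect described above.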
We can specify two main types of risk:
- market risk – connected with changes in the bond price due to changes in the economic situation in the market;
- the issuer's risk of default. A default occurs when the issuer cannot meet its coupon liabilities.

You can guard against these risks only by selecting bonds carefully.

How to buy bonds profitably?

Use modern software, such as ATAS, to analyze the bond markets. ATAS carries futures on Russian OFZ and on American and European bonds. Cluster analysis and individually tailored technical indicators help you choose the most profitable moment to buy or sell.

Let's consider one example in the 500-tick Euro Bund futures (FGBL) chart:
- small bars;
- the volume increases, but despite that …
- … the price doesn't fall.

We marked the coinciding maximum-volume levels of several bars with a black line – a local support level formed here. Long positions with a stop behind the day's low could be opened from this level. We marked the single print in the profile with number 2 – the price jumped here. Single prints often serve as support/resistance levels, because traders who failed to enter a trade will try to do so if the price returns to this level. This is what happens in point 3, which is a good place to open a new long position or to add to an existing one. When a trader understands the situation, it gives him confidence and increases the number of profitable trades.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9546676278114319, "language": "en", "url": "https://esacademyusa.com/2019/07/02/colleges-might-be-falling-behind-vocational-schools/", "token_count": 489, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.41796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:be26e3ff-7a36-4454-a4bd-df5cb5708f8c>" }
Bill and Melinda Gates have launched a drive on education: the Gates Foundation is determined to bring measurable value to a college education or certificate. A college education has become less of a factor in the employer world – a degree doesn't guarantee that you will land an important job. On the flip side, vocational schools are beginning to take the lead with the right approach to education.

Colleges Might Be Falling Behind

Bill Gates has noted that he would transform colleges and universities to offer courses similar to those at vocational training centers. In other words, vocational schools offer the marketable skills and foundation that employers are searching for, which colleges have failed to provide their student body. With growing demand for skilled workers, colleges might be falling behind. This isn't to say colleges lack ambition; it's the current market that is changing quickly. A primary example is the medical field, which requires professionals equipped to handle real situations. A classroom experience with a hands-on approach is the best way for medical professionals to learn their trade, and a skilled medical professional is valuable amid today's growing healthcare concerns.

This Means Majors Could Be Cut and Financial Aid Cut to Certain Programs

Though elite institutions of higher education will still provide a liberal arts education, degrees like history, geography, philosophy, and political science might be cut to make room for the demand. If Congress adopts the Gates Foundation's tool to measure the value of a college degree or certificate, it could treat low earnings and low loan repayment rates as disqualifying, cutting students in certain programs off from federal financial aid. Students may then face the burden of picking a program they are not necessarily interested in.
Trade Schools Are Leading the Educational Industry

Meanwhile, trade schools are leading the educational industry by supplying the workforce with skilled workers. With vocational schools, financial aid may be less of an issue: some are opting to provide affordable education that gives students better access to employment opportunities and educational grants. The educational world is changing, and what was mainstream ten years ago may not be the right choice for the future. It's good that options are being added for high school graduates: the more paths there are to a successful destination, the more likely society is to progress.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9201100468635559, "language": "en", "url": "https://nackpets.wordpress.com/2020/08/05/", "token_count": 2014, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.3203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f219f668-c462-4b77-8cc1-cee5724129c5>" }
A $600 per week supplement to regular unemployment insurance means that unemployed janitors receive unemployment benefits which are, on average, over 1 1/2 times what they earned working, "while janitors who continue to work at increased health risk in businesses deemed 'essential' have no guarantees of any hazard pay or increased earnings…" (Ganong et al., May 15, 2020, https://bfi.uchicago.edu/wp-content/uploads/BFI_WP_202062-1.pdf)

"It's also unfair to the teachers, police officers, firemen, healthcare workers, and others who continue to get up and go to work to have someone on unemployment making more by staying home." (US Senator Lindsey Graham, August 4, 2020)

Remember, the $600 is on top of regular unemployment benefits, which vary by state and earnings. "Under the CARES Act, 68% of workers have [income] replacement rates above 100%. The median replacement rate is 134% and workers in the bottom 20% of the income distribution have replacement…

The spacecraft may have found where the colorless gas has been hiding on the solar system's biggest planetary inhabitant. New results from NASA's Juno mission at Jupiter suggest our solar system's largest planet is home to what's called "shallow lightning." An unexpected form of electrical discharge, shallow lightning originates from clouds containing an ammonia-water solution, whereas lightning on Earth originates from water clouds. Other new findings suggest the violent thunderstorms for which the gas giant is known may form slushy ammonia-rich hailstones Juno's science team calls "mushballs"; they theorize that mushballs essentially kidnap ammonia and water in the upper atmosphere and carry them into the depths of Jupiter's atmosphere.

The shallow-lightning findings will be published Thursday, Aug. 6, in the journal Nature, while the mushballs research is currently available online in the Journal of Geophysical Research: Planets.
Since NASA’s Voyager mission first saw Jovian lightning flashes in 1979, it has been thought that the planet’s lightning is similar to Earth’s, occurring only in thunderstorms where water exists in all its phases – ice, liquid, and gas. At Jupiter this would place the storms around 28 to 40 miles (45 to 65 kilometers) below the visible clouds, with temperatures that hover around 32 degrees Fahrenheit (0 degrees Celsius, the temperature at which water freezes). Voyager, and all other missions to the gas giant prior to Juno, saw lightning as bright spots on Jupiter’s cloud tops, suggesting that the flashes originated in deep water clouds. But lightning flashes observed on Jupiter’s dark side by Juno’s Stellar Reference Unit tell a different story. “Juno’s close flybys of the cloud tops allowed us to see something surprising – smaller, shallower flashes – originating at much higher altitudes in Jupiter’s atmosphere than previously assumed possible,” said Heidi Becker, Juno’s Radiation Monitoring Investigation lead at NASA’s Jet Propulsion Laboratory in Southern California and the lead author of the Nature paper. Becker and her team suggest that Jupiter’s powerful thunderstorms fling water-ice crystals high up into the planet’s atmosphere, over 16 miles (25 kilometers) above Jupiter’s water clouds, where they encounter atmospheric ammonia vapor that melts the ice, forming a new ammonia-water solution. At such lofty altitude, temperatures are below minus 126 degrees Fahrenheit (minus 88 degrees Celsius) – too cold for pure liquid water to exist. https://www.youtube.com/embed/tq_6DClZ0Ns This animation takes the viewer on a simulated journey into Jupiter’s exotic high-altitude electrical storms. Get an up-close view of Mission Juno’s newly discovered “shallow lighting” flashes and dive into the violent atmospheric jet of the Nautilus cloud. Credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. 
Gill “At these altitudes, the ammonia acts like an antifreeze, lowering the melting point of water ice and allowing the formation of a cloud with ammonia-water liquid,” said Becker. “In this new state, falling droplets of ammonia-water liquid can collide with the upgoing water-ice crystals and electrify the clouds. This was a big surprise, as ammonia-water clouds do not exist on Earth.” The shallow lightning factors into another puzzle about the inner workings of Jupiter’s atmosphere: Juno’s Microwave Radiometer instrument discovered that ammonia was depleted – which is to say, missing – from most of Jupiter’s atmosphere. Even more puzzling was that the amount of ammonia changes as one moves within Jupiter’s atmosphere. “Previously, scientists realized there were small pockets of missing ammonia, but no one realized how deep these pockets went or that they covered most of Jupiter,”said Scott Bolton, Juno’s principal investigator at the Southwest Research Institute in San Antonio. “We were struggling to explain the ammonia depletion with ammonia-water rain alone, but the rain couldn’t go deep enough to match the observations. I realized a solid, like a hailstone, might go deeper and take up more ammonia. When Heidi discovered shallow lightning, we realized we had evidence that ammonia mixes with water high in the atmosphere, and thus the lightning was a key piece of the puzzle.” This graphic depicts the evolutionary process of “shallow lightning” and “mushballs” on Jupiter. Image Credit: NASA/JPL-Caltech/SwRI/CNRS › Full image and caption A second paper, released yesterday in the Journal of Geophysical Research: Planets,envisions the strange brew of 2/3 water and 1/3 ammonia gas that becomes the seed for Jovian hailstones, known as mushballs. Consisting of layers of water-ammonia slush and ice covered by a thicker water-ice crust, mushballs are generated in a similar manner as hail is on Earth – by growing larger as they move up and down through the atmosphere. 
“Eventually, the mushballs get so big, even the updrafts can’t hold them, and they fall deeper into the atmosphere, encountering even warmer temperatures, where they eventually evaporate completely,” said Tristan Guillot, a Juno co-investigator from the Université Côte d’Azur in Nice, France, and lead author of the second paper. “Their action drags ammonia and water down to deep levels in the planet’s atmosphere. That explains why we don’t see much of it in these places with Juno’s Microwave Radiometer.” “Combining these two results was critical to solving the mystery of Jupiter’s missing ammonia,” said Bolton. “As it turned out, the ammonia isn’t actually missing; it is just transported down while in disguise, having cloaked itself by mixing with water. The solution is very simple and elegant with this theory: When the water and ammonia are in a liquid state, they are invisible to us until they reach a depth where they evaporate – and that is quite deep.” Understanding the meteorology of Jupiter enables us to develop theories of atmospheric dynamics for all the planets in our solar system as well as for the exoplanets being discovered outside our solar system. Comparing how violent storms and atmospheric physics work across the solar system allows planetary scientists to test theories under different conditions. JPL, a division of Caltech in Pasadena, California, manages the Juno mission for the principal investigator, Scott Bolton, of the Southwest Research Institute in San Antonio. Juno is part of NASA’s New Frontiers Program, which is managed at NASA’s Marshall Space Flight Center in Huntsville, Alabama, for the agency’s Science Mission Directorate in Washington. Lockheed Martin Space in Denver built and operates the spacecraft. “He that takes truth for his guide, and duty for his end, may safely trust to God’s providence to lead him aright.” - Blaise Pascal. 
"There is but one straight course, and that is to seek truth and pursue it steadily" – George Washington letter to Edmund Randolph — 1795. We live in a “post-truth” world. According to the dictionary, “post-truth” means, “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” Simply put, we now live in a culture that seems to value experience and emotion more than truth. Truth will never go away no matter how hard one might wish. Going beyond the MSM idealogical opinion/bias and their low information tabloid reality show news with a distractional superficial focus on entertainment, sensationalism, emotionalism and activist reporting – this blogs goal is to, in some small way, put a plug in the broken dam of truth and save as many as possible from the consequences—temporal and eternal. "The further a society drifts from truth, the more it will hate those who speak it." – George Orwell “There are two ways to be fooled. One is to believe what isn’t true; the other is to refuse to believe what is true.” ― Soren Kierkegaard
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9403983950614929, "language": "en", "url": "https://prepass.com/2021/04/07/how-the-infrastructure-bill-may-effect-trucking/", "token_count": 720, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0537109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0ca06e4a-7f39-49fa-9b9b-6a50aede5fff>" }
A multiyear infrastructure bill appears to be the next agenda item for Congress. Its price tag? Some say north of $2 trillion. The cost of the legislation speaks to how much has changed since the days when Congress passed the "federal highway bill" every three to five years in a bipartisan manner. Federal highway bills of the past stuck to building and repairing roads, and everyone applauded. However, a transition from "highways" to "surface transportation" legislation began about 30 years ago. That change in terminology accommodated provisions for rails, mass transit, waterways and airports, with associated increases in spending. So what is likely to be in an "infrastructure" bill that affects the trucking industry?

Roads and bridges. The infrastructure bill will contain the traditional federal funding for roads and bridges, plus congressional directives on how environmental reviews of road projects are conducted. It is widely recognized that roads and bridges do need repair. Through the work of the American Transportation Research Institute, Congress will also address the bottlenecks that slow truck transportation. Road construction and repair mean more than seeing orange cones in the summer months. While care must be taken through highway construction zones, the result will be better travel conditions, improved safety, less driver fatigue and reduced damage to vehicles.

What else is "infrastructure"? Expect provisions for mass transit, waterways, dams, sewer systems and airports in the "surface transportation" bills. But also look for funding for improved GPS, expansion of rural broadband, resiliency in the electrical grid, safety of urban pedestrians and cyclists, and deployment of electric vehicles. Many of these infrastructure items also benefit trucking. A reliable electrical system is needed to produce the goods that trucks move, not to mention to power the electric trucks of tomorrow. More exact GPS coordinates allow for better travel planning.
The expansive nature of "infrastructure," though, drives the legislation's price tag higher, resulting in two more reasons the infrastructure bill may affect trucking: taxes and regulations.

Taxes. Traditionally, highway users paid for roads through state and federal fuel taxes, registration fees, the federal highway user tax, and excise taxes on lubricants and new vehicles. The last major federal transportation bill, the Fixing America's Surface Transportation Act, or FAST Act (2015), moved pieces around on the federal financial chessboard and avoided the dreaded word "tax" altogether. This time, most members of Congress don't show interest in raising the federal fuel tax rate, which hasn't been touched since 1993. Congress is discussing a number of other taxes, including a vehicle mileage tax for trucks. The breadth of "infrastructure" has generated thought about taxes on non-highway users, such as a "wealth tax" or possible increases in the rates for estate taxes, corporate taxes and capital gains. Remember, too, that increases in the federal share of funding for roads ratchet up what states must match for their portion of project funding. Some states will likely raise taxes soon; others have already increased their fuel taxes in recent years.

Trucking regulations. Congress will use the infrastructure bill to give policy direction to federal regulatory agencies, such as the Federal Motor Carrier Safety Administration, the National Highway Traffic Safety Administration and others. The new administration issued an executive order outlining its regulatory policy preferences. With Congress and the Executive in the same party, expect to see a full regulatory agenda soon.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9111979603767395, "language": "en", "url": "https://softline.az/solutions/internet-of-things/iot-gorod/umnyie-seti-energosnabzheniya", "token_count": 393, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.049072265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:735009d7-d290-44b3-83c2-6fd7d60b8c75>" }
Modernized power supply networks make it possible to produce, distribute and consume energy more efficiently, reliably and economically. One of the serious problems smart technologies must solve is a new system for supplying settlements and enterprises with energy. The gains come from collecting information about all processes in real time.

How does it work? A smart power supply network must be saturated with elements that measure its topological parameters, so that the state of the network can be evaluated in different situations. To manage network elements and the settings of energy consumers, a hardware and software complex is required to collect and process data. To make decisions, tools are needed that automatically assess the current situation and build predictions. And to implement those decisions, mechanisms are required that can change topological parameters and interact with energy facilities.

Efficiency: supply and network companies. The introduction of smart technologies in power supply helps supply and network companies increase metering accuracy, reduce losses and theft of resources, level the load through flexible tariffs for end users (by zone and time of day), bill for the energy actually delivered, and cut the cost of processing information.

Efficiency: industrial enterprises. Smart power supply networks perform well in production: they reduce total energy consumption, allow a switch to flexible tariffs, control the consumption of sub-subscribers and individual units, and cut the number of errors from manual data entry and subsequent processing.

Efficiency: management companies and consumers. End users also benefit from a smart network: automatically generated electricity bills are kept in order; the ability to check the consumption balance of a building as a whole rules out theft of resources; and labor costs for management company personnel fall, along with losses from wasted energy.
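As a toy illustration of how such a system can detect losses and theft, the sketch below compares a feeder meter with the sum of the consumer meters behind it. All readings and the 5% tolerance are invented for illustration; a real system works with time-series data from many meters and models technical losses explicitly.

```python
# Toy sketch of the balance check a smart grid performs: compare the energy
# measured at a feeder meter with the sum of the consumer meters behind it.
# All meter readings and the tolerance are invented for illustration.

def balance_check(feeder_kwh, consumer_kwh, tolerance=0.05):
    """Flag a feeder whose losses exceed the expected technical losses."""
    delivered = sum(consumer_kwh)
    loss_share = (feeder_kwh - delivered) / feeder_kwh
    return loss_share, loss_share > tolerance

# Healthy feeder: ~3% technical losses.
share, suspicious = balance_check(1000.0, [480.0, 310.0, 180.0])
print(f"loss {share:.1%}, suspicious={suspicious}")   # loss 3.0%, suspicious=False

# Feeder with unmetered consumption (theft or a faulty meter).
share, suspicious = balance_check(1000.0, [480.0, 310.0, 90.0])
print(f"loss {share:.1%}, suspicious={suspicious}")   # loss 12.0%, suspicious=True
```

The same comparison, run continuously against real-time readings, is what lets a management company localize theft to a specific building or feeder instead of spreading the loss over everyone's bills.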
{ "dump": "CC-MAIN-2021-17", "language_score": 0.954663872718811, "language": "en", "url": "https://thebottomline.as.ucsb.edu/2017/10/why-c-a-should-support-the-48-fix", "token_count": 715, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1279296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0e065e62-a0a1-48c3-8947-bdf35847002d>" }
Image Courtesy of Wikimedia

California has the largest gross domestic product and population in the country, which makes the state a national leader. With such a large share of the national economy, California must maintain a stellar education system. Yet despite its large population and tax base, college in California has not become more affordable. Three of the ten most populous cities in the U.S. are in California, along with huge metropolitan areas and a large rural population, and for many families the opportunity to attend a four-year college is out of reach. In fact, tuition regularly increases. However, a proposal now exists to pay for free in-state tuition at the University of California, California State University, and California Community College systems, called "The $48 Fix." The plan would pay for free college tuition through a tax increase of $48 per year on the median household. Such a fix was proposed as early as 1960, when California had only half of its current population, according to 48fix.org. Californians should support The $48 Fix to increase college accessibility, an investment in its future the state can afford to make.

There is no problem with an increase in state population to boost the economy, yet competition for university spots is steadily increasing. High school graduation percentages have risen, according to a California Department of Education study, mainly due to population influx and immigration over the past few decades. The study found that the combined University of California and California State University systems have not increased potential freshman enrollment. However, the UC and CSU systems have both regularly raised tuition. In 2011, the UC system increased tuition by over 9 percent, raising in-state tuition by an average of $1,068. Since then, the UCs and CSUs have continued to raise tuition with no end in sight.
One may assume that the median household income is insufficient to handle the household tax hike. However, California's lower class would pay very little of "The $48 Fix" tax burden; most of the payments would fall upon wealthier individuals who can afford the increase. While California has some of the highest taxes in the nation, The $48 Fix is intended to raise $9.43 billion, according to a Tax Foundation report from 2016. About $9 billion is the amount the authors estimate is necessary to fully fund in-state tuition for the UC, the CSU, and the California Community College systems. This is a modest increase relative to the state's overall tax revenue of $125.88 billion for the 2017-18 fiscal year, and a mere drop in the bucket of California's $2.4 trillion GDP. Californians should willingly pay the $48 Fix because the long-term investment of free tuition may increase California's economic potential. Within the next few decades, California will undergo significant changes in its taxation agenda. In the meantime, $48 is practically nothing when one thinks of the investment in future generations. Establishing free in-state tuition would support decades of future prosperity and would set a standard for the nation. Attendance at four-year universities in California should be tuition-free for California residents. Although there are many college-eligible people, California universities are failing to make college more accessible. To read the full policy paper for The $48 Fix, visit http://48fix.org/policy-paper/.
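A quick back-of-the-envelope check puts the figures quoted above side by side (the dollar amounts are the ones from the text; the percentages are derived, not quoted):

```python
# Back-of-the-envelope comparison of "The $48 Fix" target against the
# state budget figures quoted in the article.

target_revenue = 9.43e9        # what "The $48 Fix" aims to raise per year
state_revenue = 125.88e9       # California tax revenue, 2017-18 fiscal year
state_gdp = 2.4e12             # California GDP

print(f"{target_revenue / state_revenue:.1%} of state tax revenue")  # 7.5%
print(f"{target_revenue / state_gdp:.2%} of state GDP")              # 0.39%
```

So the plan amounts to roughly 7.5% of annual state tax revenue but well under half a percent of the state's economy, which is the sense in which the article calls it a drop in the bucket of GDP.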
{ "dump": "CC-MAIN-2021-17", "language_score": 0.8906798362731934, "language": "en", "url": "http://m.plant-led.com/info/lithium-battery-knowledge-36711603.html", "token_count": 534, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.01177978515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c325e33e-5511-4449-8b7d-9dec6443e4ac>" }
High energy density
Lithium-ion batteries weigh half as much as nickel-cadmium or nickel-metal hydride batteries of the same capacity, and are only 20–30% the size of nickel-cadmium batteries and 35–50% the size of nickel-metal hydride batteries. A single lithium-ion cell has an average working voltage of 3.7 V, equivalent to three nickel-cadmium or nickel-metal hydride cells in series. Lithium-ion batteries do not contain harmful metals such as cadmium, lead or mercury. They also do not contain lithium metal, so they are not subject to the aircraft transport rules that prohibit carrying lithium-metal batteries on passenger aircraft.

High cycle life
Under normal conditions, lithium-ion batteries can be charged and discharged more than 500 times, and lithium iron phosphate cells (hereinafter "ferrophosphorus") can reach 2,000 cycles.

No memory effect
The memory effect is the phenomenon in which the capacity of a nickel-cadmium battery decreases over repeated charge-discharge cycles. Lithium-ion batteries do not have this effect.

Fast charging
Using a constant-current/constant-voltage charger with a rated voltage of 4.2 V, a lithium-ion battery can be fully charged in 1.5–2.5 hours; the newly developed lithium iron phosphate battery can be fully charged in 35 minutes.
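The voltage equivalence above is easy to verify with a simple energy calculation (the 2.0 Ah capacity is an invented example value; 1.2 V is the nominal NiCd/NiMH cell voltage):

```python
# Check the voltage figures: one 3.7 V lithium-ion cell vs. three 1.2 V
# NiCd/NiMH cells in series, plus a simple stored-energy calculation.
# The 2.0 Ah capacity is an invented example value.

liion_cell_v = 3.7
nicd_cell_v = 1.2

series_pack_v = round(3 * nicd_cell_v, 1)   # cells in series add their voltages
print(series_pack_v)                        # 3.6 -- close to one li-ion cell

# Stored energy: E (Wh) = voltage (V) * capacity (Ah)
capacity_ah = 2.0
print(liion_cell_v * capacity_ah)           # 7.4 Wh for the li-ion cell
print(series_pack_v * capacity_ah)          # 7.2 Wh for the 3-cell NiCd pack
```

At equal capacity the single cell thus stores about as much energy as the three-cell pack while weighing half as much, which is what "high energy density" means in practice.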
Nanomaterials. Carbon nanotubes and nanoscale alloy materials are also used. According to market trends in the lithium-battery industry as of 2009, many companies have begun adding nano titanium oxide and nano silicon oxide to traditional graphite, tin oxide and carbon nanotube anodes, greatly increasing the charge-discharge capacity and cycle count of lithium batteries.
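The cell-voltage equivalence above (one lithium-ion cell in place of three nickel-based cells in series) can be checked with a short sketch. The 1.2 V figure is an assumption for illustration: it is the typical nominal voltage of a NiCd or NiMH cell, which the text does not state explicitly.

```python
# Nominal cell voltages: 3.7 V for Li-ion is quoted in the text;
# 1.2 V for NiCd/NiMH cells is a typical figure assumed for illustration.
LI_ION_V = 3.7
NICKEL_CELL_V = 1.2

def series_pack_voltage(num_cells, cell_voltage):
    """Nominal voltage of identical cells wired in series."""
    return num_cells * cell_voltage

# Three nickel-based cells in series (3.6 V) come within about 0.1 V of a
# single Li-ion cell (3.7 V), which is the equivalence the text describes.
three_nickel = series_pack_voltage(3, NICKEL_CELL_V)
one_li_ion = series_pack_voltage(1, LI_ION_V)
```

The same helper scales to larger packs, e.g. ten Li-ion cells in series give a nominal 37 V.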
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9574527740478516, "language": "en", "url": "http://news.bbc.co.uk/2/hi/americas/4941126.stm", "token_count": 556, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.059326171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4d5778a1-a6cf-4683-aab4-30af4f9dfb8a>" }
Panama has announced an ambitious $5.3bn (£2.9bn) plan to widen its famous canal to handle a new generation of giant container ships.

President Martin Torrijos described the project as a "formidable challenge" but necessary if the canal is to retain its place as a key route for global cargo. The plan is due to be put to a national referendum later this year. Polls suggest the majority of voters back the project, which is set to create several thousand jobs.

In a televised speech, Mr Torrijos said the plan was "the most important decision about the canal and its role in the 21st century".

The 80km (50-mile) canal links the Pacific and Atlantic Oceans and plays a vital role in global trade. Around 40 ships a day pass through its system of locks and lakes. But, partly because of surging exports from China, the canal's capacity is now stretched. It also faces the prospect of missing out on business from a new generation of super-ships, which can carry up to twice as much cargo as normal vessels.

PANAMA CANAL FACTS

- Handles an estimated 5% of world trade
- The main goods shipped are oil products, grain and container cargo
- Last year the canal handled 14,000 transits, shipping 200 million tonnes of cargo
- Traffic between Asia and the east coast of the US accounts for more than 40% of shipping

The Panama government fears its income from tolls will fall if ship-owners switch to alternative routes, BBC Americas editor Simon Watts says. That is why they are proposing a new set of giant locks, measuring more than 50m wide, to create a third lane of traffic capable of handling wider loads.

"The Panama Canal route is facing competition," Mr Torrijos said. "If we do not meet the challenge to continue to give a competitive service, other routes will emerge that will replace ours. It would be unforgivable to refuse to improve the capacity of the waterway."
The canal is a sensitive issue in Panama so Mr Torrijos has tried to take party politics out of his proposal, our correspondent says. He has consulted widely, and the plan needs to be passed by parliament as well as through a referendum. But Panamanians will want to know exactly how the plan will be financed, and Mr Torrijos will also need to address a widespread feeling that ordinary people have not seen any benefit from canal revenue, our correspondent adds. The canal was opened in 1914 and run by the US until it was handed to the Panama government in 1999.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9280632138252258, "language": "en", "url": "http://www.learntalkmoney.com/types-of-banks-in-india/", "token_count": 3407, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0380859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:50ae4da5-28db-4dab-aa88-51c0cad942cf>" }
Modern banking in India originated in the last decade of the 18th century. Today India has many types of banks, such as public and private sector banks, small finance banks and cooperative banks, and banks are classified as scheduled or non-scheduled. Let us look at the types of banks in India in detail.

What is a Bank?

A bank is an institution that accepts deposits from investors and grants credit to entities that need funds. Banks play a vital role in maintaining the economic health of a country. They provide financial services such as wealth management, safe deposit lockers and currency exchange. Banks make money by accepting deposits and lending that money out at interest; they also charge for the services they provide.

Among the first banks were the Bank of Hindustan, established in 1770 and liquidated in 1829-32, and the General Bank of India, established in 1786, which failed in 1791. The largest and oldest bank still in existence is the State Bank of India (SBI).

Types of Banks in India

In the Indian banking system, the RBI is the apex body, acting as the leader of the banking system in the country. The RBI regulates the money supply and supervises and controls the banking and non-banking financial companies in India. Our article Reserve Bank of India talks about the RBI in detail.

Banks can be classified into two categories, namely Scheduled Banks and Non-Scheduled Banks, under the Banking Regulation Act, 1949. Scheduled Banks are those included in the second schedule of the Reserve Bank of India (RBI) Act, 1934; Non-Scheduled Banks, on the other hand, are those not included in that schedule.

Scheduled Banks can further be categorized into Commercial Banks and Co-Operative Banks. Commercial Banks are further divided into Public Sector Banks, Private Sector Banks, Foreign Banks, and Regional Rural Banks (RRBs).
The Co-Operative Banks are divided into State Co-Operative Banks, District Co-Operative Banks, and other Co-Operative Banks. You can check out the Wikipedia article List of Banks in India for the names of banks.

Difference between Scheduled Banks and Non-Scheduled Banks

The major differences between scheduled and non-scheduled banks can be understood from the table below:

| Basis | Scheduled Banks | Non-Scheduled Banks |
|---|---|---|
| Definition | Listed in the second schedule of the RBI Act, 1934 | Not listed in the second schedule of the RBI Act, 1934 |
| RBI rules and guidelines | Follow the rules made by the RBI | Do not follow the rules made by the RBI |
| Minimum paid-up capital and reserves | Rs. 25 lakh | Do not comply with the RBI requirement |
| Clearing house membership | Can become a member of the clearing house | Cannot become a member of the clearing house |
| Cash Reserve Ratio (CRR) | Must maintain CRR and deposit the amount with the RBI | Must maintain CRR but may keep the amount with themselves |
| Borrowing | Can borrow money from the RBI | Not allowed to borrow money from the RBI |

A commercial bank is a financial institution that accepts deposits from the public, gives loans, and offers financial products and services to individuals and businesses.

Types of Commercial Banks

Commercial banks can be classified into the following categories:

Public Sector Banks: Banks where a major stake (i.e. more than 50%) is held by the government. Before the recent mergers there were as many as 27 public sector banks in India, including the nationalized banks, the State Bank of India and its associate banks, and IDBI.

Private Sector Banks: Banks where a major stake is held by private individuals. These banks have limited liability. They include HDFC Bank, ICICI Bank, Axis Bank, IDFC (estd. 2015), and Bandhan Bank (estd. 2015).
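The CRR obligation mentioned above can be illustrated with a small calculation. The 4% rate below is purely an assumption for illustration; the actual CRR is set by the RBI and changes over time.

```python
# Sketch of the Cash Reserve Ratio (CRR) obligation: a scheduled bank must
# keep a fraction of its net demand and time liabilities (NDTL) with the RBI.
def crr_amount(ndtl_crore, crr_rate):
    """Amount (in the same units as ndtl_crore) to be held with the RBI."""
    return ndtl_crore * crr_rate

# A hypothetical bank with NDTL of Rs. 1,000 crore at an assumed 4% CRR rate
# would have to park about Rs. 40 crore with the RBI.
required_with_rbi = crr_amount(1000, 0.04)
```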
Public Sector Banks in India

Public Sector Banks (PSBs) are a major type of bank in India, in which a majority stake (i.e. more than 50%) is held by the government. These banks are listed on stock exchanges. There are a total of 12 Public Sector Banks, alongside one state-owned payments bank (India Post). In terms of volume, SBI is the largest public sector bank in India, and after its merger with its five associate banks (on 1 April 2017) it gained a place among the top 50 banks of the world. Following the mergers described below, the number of public sector banks has been reduced from 27 to 12.

| | Anchor Bank | Merger Banks | Established | Headquarters | Branches |
|---|---|---|---|---|---|
| 1 | Bank of Baroda | Vijaya Bank, Dena Bank | | | |
| 2 | Bank of India | | 1906 | Mumbai, Maharashtra | 5,000 |
| 3 | Bank of Maharashtra | | 1935 | Pune, Maharashtra | 1,897 |
| 4 | Canara Bank | Syndicate Bank | | | |
| 5 | Central Bank of India | | 1911 | Mumbai, Maharashtra | 4,666 |
| 6 | Indian Bank | Allahabad Bank | 1907 | Chennai, Tamil Nadu | 6,104 |
| 7 | Indian Overseas Bank | | 1937 | Chennai, Tamil Nadu | 3,400 |
| 8 | Punjab and Sind Bank | | 1908 | New Delhi, Delhi | 1,554 |
| 9 | Punjab National Bank | Oriental Bank of Commerce, United Bank of India | 1894 | New Delhi, Delhi | 11,437 |
| 10 | State Bank of India | State Bank of Bikaner & Jaipur, State Bank of Hyderabad, State Bank of Mysore, State Bank of Patiala, State Bank of Travancore, Bharatiya Mahila Bank | | | |
| 11 | UCO Bank | | 1943 | Kolkata, West Bengal | 4,000 |
| 12 | Union Bank of India | Andhra Bank, Corporation Bank | | | |

Mergers in Public Sector Banks

The Central Government entered the banking business with the nationalization of the Imperial Bank of India in 1955. A 60% stake was taken by the Reserve Bank of India, and the new bank was named the State Bank of India. The seven other state banks became subsidiaries of the new bank in 1959, when the State Bank of India (Subsidiary Banks) Act, 1959 was passed.

The next major government intervention in banking took place on 19 July 1969, when the Indira Gandhi government nationalised an additional 14 major banks, each with deposits of more than Rs. 50 crore. In 1980, 6 more private banks were nationalised.

- State Bank of Saurashtra was merged with SBI on 13 August 2008.
- State Bank of Indore was acquired by State Bank of India on 27 August 2010.
- The State Bank of Bikaner & Jaipur, State Bank of Hyderabad, State Bank of Mysore, State Bank of Patiala, State Bank of Travancore, and Bharatiya Mahila Bank were merged with State Bank of India with effect from 1 April 2017.
- In April 2019, Vijaya Bank and Dena Bank were merged with Bank of Baroda.
- IDBI Bank was categorised as a private bank with effect from January 2019.
- On 30 August 2019, Union Finance Minister Nirmala Sitharaman announced the merger of six public sector banks into four better-performing anchor banks, effective from 1 April 2020. The banks are being merged in order to streamline their operations and size; some mergers were aimed at strengthening national presence and others at strengthening regional focus:
  - Allahabad Bank is to be merged into Indian Bank (anchor bank – Indian Bank)
  - Oriental Bank of Commerce and United Bank of India are to be merged into PNB (anchor bank – PNB)
  - Andhra Bank and Corporation Bank are to be merged into Union Bank of India (anchor bank – Union Bank of India)
  - Syndicate Bank is to be merged into Canara Bank (anchor bank – Canara Bank)

Private Banks in India

A private bank must have capital of at least Rs. 500 crore and total assets worth at least Rs. 5,000 crore. Private sector banks can be classified into old private sector banks, established before 1993, and new private sector banks, established after 1993. There are over a dozen old private sector banks in India.
These can be found in the table given below:

| Bank | Established | Headquarters | Branches |
|---|---|---|---|
| City Union Bank | 1904 | Thanjavur, Tamil Nadu | 600 |
| Karur Vysya Bank | 1916 | Karur, Tamil Nadu | 668 |
| Catholic Syrian Bank | 1920 | Thrissur, Kerala | 426 |
| Tamilnad Mercantile Bank Limited | 1921 | Thoothukudi, Tamil Nadu | 509 |
| Nainital Bank | 1922 | Nainital, Uttarakhand | 135 |
| Karnataka Bank | 1924 | Mangaluru, Karnataka | 835 |
| Lakshmi Vilas Bank | 1926 | Chennai, Tamil Nadu | 570 |
| Dhanlaxmi Bank | 1927 | Thrissur, Kerala | 269 |
| South Indian Bank | 1929 | Thrissur, Kerala | 852 |
| DCB Bank | 1930 | Mumbai, Maharashtra | 323 |
| Federal Bank | 1931 | Aluva, Kerala | 1,252 |
| Jammu & Kashmir Bank | 1938 | Srinagar, Jammu and Kashmir | 958 |
| RBL Bank | 1943 | Mumbai, Maharashtra | 342 |
| IDBI Bank | 1964 | Mumbai, Maharashtra | 1,892 |
| Axis Bank | 1993 | Mumbai, Maharashtra | 4,094 |
| HDFC Bank | 1994 | Mumbai, Maharashtra | 4,787 |
| ICICI Bank | 1994 | Mumbai, Maharashtra | 4,882 |
| IndusInd Bank | 1994 | Mumbai, Maharashtra | 1,004 |
| Kotak Mahindra Bank | 2003 | Mumbai, Maharashtra | 1,369 |
| Yes Bank | 2004 | Mumbai, Maharashtra | 1,050 |
| Bandhan Bank | 2015 | Kolkata, West Bengal | 1,000 |
| IDFC First Bank | 2015 | Mumbai, Maharashtra | 301 |

Note that the banks established from 1993 onward (Axis Bank and later) are new private sector banks, and IDBI Bank has been categorised as a private bank only since January 2019.

Foreign Banks

These are banks that have their headquarters in a foreign country while operating in our country. Examples of foreign banks are HSBC, Bank of America, American Express, and the Royal Bank of Scotland. Foreign direct investment of up to 74% is permitted in them. To comply with local rules and regulations, foreign banks must have a minimum capital of Rs. 5 billion (i.e. Rs. 500 crore) and must maintain priority sector lending of 40% (including lending to agriculture, MSMEs, weaker sections, renewable energy, education, and housing).

Regional Rural Banks (RRBs)

Regional Rural Banks (RRBs) were set up in 1975 on the recommendation of the Narasimham Committee, primarily to serve the rural areas of India by providing basic banking and financial services.
RRBs are regulated by NABARD (the National Bank for Agriculture and Rural Development) and operate at the regional level in different states of India. RRBs may also have branches in urban areas. A typical RRB has a 50% stake held by the central government, 15% by the state government, and 35% by the sponsor bank. Prathama Bank was the first RRB set up in India; it was established on 2 October 1975, sponsored by Syndicate Bank, with its headquarters at Moradabad.

What is a Co-Operative Bank?

A Co-Operative Bank is a non-profit, self-help/mutual-help institution that holds deposits, makes loans, and provides financial services to co-operatives and member-owned organizations. Co-Operative Banks are registered under the Co-operative Societies Act, 1912 and are regulated by the RBI under the Banking Regulation Act, 1949 and the Banking Laws Act, 1965. These banks function on the principle of one member, one vote. Their functions include deposit mobilization, the supply of credit, and provision of remittance facilities.

Types of Co-Operative Banks

The co-operative banks can be categorized as follows:

State Co-Operative Banks: State co-operative banks work at the state level. Some can operate in two or three states, in which case they are known as multi-state co-operative banks.

District Central Co-Operative Banks: District central co-operative banks, as the name suggests, work at the district level. These banks act as a link between the societies at the village level and the state co-operative banks.
Difference between Commercial Banks and Co-Operative Banks

The major differences between commercial banks and co-operative banks can be understood from the table below:

| Basis | Commercial Banks | Co-Operative Banks |
|---|---|---|
| Definition | Offer banking services to individuals and businesses | Offer banking services to a limited extent (to agriculturists, rural industries, etc.) |
| Governed by | Banking Regulation Act, 1949 | Co-Operative Societies Act, 1912 |
| Type of organization | Profit-based | Non-profit-based |
| Area of operation | Large | Small |
| Borrowers | Account holders | Member shareholders |
| Categorized into | Public Sector Banks, Private Sector Banks, Foreign Banks, and Regional Rural Banks (RRBs) | State Co-Operative Banks, District Co-Operative Banks |

Small Finance Banks

These banks help with the financial inclusion of sections that are not served by other leading banks. They look after micro industries, the unorganized sector, small farmers, etc. The RBI and FEMA are the governing bodies of these banks.

1. AU Small Finance Bank
2. Capital Small Finance Bank
3. Fincare Small Finance Bank
4. Equitas Small Finance Bank
5. ESAF Small Finance Bank
6. Suryoday Small Finance Bank
7. Ujjivan Small Finance Bank
8. Utkarsh Small Finance Bank
9. North East Small Finance Bank
10. Jana Small Finance Bank

Payments Banks

This is a new model of banking in India. A payments bank is categorised as a scheduled bank; the concept came from an RBI committee headed by Dr Nachiket Mor, set up on 23 September 2013. On 19 August 2015, the Reserve Bank of India gave in-principle licences to eleven entities to launch payments banks. The minimum capital requirement to set up a payments bank is Rs. 100 crore. The main objective of payments banks is to broaden the reach of payment and financial services to small businesses, low-income households and migrant labourers in a secure, technology-enabled environment. They have restricted operations and can accept deposits of at most Rs. 1 lakh per customer.
Like other banks, they also offer para-banking services such as ATM/debit cards, net banking and mobile banking. The RBI gave licences to 11 payments banks, of which the following are currently operational:

- Airtel Payments Bank
- Fino Payments Bank
- Jio Payments Bank
- Paytm Payments Bank

Those who have backed out of the payments bank space include Tech Mahindra, Cholamandalam Finance and a consortium of IDFC Bank, Telenor and Sun Pharma. In July 2019, Aditya Birla Payments Bank said it would close operations by October 2019 due to unanticipated developments that rendered its economic model unviable.

Banking Sector in India – Overview

The image below, from IBEF's Growth of Banking Sector, shows the composition of the banking sector in India, the growth of deposits, etc.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9331234693527222, "language": "en", "url": "https://lawdirect.co.nz/2010/08/01/structuring-your-business-and-incorporating-a-limited-liability-company/", "token_count": 1389, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.002777099609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9dc2877d-5cb8-484e-9120-6d399b8051a0>" }
One of the first things people consider when going into business is the type of entity or structure they would like to use to carry out their business. The business structure is like a vehicle that will take the business through its life cycle from formation to winding up. However, depending on the type of business, different structures may be appropriate for different stages: for example, it may make sense to start operating as a sole trader or partnership and incorporate a limited liability company later on. Every business is different, and the advantages and disadvantages of the available structures should be considered in light of your particular business. Some of the common business structures are:

- sole trader,
- general partnership,
- limited partnership,
- trading trust,
- unincorporated joint venture,
- qualifying company, and
- loss attributing qualifying company.

In this blog I look at the limited liability company; in particular, I give a quick overview of what a company is, some of the advantages and disadvantages of the company structure, and how to incorporate a limited liability company.

A company is a separate legal entity in its own right. That means it is like a legal person: it is separate from its shareholders and directors and generally has the same rights and obligations as a natural person. It is also responsible for its own debts and liabilities. The most attractive feature of a company is the limited liability protection for the company's shareholders, which means the liability of shareholders is limited to the amount they have invested in the company. Creditors of the company cannot generally go after the personal assets of the shareholders. This protection is eroded when a bank or creditor requires personal guarantees from the shareholders, e.g. when money is borrowed or commercial premises are leased.
Directors and shareholders

A company must have one or more directors and one or more shareholders. The directors are in charge of managing the day-to-day operations of the company, while the shareholders are effectively the owners and generally have control over the directors.

Advantages of a company

- Limited liability protection for shareholders;
- the ability to change the rights of shareholders: different shareholders can hold different types of shares (e.g. redeemable shares, preferential shares, limited or conditional voting shares, or non-voting shares) and have different rights (e.g. voting rights, different rights to share in dividends and in the distribution of surplus assets of the company). The default type of share, the "ordinary share", will suit the shareholders in most small companies;
- the ability to expand the size of the company by issuing new shares;
- the company can potentially live forever (even after the directors or shareholders have died);
- the company structure is an easy way to separate management and ownership of the company, and the ownership and control can easily be changed without changing the entire structure;
- more credibility in the marketplace;
- easier to raise finance (e.g. investors can buy shares);
- easy to sell a company;
- a company is relatively cheap and easy to set up.
The disadvantages of a company

- Limited liability protection can be eroded by shareholders and/or directors giving guarantees;
- ongoing administration and reporting costs: certain accounting records must be retained; the company must keep certain statutory records, including a share and interest register; and it must prepare and file annual reports and tax returns, complete financial statements, appoint an auditor, and comply with the law around distributing money or property to shareholders;
- certain duties are imposed on company directors (directors need to understand their responsibilities);
- certain information about the company is available to the public, e.g. the company's, shareholders' and directors' addresses, the share register, and certain documents such as the company's constitution (if it has one).

Setting up a company

A company can be incorporated online on the Companies Office website.

Reserve a company name

The first thing to do when incorporating a company is to check that the company name you wish to use is available. The name you choose cannot be the same as, or almost the same as, an existing company's name. There are a few other restrictions on company names; for example, the name cannot be offensive. To check whether the name you wish to use is available, you need to search for that name on the Companies Office website. If the name is not already in use, you must then "reserve" it. Reserving a company name costs $10.00 and the reservation lasts for 20 days. Within those 20 days, you must apply to incorporate the company.

Applying to incorporate a company

To incorporate a company you will need to pay $150.00 and provide the following documents:

- an application for registration,
- a constitution (optional),
- a consent from each director, and
- a consent from each shareholder.

It is a good idea to apply for an IRD and GST number for your company at the same time as incorporating it.
Although a constitution is optional, it is a good idea to have one. A constitution sets out the rules for the company and is a public document. The Companies Act 1993 contains certain mandatory rules for companies that cannot be amended by the constitution. For example:

- the company's name must be clearly stated on all correspondence;
- the company must maintain a share register; and
- certain duties are imposed on directors.

There are certain other optional rules that can be altered or negated by the constitution; without a constitution, the "default" options will apply. For example:

- the majority required to pass a special resolution;
- all shares have one vote and share in all distributions equally;
- all shares are of the same class;
- all new shares must be offered to existing shareholders first.

Then there are certain rules that may only be relied on if provided for in the constitution. For example:

- restricting the transfer of shares; and
- the ability of the company to indemnify a director or employee, or to provide directors and officers with liability insurance.

Another document to consider having is a shareholders agreement. This document is not available to the public and is like a "pre-nup" for the shareholders. Even though a company can be set up online without a lawyer, it is a good idea to consult your lawyer and accountant before doing so. These professional advisors can help ensure that a company is the most appropriate structure for your business and that it is set up correctly for your business venture.
{ "dump": "CC-MAIN-2021-17", "language_score": 0.9498041868209839, "language": "en", "url": "https://websimplifiers.com/zymyo6/viewtopic.php?tag=07bd03-fixed-plants-examples", "token_count": 2565, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:74e0fae8-751a-4f44-8518-96c9b29650a3>" }
Aquatic plants

Plants growing in water are called aquatic plants. There are three types of aquatic plants.

(i) Floating plants: Some plants float freely on water. These are light and spongy; like a sponge, there are lots of empty spaces throughout their bodies, and these are filled with air. This makes the plants light enough to float. Duckweed, green algae, wolffia, water hyacinth and pistia are some of the floating plants that float freely on top of the water.

(ii) Fixed plants: Some plants, like the water lily and lotus, have roots that fix them in the mud at the bottom of the pond. They are known as fixed aquatic plants. They have plate-like leaves that float over the surface of the water; the stems are hollow and very light, which helps the leaves to float. The stomata in the leaves are on the upper side.

(iii) Underwater plants: Some plants, like pondweed, tape grass and hydrilla, grow completely under water. They are called underwater or submerged plants. These plants have narrow, thin leaves without pores and very delicate shoots. The stems are flexible and have air spaces, so the plants bend with the flow of water and do not get damaged by strong currents. They breathe through their body surface. We can grow such plants in an aquarium.

Fixed nitrogen

Here is a look at what fixed nitrogen is. Nitrogen has to be "fixed", or bound into another form, for animals and plants to use it; different fixation processes are described below.

Fixed assets

Fixed assets include property, plant, and equipment (PP&E) and are recorded on the balance sheet. They are also referred to as tangible assets, meaning they are physical assets, and they are expected to produce benefits for more than one year. Definition: a plant asset, also called property, plant, and equipment, is a long-term fixed asset that is used to produce or sell products and services for the company.

Fixed Assets Register Template

In this fixed assets register example template, you get an example of the fixed assets managed by a college over a very long time. Using this as a base, edit the content and modify it to get your own asset inventory.

Fixed costs

Fixed costs in accounting are costs that remain the same and are not impacted by production levels; they are cash expenses that must be paid whether or not the business produces or sells a single product. Common examples include rent, insurance, salaries and interest. Taking a power plant as an example, an economist might be interested in the manufacturing economics of the plant. This is generally a matter of costing, which involves both fixed and variable costs. Calculating the total fixed cost amounts to multiplying the cost per capacity unit by the capacity of the plant. As an example, total fixed (capital) costs for a 500 MW coal plant with capital costs of $2,000 per kW are equal to $2,000/kW × 500,000 kW = $1 billion.

Examples of fixed costs for ecommerce

- Internet access
- Web hosting
- Subscription to Shopify or other ecommerce platforms
- Rent

Especially if you run a smaller, home-based ecommerce business, like an Etsy store, you may avoid many of the costs other ecommerce stores deal with.

Plant layout

Fixed position layout is also called the location layout of a plant. This layout involves the active movement of machines and manpower from one point to another within the plant, while the product stays stationary. Process layouts, by contrast, are facility configurations in which operations of a similar nature or function are grouped together. Principles of plant layout and design will apply to most industrial situations.

Fixed plant (machinery)

Example of "Fixed Plant" in a sentence: "However, if the fixed equipment is operated during night time hours, the night time Sound Pressure Level of the Fixed Plant Equipment must not exceed the average daytime Background Noise to compensate for night time operations, which is assumed to be 10 dBA below daytime Background Noise." Asset utilization is impacted by both maintenance and non-maintenance related downtime; non-maintenance related downtime may be attributed to lack of demand, an interruption in raw material supply, or production scheduling delays beyond the control of the maintenance function.

Fixed oil

A fixed oil is an unctuous, combustible substance that is liquid, or easily liquefiable, on warming, and is not miscible with water, but is soluble in ether. Depending on their behaviour on heating, oils are classified as volatile or fixed; depending on their origin, they are classified as animal, mineral, or vegetable oils.

Other senses of "fixed"

A constant is a fixed number in an equation, for example one that can be expressed as a ratio of two integers, as opposed to a function of many variables.
They are called floating plants. More than 90 percent of all nitrogen fixation is effected by them. In the short-term, there tend to be far fewer types of variable costs than fixed costs. Fixed and variable costs in ecommerce (with examples) Especially if you run a smaller, home-based ecommerce business, like an Etsy store, you may avoid many of the costs other ecommerce stores deal with. leaves that float over the surface of water. Plant Operator Resume Examples. There are some plants which float on water. It is one of the major and most commonly utilized means of arrangement within manufacturing plants. Ex; cotton, wheat. bottom of the pond. Rent. Examples of Fixed Plant in a sentence However, if the fixed equipment is operated during night time hours, the night time Sound Pressure Level of the Fixed Plant Equipment must not exceed the average daytime Background Noise to compensate for night time operations, which is assumed to be 10dBA below daytime Background Noise. These plants move along with the current. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. It has to make the best use it can from its production capacity. Perennial plants. Process layouts are facility configurationsin which operations of a similar nature or function are grouped together. For instance, Tom owns a small manufacturing ⦠All Rights Reserved. Using this as a base, edit the content and modify it to get your asset inventory. They are known as fixed aquatic plants. Some plants like duckweed, green-alge, wolfia, water-hyacinth and pistia are some of the floating plants that float freely on top of the water. Roots of such plants are fixed in the soil at the bottom of a pond. Does pumpkin pie need to be refrigerated? Examples of fixed costs for ecommerce. As such, they occasionally are referred to as functional layouts. 
Examples of variable costs are direct materials, piece rate labor, and commissions. and grow under water completely. This makes the plants light enough to float. The stomata in the leaves are on the upper side. They breathe through their body surface. Biennial plants: These plants survive for two years at most. For example, asset utilization is impacted by both maintenance and non-maintenance related downtime. Buildings, steel structures and their foundations are also considered to be plant, particularly if they are load-supporting structures. Process layouts are found primarily in job shops, or firms that manufacture customized, low-volume products that may require different processing requirements and sequences of operations. Plant Manager. Emerged plants grow from roots affixed in the soil at the bottom of the water body to produce thick,... Floating-Leaf Plants. Fixed manufacturing costs are needed to provide production capacity for the period. The term âplantâ is defined in Part 1, Section 4 of the Mines Safety and Inspection Act 1994. Nitrogen fixation in nature. Emerged Plants. Examples of property, plant, and ⦠They are known as fixed aquatic plants. (ii) Fixed plants: Some plants like water-lily and lotus have roots that fix the plants in the mud at the bottom of the pond. Such substances, depending on their origin, are classified as animal, mineral, or vegetable oils. When did organ music become associated with baseball? They have plate-like Fixed Plant Services Mining Maintenance Solutions can also provide the personnel, tools and equipment to assist with the maintenance and repairs of fixed plant including shutdowns, condition monitoring and GET monitoring. Property, plant, and equipment are physical or tangible assets that are long-term assets that typically have a life of more than one year. Like a sponge there are lots of empty spaces throughout their body and are filled with air. This makes the plants light enough to float. 
padburyparishcouncil.files.wordpress.com. The stems are very flexible. Most of the agriculture crop plants come under this category. A business is sometimes deliberately structured to have a higher proportion of fixed costs than variable costs, so that it generates more profit per unit produced. Plant can be described as being either fixed (e.g. Plant Operators work in large industrial units and keep automated systems running. Utilities While financial accounting is required by law and mainly performed to benefit external users, managerial accounting is not required by law and is done to provide useful information to people within an organization, mainly management, to help them make better internal business decisions. Short Answer. Plant Layout: Plant layout means the disposition of the various facilities (equipments, material, ⦠Annual plants: These plants survive for a year or less. plants are fixed in the soil at the bottom of a pond. Who are the famous writers in region 9 Philippines? Like a sponge there are lots of empty spaces throughout their body and are filled with air. delicate shoots. types of aquatic plants. Such plants have very However, the product from the process stays stationary. Why don't libraries smell like bookstores? A macrophyte is a plant that grows in or near water and is either emergent, submergent, or floating. These plants have narrow, thin leaves Their actual age is not fixed. The following layout examples encompass a wide variety of facility characteristics, and of process characteristics. Aquatic plants are plants that have adapted to living in aquatic environments (saltwater or freshwater).They are also referred to as hydrophytes or macrophytes to distinguish them from algae and other microphytes. The business is stuck with these costs over the short run. in water. float. 4. are some common plants which live There are three without pores. 
Their purpose is to process goods or provide services that involve a v⦠Where can i find the fuse relay layout for a 1990 vw vanagon or any vw vanagon for the matter? Fixed overhead is a set of costs that do not vary as a result of changes in activity.These costs are needed in order to operate a business. What Does Plant Asset Mean? Hay Chair Sale, Panasonic Lumix Dmc-zs19 Manual Pdf, Audi A6 Rs6 Grill, Does Trolli Gummy Worms Have Gelatin, Azure Performance Monitoring, Public Housing Screening Process, Smirnoff Whipped Cream Vodka Alcohol Percentage, Luma Meaning Sunset, Krbl Infrastructure Limited, How To Design A Metrics Dashboard, The Daily Cafe Charleston Sc, Prediction Machine Football,
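The total fixed cost calculation above (cost per capacity unit times plant capacity) can be sketched in a few lines of Python. The function name and the MW-to-kW conversion step are my own framing; the figures are the 500 MW coal plant example from the text.

```python
def total_fixed_cost(cost_per_kw: float, capacity_mw: float) -> float:
    """Total fixed (capital) cost = cost per capacity unit x capacity.

    cost_per_kw is in dollars per kW; capacity is given in MW and
    converted to kW so the units match.
    """
    capacity_kw = capacity_mw * 1_000  # 1 MW = 1,000 kW
    return cost_per_kw * capacity_kw

# The 500 MW coal plant from the text, at $2,000 per kW of capacity:
cost = total_fixed_cost(cost_per_kw=2_000, capacity_mw=500)
print(f"${cost:,.0f}")  # $1,000,000,000
```

This reproduces the $2,000/kW × 500,000 kW = $1 billion figure given in the text.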
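The claim that a higher proportion of fixed costs yields more profit per additional unit sold can be illustrated with a small sketch. The two firms and all the numbers here are invented for illustration only; the point is that once fixed costs are covered, the fixed-cost-heavy firm keeps more of each extra dollar of revenue.

```python
def profit(units: int, price: float, variable_cost: float, fixed_cost: float) -> float:
    """Profit = revenue - total variable costs - fixed costs."""
    return units * price - units * variable_cost - fixed_cost

# Two hypothetical firms with identical profit at a 1,000-unit baseline:
# firm A is fixed-cost heavy (low variable cost per unit), firm B is the reverse.
firm_a = lambda units: profit(units, price=10.0, variable_cost=2.0, fixed_cost=7_000.0)
firm_b = lambda units: profit(units, price=10.0, variable_cost=8.0, fixed_cost=1_000.0)

print(firm_a(1_000), firm_b(1_000))  # 1000.0 1000.0 -- equal at the baseline
print(firm_a(1_100), firm_b(1_100))  # 1800.0 1200.0 -- A gains more per extra unit
```

Each extra unit adds price minus variable cost to profit ($8 for firm A versus $2 for firm B), which is the operating-leverage effect the text describes; the same leverage works against firm A if volume falls.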
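The asset-inventory idea mentioned above (property, plant and equipment tracked as long-term tangible assets, with plant classed as fixed or mobile) could be kept in a simple structure like the following. The field names and example entries are illustrative, not taken from any particular template or standard.

```python
# A minimal asset-inventory sketch: each record notes whether the plant
# item is fixed or mobile and its expected useful life in years.
inventory = [
    {"asset": "storage tank", "type": "fixed",  "useful_life_years": 25},
    {"asset": "conveyor",     "type": "fixed",  "useful_life_years": 15},
    {"asset": "haul truck",   "type": "mobile", "useful_life_years": 8},
]

# Property, plant and equipment are long-term assets (life > 1 year):
long_term = [a["asset"] for a in inventory if a["useful_life_years"] > 1]
fixed_plant = [a["asset"] for a in inventory if a["type"] == "fixed"]

print(long_term)    # all three entries qualify as long-term assets
print(fixed_plant)  # ['storage tank', 'conveyor']
```

Editing the entries in `inventory` is the "use this as a base" step the text suggests: swap in your own assets, types, and useful lives.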