Who is the middle class?

Is belonging to the middle class an ambition? What is the quality of life of the middle class like? How has it evolved in recent decades? And how will it evolve in the future? Are middle-class people satisfied with their lives? All these questions are very important, but before addressing them, we must first answer the question: who is the middle class?

Despite the absence of any precise definition of the middle class, there is some consensus in describing it, at least in the advanced economies, as the largest portion of society, one that shares particular values, enjoys relative financial stability and has a good quality of life that it expects to pass on to its descendants. The middle class is also understood as a portion of society with the means to live comfortably, whatever «comfortably» really means. This may include having access to housing, leisure, good-quality health care, a certain level of education and a decent retirement, as well as the capacity to deal with unforeseen expenses.

Given that economists need objective measures, we attempt to identify the middle class through its consumption patterns or level of income. For instance, OECD studies usually use income levels to identify the middle class, while another branch of the academic literature defines it based on certain consumption levels. Whether using consumption patterns or income, both can be defined either in relative or in absolute terms.

Definitions of the middle class based on absolute measures classify households' consumption or income against specific thresholds that are comparable between countries. For example, much of the academic literature considers daily expenses of between 11 and 110 dollars per person (in purchasing power parity terms) a reasonable measure for identifying the middle class in most emerging economies.1 That said, in many advanced economies the lower threshold of 11 dollars a day lies below what we would consider representative of the middle class.

The middle class can also be defined using relative measures:
- Various institutions use the income distribution to classify households that lie between the 30th and 60th percentiles as middle class.2 An advantage of this definition is that it treats as middle class the third of society that lies in the centre of the income distribution. However, one limitation of this identification method is that it cannot be used to study how the size of the middle class changes over time, since, by definition, it will always represent the same percentage of society (30%).
- One measure that solves this limitation is that used by the OECD in its latest report on inequality,3 which considers middle class those households with an income of between 75% and 200% of the median income for their region and year.4 This classification is the most attractive among the relative measures, so it is the one we will use for the remainder of this article.

This lack of clarity over the definition of the middle class is probably what lies behind the bias in people's perception of belonging to it. According to OECD data, in developed countries there are, on average, more people who consider themselves middle class than the number who really are (see first chart). Interestingly, however, this is not the case in Spain, and much less so in Portugal, where much of the middle class consider themselves not to be.

- 1. See «The emergence of the middle class: an emerging-country phenomenon» in this same Dossier for more details.
- 2. In other cases, the 40th and 70th percentiles are used. See, for example, Brainard (2019). «Is the Middle Class within Reach for Middle-Income Families?». US Federal Reserve.
- 3. See OECD (2019). «Under pressure: The squeezed middle class».
- 4. Income is first adjusted to account for the size and composition of the individuals within the household.

Relative weight and income

If we set different income thresholds in each autonomous community region, taking account of their differing income levels, middle-class individuals in Spain have an income of between 7,750 and 39,000 euros, with an average of 18,100 euros. This wide income range is due to the disparity in the level of income required in each autonomous community in order to be considered middle class. For instance, in the Basque Country an individual is considered middle class with an income of between 14,400 and 38,400 euros, while in Andalusia the range is between 8,900 and 23,800 euros.

The proportion of the population considered middle class is relatively similar across the autonomous communities, albeit with a few exceptions (in Navarre, the middle class represents 71% of the population, compared to 59% in Spain as a whole). This is almost identical to the percentage of the upper class and well above the 49% of the working class (which suffers from a very high level of unemployment). Furthermore, among those working as employees, the middle class has a moderate temporary employment rate in comparison with the working class (16% and 39%, respectively), although there are substantial differences in the rate of temporary employment between autonomous communities. Finally, and consistent with the stability of employment historically attributed to the middle class, only 6% of the middle class changed jobs in 2017 (versus 5.3% and 16% in the upper and working classes, respectively).

The percentage of middle-class households in which the head of the household has a higher-education qualification (32%) is double that of the working class, although it is clearly exceeded by that of the upper class (68%). This is consistent with the economic literature, which finds that the middle class tends to invest heavily in education, which serves as a driver of economic growth through the accumulation of human capital.7

- 7. See, among others, R. Perotti (1996). «Growth, Income Distribution and Democracy: What the Data Say». Journal of Economic Growth, 1(2), 149-187.

These characteristics, which are not exclusive to the middle class, encourage inclusive growth and, with it, a high level of social cohesion. This can be illustrated through the close relationship that currently exists between the relative size of the middle class and the aggregate social cohesion indicator (ASCI) developed by CaixaBank Research.9 In addition, the size of the middle class is closely related to four of the five pillars that make up the ASCI. Countries with a bigger middle class exhibit higher levels of trust, political engagement and social relations, and suffer less crime. On the other hand, a bigger middle class has no bearing on levels of personal satisfaction in today's society. This could suggest that, today, belonging to the middle class is no longer a guarantee of happiness. Indeed, this is a hypothesis already put forward by several authors, who speak of an increase in social unrest among the middle classes in the face of the great uncertainties of today's world.10

- 9. This index aggregates and synthesises in a single measure the information contained in the 33 social cohesion indicators monitored by the OECD. They are grouped into five pillars according to the type of interaction: personal satisfaction, social environment, trust, political engagement and crime levels. For more details, see «Social cohesion and inclusive growth: inseparable» in the MR01/2019.
- 10. See A. Costas (2017). «El final del desconcierto». Península, Barcelona, 289.
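To make the OECD-style relative definition used above concrete, here is a minimal sketch of how households could be classified against the 75%–200%-of-median rule. It is an illustration under stated assumptions, not CaixaBank's or the OECD's actual code: the sample incomes are invented, and the square-root equivalence scale is one common choice for the household-size adjustment mentioned in footnote 4.

```python
import statistics

def equivalised_income(household_income, household_size):
    # Square-root equivalence scale: a common (assumed) way to adjust
    # income for household size and composition.
    return household_income / household_size ** 0.5

def classify(incomes):
    """Label each equivalised income as lower/middle/upper class
    using the 75%-200%-of-median rule."""
    median = statistics.median(incomes)
    labels = []
    for y in incomes:
        if y < 0.75 * median:
            labels.append("lower")
        elif y <= 2.0 * median:
            labels.append("middle")
        else:
            labels.append("upper")
    return labels

# Hypothetical sample of equivalised incomes (euros per year)
sample = [7000, 12000, 15000, 18000, 21000, 26000, 39000, 80000]
print(classify(sample))
# ['lower', 'lower', 'middle', 'middle', 'middle', 'middle', 'middle', 'upper']
```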
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9453459978103638, "language": "en", "url": "https://www.chooseenergy.com/news/article/does-cheap-natural-gas-affect-expansion-solar-energy/", "token_count": 853, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.263671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:70a19643-65a3-492d-b70b-2477a027fca6>" }
The development of usable solar energy has come a long way over the past decade, with adoption growing rapidly among American homes and businesses. However, in many states, solar energy is still one of the least-used energy sources. Why is that, and what is preventing a significant rise in solar and other renewables? One possible answer: the increasing use of cheap natural gas to generate electricity.

Coal had long been the leading source of electricity generation – until the discovery of vast reserves of natural gas in the U.S. around 2007. What is it about natural gas that makes it so appealing? In a word, price. When it comes to electricity generation, natural gas is far cheaper and burns much cleaner than coal, which is why many utilities have switched from coal to gas-fired power plants.

Harrison Fell, an energy economist at N.C. State University, says increased use of natural gas hasn't slowed implementation of solar energy in North Carolina – yet. That's at least in part due to adoption of the Public Utility Regulatory Policies Act (PURPA) of 1978, which required utilities to buy power from independent companies that could produce power for less than what it would have cost the utility to generate it.

"The primary driver of the growth of solar in North Carolina has been the favorable terms in which solar is treated under North Carolina's implementation of PURPA," Fell said. "In particular, North Carolina's implementation of PURPA allowed for relatively large solar installations to be eligible for relatively long-term power purchasing agreements."

Changing conditions for solar

These terms have changed under 2017 legislation that reduced the size of contracts energy utilities can make with solar producers. House Bill 589 reduced PURPA contract lengths from 15 years to 10 years and limited fixed-price PURPA contracts to solar and other non-hydro renewable energy projects under 1 megawatt, down from 5 megawatts. Fell outlined how this ultimately will hurt solar energy in the state.

"This new legislation has guaranteed Duke [Energy]'s procurement of an additional 2.6 GW of solar capacity, which ensures some near-term growth of the solar industry," Fell said. "In the long term, the changes to PURPA implementation and low natural gas prices will likely curb the growth of solar in North Carolina unless solar costs and storage costs fall sufficiently to make it competitive."

That means solar energy is being directly affected by cheap natural gas – even if only temporarily. If the price of solar energy doesn't fall enough to compete with natural gas, there likely will not be an exponentially large increase in its usage.

What comes next?

The question has now become: what can states do to ensure the growth of solar and other renewables? Fell explained what he thinks states could do to change that usage curve.

"One option, which many states are already doing, is to implement a Renewable Portfolio Standard (RPS), which mandates a certain percentage of the state's power sales that come from renewable sources," Fell said. "I think there is still an argument to be made for the continuation of federal tax incentives associated with renewable generation, and finally, policies which make fossil-fueled power plants pay their full social cost of generation would also help renewable energy growth as it would make the fossil fuel plants relatively more expensive to use and thus incentivize utilities to switch to renewables."

According to Fell, policies that would fit this bill include emission taxes or cap-and-trade programs for various pollutants.

About seven years ago, there was little solar generation. Now solar energy makes up about 1.3 percent of the total U.S. generation mix, according to the Energy Information Administration. Once companies can create cost-effective storage for solar energy, there could be a noticeable rise in its usage.

Stephen Sears is a recent graduate of the University of North Carolina at Charlotte and a freelance journalist with bylines at SBNation's At the Hive and scout.com's Panthers Insider. Connect with him on Twitter or LinkedIn.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9307670593261719, "language": "en", "url": "https://www.greenbuildingadvisor.com/article/the-economic-plus-of-energy-conservation", "token_count": 258, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.034423828125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ce672c52-545a-4a2c-8166-ab01294a5e3c>" }
The cost of programs designed to save energy works out to 4.4 cents per kilowatt hour, less than half of what power from a conventional coal-burning plant costs, according to an analysis from the Lawrence Berkeley National Laboratory. The authors of the report, released in mid-November, collected energy efficiency information from more than 100 program administrators in 34 states, covering 5,900 “program years” between 2009 and 2013. They looked at a number of residential efficiency programs, including programs that subsidize whole-house retrofits, lighting improvements, appliance swaps, and efficiency measures for electronic devices. Overall, the authors found that the cost of saved energy (CSE) was 4.4 cents/kWh, with residential programs having the lowest cost (3 cents/ kWh); commercial, industrial and agriculture programs followed at 5.6 cents/kWh. This compares with the 9.5 cents/kWh for producing electricity in a conventional coal plant, Merrian Burgeon writes in a blog for the Natural Resources Defense Council. “This means that smarter uses of energy can replace dirty coal at a fraction of the cost of building coal plants to generate electricity (and without polluting our air or exacerbating climate disruption),” she says.
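The report's headline metric — the cost of saved energy — is essentially a levelized cost: program spending divided by the discounted lifetime energy savings it buys. The sketch below is a generic formulation under assumed inputs (the program cost, savings, lifetime and discount rate are all hypothetical), not LBNL's exact methodology.

```python
def cost_of_saved_energy(program_cost_usd, annual_kwh_saved,
                         measure_life_years, discount_rate=0.06):
    # Levelized cost of saved energy: program cost divided by the
    # discounted stream of energy savings (assumed formulation).
    pv_savings = sum(annual_kwh_saved / (1 + discount_rate) ** t
                     for t in range(1, measure_life_years + 1))
    return program_cost_usd / pv_savings

# Hypothetical program: $1M spent, saving 2 GWh/yr for 15 years
print(f"{cost_of_saved_energy(1_000_000, 2_000_000, 15):.3f} $/kWh")
# -> about 0.051 $/kWh, i.e. roughly 5 cents per kWh saved
```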
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9482032656669617, "language": "en", "url": "http://ominthenews.com/ramping-up-for-bargain-days/", "token_count": 337, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.07080078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:1c6aab73-be3d-4be0-a93a-026b424ca511>" }
Article Title: What impact are bargain days having on peak planning and supply chain?
Author of Article: Andrew Tavener
Date of article: November 10, 2017

Supply chains have to deal with the growing impact of 'bargain days' on inventory management and logistics delivery. Instead of traditional peak demand seasons, firms are increasingly experiencing peak demand year round due to marketing promotions.

1. How can the bullwhip effect be controlled when dealing with a responsive supply chain that includes forward buying as a major issue?
Guidance: Review the differences between an efficient and a responsive supply chain; review the bullwhip effect. Students should be asked to outline the steps to take to minimize the bullwhip effect. The discussion should include the use of technology to track inventory and plan ahead with collaborative forecasting.

2. What difference will dependent versus independent demand have on the inventory required for peak planning periods in a responsive supply chain?
Guidance: Review dependent and independent demand for inventory. Ask students to argue for and against the Q-model and the P-model with regard to the different types of demand. Which model is best suited for peak demand? How can safety stock be reduced? You may introduce risk pooling as a way to reduce safety stock at this juncture.

3. How would a price-break model be helpful with the forward buying issue?
Guidance: Review the price-break model and forward buying. Students should be asked to consider the importance of forecasting in their answers, and to consider what assumptions must be satisfied in order for the price-break model to prove helpful for the forward buying issue. A worked numerical sketch of the price-break model follows below.
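As a companion to question 3, here is a minimal sketch of the classic all-units quantity-discount (price-break) EOQ procedure. The demand, ordering cost, holding rate and price schedule are invented for illustration, and the clamping step is the common textbook simplification rather than a full feasibility analysis.

```python
from math import sqrt

def best_order_quantity(demand, order_cost, holding_rate, price_breaks):
    # price_breaks: list of (min_qty, unit_price), ascending by min_qty.
    # holding_rate: annual holding cost as a fraction of unit price.
    best = None
    for i, (min_qty, price) in enumerate(price_breaks):
        max_qty = (price_breaks[i + 1][0] - 1
                   if i + 1 < len(price_breaks) else float("inf"))
        eoq = sqrt(2 * demand * order_cost / (holding_rate * price))
        q = min(max(eoq, min_qty), max_qty)  # clamp EOQ into this price bracket
        total = (demand * price                   # annual purchase cost
                 + demand / q * order_cost        # annual ordering cost
                 + q / 2 * holding_rate * price)  # annual holding cost
        if best is None or total < best[1]:
            best = (round(q), round(total, 2))
    return best

# Hypothetical inputs: 10,000 units/yr demand, $75 per order,
# 20% annual holding rate, all-units discounts at 500 and 2,000 units.
breaks = [(0, 5.00), (500, 4.80), (2000, 4.60)]
print(best_order_quantity(10_000, 75, 0.20, breaks))  # -> (2000, 47295.0)
```

With these numbers the procedure orders 2,000 units to capture the deepest discount — a total annual cost of about $47,295 versus $49,200 at the middle price — which is exactly the forward-buying tension the question asks students to weigh against forecasting risk.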
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9316173791885376, "language": "en", "url": "https://climatetrust.org/finance-for-your-grasslands-conservation-project/", "token_count": 461, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8f4cbee8-f7ef-476d-9d06-201898525cf4>" }
Elizabeth Hardee, The Climate Trust
May 19, 2016

In the ongoing effort to curb worldwide greenhouse gas emissions, increasing attention is being paid to land conservation. While much of this attention has been focused on forests, there are also abundant opportunities to conserve another type of land that offers significant environmental and social benefits—grasslands.

Preserving prairie (or grasslands) plays a key role in preventing the release of greenhouse gases, protecting water quality, maintaining wildlife habitat and biodiversity, and providing recreational opportunities. Grasslands and shrublands have the third-highest rate of sequestration among ecosystem types, covering nearly 60 percent of the West and containing 23 percent of the region's stored carbon. When grasslands are tilled for conversion to agricultural or development use, the carbon stored in those soils is released into the atmosphere. This is a very real threat, as grasslands are the most common source of land for new conversions to crop production, generally a more profitable enterprise than keeping the land intact. In fact, research has shown that land uncultivated since at least 2001 accounted for 7.34 million acres of converted land in the U.S.

Climate Trust Capital wants to help grassland owners protect their lands from the risk of conversion by offering an upfront investment for conservation. We offer projects up to one half of the overall carbon credit value as an upfront investment—determined in part by the current market carbon price and the projected credit volume over 10 years.

To be eligible for a Climate Trust Capital investment, grassland properties must be located in the U.S., have been maintained as grasslands for 10 years or more prior to implementation of a carbon project, and the landowner must be willing to place the property under a conservation easement. Under this type of easement, land is protected from conversion, but grazing activities may continue. Projects that can generate 50,000 credits or more over a ten-year lifetime are most favorable, but Climate Trust Capital will perform an analysis to determine credit potential.

By providing upfront financing for conservation easements on grasslands, we hope to accelerate conservation of these important lands, and the greenhouse gas benefits they provide. Applications for finance are managed through our online portal.
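The sizing rule described above — an upfront payment of up to half the projected credit value over the 10-year crediting period — reduces to simple arithmetic. A sketch with a hypothetical carbon price (the article does not state one):

```python
def upfront_investment(projected_credits, carbon_price_usd, share=0.5):
    # Up to one half of projected carbon credit value, per the article;
    # the price and volume used below are illustrative only.
    return share * projected_credits * carbon_price_usd

# Hypothetical: the 50,000-credit minimum at an assumed $10/credit
print(f"${upfront_investment(50_000, 10.00):,.0f}")  # -> $250,000
```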
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9637849926948547, "language": "en", "url": "https://eress.eu/news/articles/what-is-erex-exchange/", "token_count": 471, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.12109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dd501334-fdfb-412d-b878-aca7f6770fc8>" }
What is Erex Exchange? And why do infrastructure managers need it?

Erex Exchange receives data from meters installed on electric trains — and secures the validity of the data — before allocating and distributing it to infrastructure managers and train operators, according to national and international requirements. To better understand this process of exchange, we spoke with Bjørn Lysne, Exchange Responsible at Eress.

"Exchange, in brief, is an international service which ensures that the energy data that is metered — or measured — on each train is available for use in billing. This means that all parties who would like to invoice based on metered data can do so. Exchange is essentially the crossroads for this metered data," says Mr. Lysne. "We started with this about twelve years ago, handling multiple countries that had to send their metered data to one another. So, each country needed to communicate with other countries — to get the necessary data — in order to bill the responsible train operators for their energy consumption."

He continues, "We were the first movers of this, but it was out of necessity that we started. Having every meter or system sending data to everyone in an endless variety of special interfaces is not a good way to do it, especially when you consider the lack of efficiency, the potential risks involved, and the technology nightmare it creates. We decided that the best approach was to put a central crossroad — Erex Exchange — in the middle, which would then connect the different metering and settlement systems. This allowed them to easily and securely receive the data and use it for billing purposes. That is the basis for how exchange came to be and why it has grown as big as it is today."

"The difference between the regular energy markets and railway energy markets is that the consumer in the regular energy market is not moving around. Trains, on the other hand, are in motion most of the time — crossing borders into different countries, different grid areas, etc. This is why it is important to have intelligent systems that can handle the values quickly and allocate and export them directly to the correct receiver."

The full version of this article can be found in Eress Magazine 2020.
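The "central crossroads" design Lysne describes replaces N×N point-to-point interfaces with N connections to one hub. The toy sketch below illustrates that pattern only; the class names, validation rule and grid areas are invented, not Erex's actual interfaces or data model.

```python
from dataclasses import dataclass

@dataclass
class MeterReading:
    train_id: str
    grid_area: str   # country/grid area where the energy was consumed
    kwh: float

def validate(reading: MeterReading) -> bool:
    # Placeholder plausibility check: a real exchange applies far
    # stricter completeness and validity rules before distribution.
    return reading.kwh >= 0

def route(readings, settlement_endpoints):
    """Central-exchange pattern: every meter sends to one hub, which
    validates each reading and forwards it to the settlement system
    of the grid area where the energy was consumed."""
    for r in readings:
        if validate(r):
            settlement_endpoints[r.grid_area].append(r)

endpoints = {"NO": [], "SE": [], "DK": []}
route([MeterReading("train-42", "NO", 118.5),
       MeterReading("train-42", "SE", 96.0)], endpoints)
print({area: len(batch) for area, batch in endpoints.items()})
# {'NO': 1, 'SE': 1, 'DK': 0}
```

The design payoff is that adding a new settlement system means one new connection to the hub rather than one new interface per metering system.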
{ "dump": "CC-MAIN-2020-29", "language_score": 0.926599383354187, "language": "en", "url": "https://techannouncer.com/smart-label-market-development-trends-competitive-analysis-by-leading-industry-players/", "token_count": 666, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1845703125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:31c5718d-ca4e-4c9c-a86c-75f50736868a>" }
A smart label refers to an item or product identification slip containing more advanced technology than conventional bar code data. Smart labels enable consumers to obtain additional details about a wide range of beverage, food, household, pet care, and personal care products. These smart labels are made from paper, fabrics or plastics and are used as electronic labels, printed labels, or chip labels. They are commonly used to detect theft in libraries, shops and retail stores.

Smart labels enable consumers to gain access to information through their preferred method, such as visiting a website or scanning product codes with a smartphone. Smart labels offer reliable, fast product authentication at the retail store, the pharmacy and even the hospital, using office scanning equipment. High-tech tags and labels help to increase accuracy and productivity along with improving product information and inventory management. The major benefits provided by smart labels include automated reading, high tolerance, reprogrammability, rapid identification, and a reduction in errors. Because of the wide use of smart labelling, manufacturers and logistics service providers can easily track their products and maintain data for inventory management.

The major driving factor for the smart label market is the reduced risk of theft and counterfeiting among retailers. Reductions in training cost and labor cost, together with time savings, are other significant factors driving the growth of the global smart label market. One of the major restraints on the market's growth is the high cost involved in replacing e-displays. The compatibility of smart labels with their interfacing devices also plays an important role in restricting growth. However, many companies, including the major key players in the market, have started investing in smart labels to enhance their anti-theft systems and avoid revenue loss and damage to their inventories, which is expected to raise demand over the forecast period.

For More Information, Request PDF Sample: https://www.transparencymarketresearch.com/sample/sample.php?flag=S&rep_id=39788

The global smart label market is segmented on the basis of components, technology, application, and geography. On the basis of components, the smart label market is segmented into batteries, microprocessors, transceivers and others. On the basis of technology, the global smart label market is further segmented into RFID, Electronic Article Surveillance, Electronic Shelf Labels, Near Field Communication Tags, and others. RFID technology is further bifurcated into low frequency (LF), high frequency (HF) and ultra high frequency (UHF). On the basis of application, the global smart label market is segmented into retail, consumer electronics, apparel, transportation & logistics, warehousing, inventory management and others.

Geographically, the global smart label market can be segmented into North America, Europe, Middle East & Africa, Asia Pacific and Latin America. The North America market is expected to hold a significant share due to advanced technology growth, product tracking and distribution of consumer goods, followed by Europe. The Asia Pacific region is anticipated to be an emerging market for smart labels due to growing trends in inventory management and a growing supply chain and logistics industry.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9415793418884277, "language": "en", "url": "https://www.gov.scot/publications/poverty-income-inequality-scotland-2015-16/pages/3/", "token_count": 622, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06884765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:14ae6d2d-c6fa-46cc-a56e-25b46e0e81cf>" }
Background Notes and Methodology

In this publication all statistics are based on net income — that is, income after taxes and including benefits. Income is calculated at the household level, and reflects the income available to the household after taxes (including council tax) are paid and all benefits and tax credits have been received. Unless otherwise stated, incomes for previous years are in 2015/16 prices (real prices).

All figures in this publication are rounded to the nearest 10 thousand individuals or whole percentage point. Percentage change and percentage point change are calculated prior to rounding. In some cases, calculations based on the unrounded figures do not match those based on the rounded ones, meaning that changes reported in the text of the report do not always match the rounded figures in the tables and charts.

Poverty is measured at the household level. If household income is below the poverty threshold, all people within the household are in poverty. This is based on the assumption that income is shared equally across all members of the household, and that they have the same standard of living.

The estimates presented in this publication are based on a sample survey and are therefore subject to sampling error. Confidence intervals are a measure of sampling error. A 95 per cent confidence interval for an estimate is the range that contains the 'true' figure on average 19 times out of 20 if sampling error were the only source of error. For example, when looking at poverty rates for all individuals, the true value is likely to be within a range of around +/- 3 percentage points around the central estimate presented in this report, whilst a change of around 4 percentage points or more is generally required to represent a statistically significant change over time. New methodology has been established this year to improve the calculation of confidence limits. More information can be found here: https://www.gov.uk/government/publications/changes-to-dwp-households-and-pensioners-incomes-statistics-201516-statistical-notice

Unless specifically stated, annual changes in the numbers and percentages of people in poverty presented in the body of this report are not statistically significant. Caution should be exercised when looking at year-on-year comparisons, with longer-term trends often giving a clearer picture.

Changes to statistics 2015/16

This publication includes a change to the statistics compared with previous publications: pensioners are defined as all those adults above state pension age, and working-age adults are defined as all adults up to the state pension age. Between April 2010 and March 2016 the state pension age for women increased to 63, and it will increase further to 65 by November 2018. At this point the state pension age for men and women will be the same. The changes do not affect the state pension age for men, which remains at 65. Therefore, as with the previous five reports, the age groups covered by the pensioner poverty analysis will change for the 2015/16 report. The pensioner material deprivation statistics will continue to be based on pensioners aged 65 and over.

Email: Andrew White
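A rough sketch of the significance logic described above: two survey estimates differ significantly only when the gap between them exceeds their combined sampling error. This is a generic approximation with invented figures, not DWP's exact methodology (see the linked statistical notice for that).

```python
def significant_change(rate_a, rate_b, half_width_a, half_width_b):
    # Two estimates differ significantly (at ~95%) if their difference
    # exceeds the combined sampling error of the two years.
    margin = (half_width_a ** 2 + half_width_b ** 2) ** 0.5
    return abs(rate_a - rate_b) > margin

# Hypothetical: poverty rate 19% one year, 21% the next, each +/- 3pp
print(significant_change(0.19, 0.21, 0.03, 0.03))
# False: a 2pp change is within the ~4.2pp combined margin,
# consistent with the report's "around 4 percentage points" rule of thumb
```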
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9728400111198425, "language": "en", "url": "https://www.kidsfinancialeducation.com/resources/board-games/", "token_count": 230, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.216796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dc2dde84-586a-404d-a107-3d56bd3b8d64>" }
- Game of Life involves players choosing a career and making wise financial moves as they navigate the game board towards retirement.
- Monopoly (and now, app!) was invented in the 1930s and is still teaching players basic money management and cash flow principles as they move around the board, buying and developing real estate.
- Payday teaches players how to manage monthly income and expenditure, including how to handle loan payments, cash windfalls, and other budgeting basics.
- Charge Large teaches sensible use of credit as players navigate the board and build wealth.
- Acquire is a board game where players place tiles on the board, strategically creating corporations and mergers. When corporations merge, players holding stock in the acquired company can cash out their stocks or receive stocks in the take-over company. The game ends when the market (board) is full, and the winner is the player with the strongest portfolio of stock and cash.

Note: This list does not represent SageVest's endorsement of any of the products discussed.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8847557902336121, "language": "en", "url": "http://www.climateaction.org/white-papers/world-bank-ecofys-vivid-economicsstate-and-trends-of-carbon-pricing", "token_count": 140, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.12255859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:008a6b66-4e91-44ac-882e-a160d7a78a10>" }
World Bank/Ecofys/Vivid Economics: State and Trends of Carbon Pricing

About this report

2015 witnessed an historic global step forward in taking action on climate change. In Paris, world leaders reached an agreement at the 21st Conference of the Parties (COP 21) to the United Nations Framework Convention on Climate Change (UNFCCC) to keep the global average temperature increase to well below 2°C and pursue efforts to hold the increase to 1.5°C. The Paris Agreement encouraged all countries, for the first time, to make individual, voluntary commitments to contribute to this global goal, marking the beginning of a new era in the cooperative effort to limit climate change.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9497913718223572, "language": "en", "url": "https://accountinginstruction.info/tag/expense/", "token_count": 1838, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01409912109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e02edf1e-13b3-486b-aa9b-f064879bbf3c>" }
Hello, in this lecture we'll discuss the bank reconciliation. At the end of it, we will be able to describe what a bank reconciliation is, perform a bank reconciliation, make the needed adjustments to our books in the reconciliation process, and record those adjustments. This starts off the bank reconciliation process.

We'll start, of course, with the bank statement. The bank statement comes from the bank, generally at the end of the month, although we could get it electronically at any time. It's still good to get it as of the end of the month so that we have a set timeframe for when we reconcile our account and deal with the timing differences. The bank statement coming from the bank will be as of the end of February in this case, and it will have the typical information on a bank statement: the beginning balance, then the additions to it — generally our deposits — and then the deductions from it.

In this presentation we will discuss what should be included in inventory costs. When considering inventory cost, clearly we have the cost of the inventory itself, which would be included. But there are other components to keep in mind that could be included in the cost of inventory as we record that purchase price, or the amount in dollars of inventory, on the financial statements. One is the shipping cost, which typically has to do with the terms of FOB shipping point or FOB destination — a common question and a common factor in practice that we need to consider.

In this presentation we will take a look at a cash payments journal for a service company. The cash payments journal deals with transactions involving cash payments; that's the factor that will be the same for all transactions, meaning this column here — cash payments — will always be affected when using the cash payments journal. The cash payments journal is used in more of a manual system rather than an automated system. However, it's good to know what the cash payments journal is even when using an automated system, because it's very likely that we would need to run reports that are similar in format to a cash payments journal, and it's useful to see how different types of accounting structures can be built.

Hello, in this presentation we're going to look at the creation of the income statement from the trial balance. First, we want to look at the trial balance and consider where the income statement accounts will be. The trial balance will be in order: we have the assets in green, the liabilities in orange, the equity in light blue, and then the income statement accounts, including revenue and expenses. That's what we are concentrating on here — those income statement accounts — and that is what will be used to create the income statement. Note that all the blue accounts represent the equity section, so the income statement really is part of total equity. If we consider that on the balance sheet, then we're really looking at a component of the capital account.
Hello, in this lecture we're going to create the equity section of the balance sheet. In prior lectures we looked at the current assets section, the property, plant and equipment section, and then the liability section. This will round out the balance sheet, where we finally get total assets equal to total liabilities and equity, representing the double-entry accounting system in terms of the accounting equation. We are, of course, pulling these numbers from the adjusted trial balance. The adjusted trial balance also represents the double-entry accounting system, but in the format of the building blocks of debits and credits. All we're doing is taking those building blocks in terms of debits and credits and rearranging them into the accounting equation, so that readers who don't understand debits and credits can read them. The equity section is a bit confusing when we convert from the trial balance to the balance sheet.

Hello, in this lecture we're going to record the adjusting entry related to depreciation. We record it on the left-hand side — that's where the journal entry goes — and then post it to the trial balance on the right-hand side, the trial balance being in the format of assets in green and liabilities in orange; then we have the equity section in light blue and the income statement, including revenue and expenses, in darker blue. We'll first talk about which accounts are affected and then go back and explain why. First, we know that it's an adjusting entry, so there are added rules; you want to keep adjusting entries separate in your head from normal journal entries. All entries have at least two accounts and an equal number of debits and credits, as do adjusting entries. But adjusting entries are all made as of the cutoff date — 12/31 in this case — and they generally have one account above the equity line, above the capital (meaning a balance sheet account), and one account below that line (meaning an income statement account).

Hello, in this lecture we're going to record the adjusting entry related to insurance. We record the transaction up here on the left-hand side and then post it to the trial balance on the right-hand side, the trial balance being in the format of assets in green and liabilities in orange; then we have the equity section in light blue and the income statement, including revenue and expenses, in darker blue. We will start by identifying the accounts that will be affected and then talk about why. We know that we have adjusting entries. Remember that adjusting entries should be kept separate in your head: they have the same characteristics of debits and credits with at least two accounts affected, but they are also all made as of the end of the time period, either the end of the month or the end of the year.

Hello, in this presentation we're going to talk about types of adjusting journal entries. When considering adjusting journal entries, we want to know where we are within the accounting process, within the accounting cycle. All the normal entries have been done: the bills have been paid, the invoices have been entered for the month, and we have reconciled the bank accounts. Now we consider the adjusting process. Those adjusting journal entries are needed in order to make the adjusted trial balance so that we can create the financial statements from it, the adjusting journal entries being used to get as close to an accrual basis as possible. The categories of adjusting journal entries, each of which contains more specific types, include prepaid expenses, unearned revenue, accrued expenses and accrued revenue. Let's consider each of these, starting with the first type, prepaid expenses. Prepaid expenses are items paid in advance.
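To tie the depreciation lecture above to concrete numbers, here is a minimal sketch of a year-end adjusting entry under straight-line depreciation. The asset cost, salvage value and useful life are invented for illustration.

```python
# Straight-line depreciation for a 12/31 adjusting entry.
cost, salvage, useful_life_years = 12_500, 500, 5
annual_depreciation = (cost - salvage) / useful_life_years  # 2,400 per year

# One balance sheet account and one income statement account,
# with equal debits and credits, as the lecture describes:
entry = [
    ("Depreciation Expense",     "debit",  annual_depreciation),
    ("Accumulated Depreciation", "credit", annual_depreciation),
]
for account, side, amount in entry:
    print(f"{account:<26} {side:<7} {amount:>8,.0f}")
```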
Those adjusting journal entries are needed in order to make the adjusted trial balance so that we can create the financial statements from them. The adjusting journal entries being used to be as close to an accrual basis as possible. those categories of adjusting journal entries, which will then have more types of adjusting entries within each category will include prepaid expense, unearned revenue, accrued expenses and accrued revenue. Let’s consider each of these we have the types of adjusting entries first type prepaid account expenses. prepaid expenses are items paid in advance. Hello in this presentation that we will discuss a thought process for recording financial transactions using debits and credits. Objectives. At the end of this, we will be able to list a thought process for recording journal entries. explain the reasons for using a defined thought process and apply thought process to recording journal entries. When we think about a thought process, we’re going to start with cash as the first part of the thought process is cash affected. We’ve discussed the thought process when we have considered the double entry accounting system in the format of the accounting equation, the thought process will be much the same here we now applying that thought process to the function of debits and credits recording the journal entries with regard to debits and credits. Whoa in this presentation we will be discussing a cash method versus an accrual method objectives. We will be able to at the end of this, define and explain a cash method, define and explain an accrual method and explain the difference between the cash and accrual methods. When considering the cash method and the accrual method, they’re not necessarily completely different or diametrically opposed. But when presented, they are often presented in this format partially because in order to explain one, it’s often useful to know the other it’s useful to be able to compare the differences between the two methods.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9460888504981995, "language": "en", "url": "https://thesoundingline.com/total-us-debt-is-over-75-trillion-debt-to-gdp-lower-than-in-2008/", "token_count": 579, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.18359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:02254c3a-27ff-4665-a410-6c962ede4032>" }
Submitted by Taps Coogan on the 29th of March 2019 to The Sounding Line.

Based on the latest data from Q3 2018, total private and public US debt has hit an all-time record high of $75.3 trillion, or a staggering 365% of GDP.

The total debt figure is based on the Fed's Financial Accounts of the US and includes:
- Open market paper
- Treasury securities
- Agency and GSE-backed securities
- Municipal securities
- Corporate and foreign bonds held in the US
- Depository institution loans
- Other loans and advances
- Consumer credit, including student debt, revolving credit, credit card debt, auto debt, and anything else categorized as consumer credit

The Fed's measure of outstanding US treasuries appears not to count certain types of intra-governmental debt holdings, as it is consistently lower than the national debt. Therefore, I have added the difference between the national debt and the Fed's estimation of outstanding treasuries to try to capture the difference. As a result, my estimation of total US debt is higher than some others that you may find.

When it comes to total debt in the US, there is some good news. Although the dollar value of the total debt continues to climb, the debt level has shrunk from 405% of GDP in 2008 to 365% in Q3 2018. The modest decline in total debt-to-GDP since 2008/2009 is a result of a decrease in household debt relative to the economy, as we discussed here, as well as a decline in financial debt.

The following chart shows the main components of total US debt. While household and financial debt-to-GDP have declined, government and corporate debt have risen. Corporate debt, while the smallest of the four, is well above its Financial Crisis levels.

The following chart from Compass shows that US debt levels are actually lower than those of some other notable developed economies, including Japan, the UK, and the Eurozone. The total US debt level is also likely below China's.

Virtually all developed economies, and most developing economies, are now grossly over-indebted. In absolute terms, the US is the most indebted economy in the world and the US Federal government is the most indebted single entity. Nonetheless, relative to its economy, the US has actually very modestly deleveraged since the Financial Crisis and, while certain sub-sectors like the federal government and corporate debt are highly concerning, holistically the US is not the most glaring debt-bubble economy in the world.
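As a quick sanity check on the headline ratio, the GDP implied by "$75.3 trillion at 365% of GDP" can be backed out directly (a back-of-envelope check, not a figure from the article):

```python
total_debt_tn = 75.3   # total public + private US debt, Q3 2018
debt_to_gdp = 3.65     # 365% of GDP
print(f"Implied GDP: ${total_debt_tn / debt_to_gdp:.1f} trillion")
# -> Implied GDP: $20.6 trillion, consistent with US GDP in late 2018
```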
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9793106913566589, "language": "en", "url": "https://vittana.org/14-far-reaching-reaganomics-pros-and-cons", "token_count": 1874, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2734375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d4b93d86-56a7-432a-a62e-adf8e96502a4>" }
Reaganomics is a term used to describe the economic policies instituted during the administration of President Ronald Reagan in the United States. Although all of his policies fall under the term, it is most commonly associated with the promotion of an unrestricted free-market economy combined with a reduction of tax responsibilities. It applies to all individuals, but the theory is weighted toward businesses and wealthy individuals. Following the principles of trickle-down theory, the idea is that by giving the wealthiest entities more access to their wealth, they'll be able to stimulate business investment, and that creates more jobs, higher wages, and additional benefits for the rest of society.

The advantage of Reaganomics is that it can lead to higher levels of productivity and economic growth. Lower income taxes that apply to all income groups can improve the incentive people have to seek out employment opportunities, explore innovation, or create their own business opportunities. When increased profits are achieved, the idea is that they can be reinvested to achieve more output, which results in more growth, and that means more money can be earned.

The primary disadvantage of Reaganomics is that it took wealth out of the country. Over time, as businesses and the wealthy class have had time to save, they've been able to store their money in off-shore accounts. This has helped those entities avoid paying taxes on those amounts, which means only they get to benefit from the positive economic impacts of this policy.

Here are some of the other key points to consider when looking at the Reaganomics pros and cons.

List of the Pros of Reaganomics

1. Inflation effects could be reduced on a societal level.
The primary benefit that everyone experienced from Reaganomics was a reduction of the influence that inflation was having on the economy. This was combined with high interest rates. People could stash money in a bank and see it grow, but they had to use almost all their cash to meet their daily needs. By controlling the influence of inflation, many households were able to see a rise in their disposable income.

2. It lowered taxes for almost everyone.
When the first policies of Reaganomics were introduced to the American public, proposals to lower top tax rates began. The top marginal rate in 1970 was 70%, while the top rate on capital gains was 28%. In 1982, when Reaganomics first began to make its impact, the top rate on regular income became 50%. It would eventually become 28%. At the same time, the top rate on capital gains went to 23.7%, and then 20%.

3. It encouraged legislators to follow good accounting practices.
Increased income almost always results in poor purchasing habits. The U.S. government was encouraged to spend money with additional wisdom because of Reaganomics, as a way to promote fiscal responsibility. Extraneous programs were eliminated, unused services were stopped, and that helped to reduce the amount of money the government required to function properly.

4. Investment opportunities were created.
Although Reaganomics targeted the wealthy class, everyone had the opportunity to get involved with investing thanks to their access to extra cash. Everyone could invest whatever wealth they had to create more wealth for themselves. The idea was that each socioeconomic class could improve its quality of life because the government was providing incentives to do less spending and more investing.

5. It created a support network for productivity.
Reaganomics also took a hard stance on drug use at the same time it was encouraging fewer restrictions on the free-market economy. According to The Atlantic, this helped to create a society that saw fewer violent crimes through strict drug policies. Interestingly enough, the added restrictions on drug use, including long mandatory sentencing guidelines, created cheaper drugs that more people could afford with their current levels of productivity.

6. Most people saw themselves as being better off.
By the end of his presidency, the final approval rating of Ronald Reagan was 68% across all 8 years he was in charge. 71% of people approved of how Reagan handled foreign relationships. 62% of people, according to a 1989 article by Steven Roberts in the New York Times, approved of the way that Reagan handled the economy. Even with a sampling error of 3%, at the time, those were the highest approval ratings for any president after World War II.

List of the Cons of Reaganomics

1. Inequality doesn't lead to higher rates of economic growth.
The Organization for Economic Cooperation and Development, or OECD, has found that wealth inequality is steadily rising, especially since the Great Recession years of 2007-2009. That inequality, which is promoted by Reaganomics, has created lower levels of economic growth instead of higher levels of growth. In the United States, the impact on growth has created a GDP which is estimated to be almost 10% lower than it would have been without this policy.

2. High wealth earners have no incentive to share their earnings.
Reaganomics makes an assumption that high wealth earners will "do the right thing." The only problem is that an unchecked free-market economy makes no requirement to share. Those with high incomes have a greater ability to accumulate wealth, which allows them to create an even higher income. That is why societies which use some version of Reaganomics see income inequality continue to grow. The wealthy can reinvest their own dividends and profits for themselves, which is something the other classes cannot do.

3. It reduced income levels for a majority of Americans.
The total of income shares in the United States from 1979-2007 dropped nearly 10% for the lowest 80% of earners. At the same time, the top 1% of earners in the U.S. saw an almost equal rise in their income share. The households that fell into the top 20% were only able to break even with their income. When inflation is accounted for, the value of a household income today for the middle class is lower than what it was before Reaganomics. Corporate profits in the U.S. have increased significantly, but real median incomes have seen no benefit from this increase at all.

4. Deficits and the national debt exploded under Reagan.
During the years of the Reagan administration, the annual deficits averaged 4.2% of GDP, after inheriting a deficit of 2.7% of GDP under the final year of the Carter administration. The inflation-adjusted rate of growth fell from 4% under Carter to 2.5% under Reagan. Although productivity growth and GDP per working adult rose under Reagan, the national debt nearly tripled under his administration. In 1981, the national debt of the U.S. was less than $1 trillion. By 1988, the national debt was $2.6 trillion. It has continued to rise ever since.

5. It changed the financial status of the United States.
Under the 8 years of Ronald Reagan, the U.S. went from being the largest creditor nation in the world to the largest debtor nation in the world.

6. Reagan had to raise taxes to save Reaganomics.
Near the end of his administration, Reagan ended up needing to raise taxes to shore up the shortfalls being experienced by the government. To stop the economy from going into a recession, more than 10 different tax increases were eventually implemented. When evaluating the Reaganomics pros and cons, it is important to remember what the initial response to the first policies of this economic theory happened to be. More asset sales became taxable, tax breaks were reduced, and base-broadening was implemented.

7. Reaganomics was never fully implemented either.
One of the ways Reagan suggested that his tax policies could be funded was to cut spending in certain departments, such as the Department of Education. More than $1 billion in spending was eliminated, but the requested totals never came to fruition. That also helped to contribute to a deficit that reached 6% of GDP at its peak during the Reagan administration.

8. Reagan promoted additional spending in other departments.
Spending wasn't cut across the board in the government. Certain departments saw dramatic increases in spending during the Reagan years, such as the Department of Defense. Under Reagan, the defense budget was increased no fewer than 6 times. That created a system of defense contracting throughout the country that was unprecedented outside of World War II. Over 50 active defense contractors were employed through Reaganomics. By the second term of Bill Clinton, that number had been reduced to just 5.

These Reaganomics pros and cons show an economic system which requires voluntary compliance to be successful. In a free-market economy, this goes against its very principles. Capitalism is about looking out for oneself above anyone else. Reaganomics implies that the wealthy class can improve their standing while helping others succeed too, but that's why it doesn't work.

Blog Post Author Credentials
Louise Gaille is the author of this post. She received her B.A. in Economics from the University of Washington. In addition to being a seasoned writer, Louise has almost a decade of experience in Banking and Finance. If you have any suggestions on how to make this post better, then go here to contact our team.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9752546548843384, "language": "en", "url": "https://www.pnc.com/en/about-pnc/topics/pnc-pov/economy/pnc-pov-money-mystery-large-denomination-bills.html", "token_count": 1000, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.365234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:308e57f4-eabb-4789-b7f0-ce21e18daa97>" }
Big money is hard to come by, especially if you want a $500 or $5,000 bill. Or how about a $100,000 bill?

Large denomination bills were part of everyday currency circulation in the United States for more than 100 years. Between 1862 and 1945, the U.S. Bureau of Engraving and Printing produced $500, $1,000, $5,000 and $10,000 bills, or as they called them, "notes." The notes were mainly used for large transactions between banks and businesses. The average person may have used them for large purchases, depending on where they lived and whether merchants were comfortable accepting such big money.

While many of us have seen and used the $100 bill adorned with the face of Benjamin Franklin, it was World War I-era president Woodrow Wilson whose face was on the largest American currency – the $100,000 U.S. Gold Certificate. However, it was printed only briefly and only for internal transactions between Federal Reserve banks. The Wilson gold certificate is one of five now-obsolete denominations larger than the $100 bill. Production was halted during World War II, and the Federal Reserve Board officially stopped distributing the notes in 1969.

The demise of big bills can be attributed to technology. "In an age of electronic transfer of funds, these notes are obviously not needed," said Mary Beth Corrigan, PNC Legacy Project consultant. Her role includes advising on the bank's artifacts that date back to the 1700s, including checks written by U.S. presidents.

Although the $100,000 gold certificate will never be found in public circulation, there are rare occasions when other big bills find their way into people's hands. Some years ago, a young man walked into a PNC Bank branch and wanted to deposit 10 $1,000 bills. "The bills had belonged to the man's grandfather," said Karen Morgan from PNC's bank operations. "They were given to him as a college graduation gift and at that time were worth more than their $1,000 face value. But the young man was more interested in the immediate cash, so the $10,000 was deposited into his bank account." Since the bills were no longer in circulation, they had to be given to the Federal Reserve Bank, where they were destroyed, she added.

Morgan relates another instance involving a very rare $10,000 bill. "Shortly after an elderly man passed away, members of his family were at his house going through his possessions," she said. "Down in the basement they found a shoebox and inside was an old $10,000 bill." Unsure what to do with a bill that was long out of circulation, a family member contacted a local PNC Bank branch for help. Since it was a rare bill, PNC contacted the Federal Reserve to determine the bill's authenticity. The Fed checked the bill's serial number and confirmed that it was real. The family cashed in the bill and the bank submitted it to be destroyed, Morgan said.

Private collectors are now the primary home of those big bills that are still out there. Although they remain at face value to the U.S. government, these notes are often traded by their owners for higher values because they are so rare. A select few high-denomination paper notes are also in museums. Numismatists, collectors of coins and banknotes, estimate that a $10,000 note, for example, can fetch its owner between $65,000 and $100,000 on the collectors' market. For this reason, there is little incentive for owners to deposit them into a bank at face value.
If you are lucky enough to come across one of these bills, keep in mind that depositing a large-denomination bill at a bank represents the end of the line for the relic. Because they remain legal tender but are not circulated, the Federal Reserve reconciles the paper note and, through an automated process, destroys it. Large denomination "notes" featured the faces of various U.S. presidents and a chief justice. Visit PNC Legacy Project for information about the history of PNC and its predecessor banks dating back hundreds of years.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9130463600158691, "language": "en", "url": "http://dictionnaire.sensagent.leparisien.fr/Copenhagen%20Consensus/en-en/", "token_count": 3682, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1962890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a06853e2-ec34-4906-a375-1464dcc8f426>" }
Copenhagen Consensus is a project that seeks to establish priorities for advancing global welfare using methodologies based on the theory of welfare economics. It was conceived and organized by Bjørn Lomborg, the author of The Skeptical Environmentalist and the then director of the Danish government's Environmental Assessment Institute. It is now run by The Copenhagen Consensus Center under Lomborg's directorship at the Copenhagen Business School.

The project considers possible solutions to a wide range of problems, presented by experts in each field. These are evaluated and ranked by a panel of leading economists. The emphasis is on rational prioritization by economic analysis, justified as a corrective to standard practice in international development, where, it is alleged, media attention and the "court of public opinion" results in priorities that are often far from optimal.

The project has held conferences in 2004, 2008 and 2009. The 2008 report identified supplementing vitamins for undernourished children as the world's best investment. The 2009 conference, dealing specifically with climate change, proposed research into marine cloud whitening (ships spraying seawater into clouds to make them reflect more sunlight and thereby reduce temperature) as the top climate change priority, though climate change itself is ranked well below other world problems. The initial project was co-sponsored by the Danish government and The Economist. A book summarizing the Copenhagen Consensus 2004 conclusions, Global Crises, Global Solutions, edited by Lomborg, was published in October 2004 by Cambridge University Press. A book summarizing the 2008 conclusions is in the process of publication.

Eight leading economists met May 24–28, 2004 at a roundtable in Copenhagen. A series of background papers had been prepared in advance to summarize the current knowledge about the welfare economics of 32 proposals ("opportunities") from 10 categories ("challenges"). For each category, one assessment article and two critiques were produced. After a closed-door review of the background papers, each of the participants gave economic priority rankings to 17 of the proposals (the rest were deemed inconclusive). Below is a list of the 10 challenge areas and the author of the paper on each. Within each challenge, 3-4 opportunities (proposals) were analyzed:

The experts agreed to rate seventeen of the thirty-two opportunities within seven of the ten challenges. The rated opportunities were further classified into four groups: Very Good, Good, Fair and Bad. The highest priority was assigned to implementing certain new measures to prevent the spread of HIV and AIDS. The economists estimated that an investment of $27 billion could avert nearly 30 million new infections by 2010. Policies to reduce malnutrition and hunger were chosen as the second priority. Increasing the availability of micronutrients, particularly reducing iron deficiency anemia through dietary supplements, was judged to have an exceptionally high ratio of benefits to costs, which were estimated at $12 billion. The fourth priority identified was controlling and treating malaria; $13 billion costs were judged to produce very good benefits, particularly if applied toward chemically-treated mosquito netting for beds. The fifth priority identified was increased spending on research into new agricultural technologies appropriate for developing nations.
Three proposals for improving sanitation and water quality for a billion of the world’s poorest followed in priority (ranked sixth to eighth: small-scale water technology for livelihoods, community-managed water supply and sanitation, and research on water productivity in food production). Completing this group was the 'government' project concerned with lowering the cost of starting new businesses. Ranked tenth was the project on lowering barriers to migration for skilled workers. Eleventh and twelfth on the list were malnutrition projects - improving infant and child nutrition and reducing the prevalence of low birth weight. Ranked thirteenth was the plan for scaled-up basic health services to fight diseases. Ranked fourteenth to seventeenth were: a migration project (guest-worker programmes for the unskilled), which was deemed to discourage integration; and three projects addressing climate change (optimal carbon tax, the Kyoto Protocol and value-at-risk carbon tax), which the panel judged to be least cost-efficient of the proposals. The panel found that all three climate policies have "costs that were likely to exceed the benefits". It further stated "global warming must be addressed, but agreed that approaches based on too abrupt a shift toward lower emissions of carbon are needlessly expensive." In regard to the science of global warming, the paper presented by Cline relied primarily on the framework set by Intergovernmental Panel on Climate Change, and accepted the consensus view on global warming that greenhouse gas emissions from human activities are the primary cause of the global warming. Cline relies on various research studies published in the field of economics and attempted to compare the estimated cost of mitigation policies against the expected reduction in the damage of the global warming. Cline used a discount rate of 1.5%. (Cline's summary is on the project webpage ) He justified his choice of discount rate on the ground of "utility-based discounting", that is there is zero bias in terms of preference between the present and the future generation (see time preference). Moreover, Cline extended the time frame of the analysis to three hundred years in the future. Because the expected net damage of the global warming becomes more apparent beyond the present generation(s), this choice had the effect of increasing the present-value cost of the damage of global warming as well as the benefit of abatement policies. Members of the panel including Thomas Schelling and one of the two perspective paper writers Robert O. Mendelsohn (both opponents of the Kyoto protocol) criticised Cline, mainly on the issue of discount rates. (See "The opponent notes to the paper on Climate Change" ) Mendelsohn, in particular, characterizing Cline's position, said that "[i]f we use a large discount rate, they will be judged to be small effects" and called it "circular reasoning, not a justification". Cline responded to this by arguing that there is no obvious reason to use a large discount rate just because this is what is usually done in economic analysis. In other words climate change ought to be treated differently than other, more imminent problems. The Economist quoted Mendelsohn as worrying that "climate change was set up to fail". Moreover, Mendelsohn argued that Cline's damage estimates were excessive. 
Citing various recent articles, including some of his own, he stated that "[a] series of studies on the impacts of climate change have systematically shown that the older literature overestimated climate damages by failing to allow for adaptation and for climate benefits." After the results were published, members of the panel, including Schelling, criticised the way this issue was handled in the Consensus project.

In the Copenhagen Consensus 2008, the solutions for global problems have been ranked in the following order: Unlike the 2004 results, these were not grouped into qualitative bands such as Good, Poor, etc. As with the 2004 project, Lomborg's ranking scheme placed efforts to cut carbon dioxide emissions last. Gary Yohe, one of the authors of the global warming paper, subsequently accused Lomborg of "deliberate distortion of our conclusions", adding that "as one of the authors of the Copenhagen Consensus Project's principal climate paper, I can say with certainty that Lomborg is misrepresenting our findings thanks to a highly selective memory". Kåre Fog further pointed out that the future benefits of emissions reduction were discounted at a higher rate than for any of the other 27 proposals, stating "so there is an obvious reason why the climate issue always is ranked last" in Lomborg's environmental studies. In a subsequent joint statement settling their differences, Lomborg and Yohe agreed that the "failure" of Lomborg's emissions reduction plan "could be traced to faulty design".

In 2009, the Copenhagen Consensus established a Climate Change Project specifically to examine solutions to climate change. The process was similar to the 2004 and 2008 Copenhagen Consensus, involving papers by specialists considered by an expert panel of economists. The panel ranked 15 solutions, of which the top 5 were: The benefits of the number 1 solution are that if the research proved successful this solution could be deployed relatively cheaply and quickly. Potential problems include environmental impacts e.g. from changing rainfall patterns. Measures to cut carbon and methane emissions, such as carbon taxes, came bottom of the results list, partly because they would take a long time to have much effect on temperatures.

The 2004 Copenhagen Consensus attracted various criticisms. The 2004 report, especially its conclusion regarding climate change, was subsequently criticised from a variety of perspectives. The general approach adopted to set priorities was criticised by Jeffrey Sachs, an American economist and advocate of both the Kyoto protocol and increased development aid, who argued that the analytical framework was inappropriate and biased and that the project "failed to mobilize an expert group that could credibly identify and communicate a true consensus of expert knowledge on the range of issues under consideration". Tom Burke, a former director of Friends of the Earth, repudiated the entire approach of the project, arguing that applying cost-benefit analysis in the way the Copenhagen panel did was "junk economics". John Quiggin, an Australian economics professor, commented that the project is a mix of "a substantial contribution to our understanding of important issues facing the world" and "exercises in political propaganda" and argued that the selection of the panel members was slanted towards the conclusions previously supported by Lomborg.
Quiggin observed that Lomborg had argued in his controversial book The Skeptical Environmentalist that resources allocated to mitigating global warming would be better spent on improving water quality and sanitation, and was therefore seen as having prejudged the issues. Under the heading "Wrong Question", Sachs further argued that: "The panel that drew up the Copenhagen Consensus was asked to allocate an additional US$50 billion in spending by wealthy countries, distributed over five years, to address the world's biggest problems. This was a poor basis for decision-making and for informing the public. By choosing such a low sum — a tiny fraction of global income — the project inherently favoured specific low-cost schemes over bolder, larger projects. It is therefore no surprise that the huge and complex challenge of long-term climate change was ranked last, and that scaling up health services in poor countries was ranked lower than interventions against specific diseases, despite warnings in the background papers that such interventions require broader improvements in health services."

From a purely mathematical point of view, if economic growth is projected far enough into the future, it will always be better to postpone difficult and expensive problems (like climate change) until an unspecified point in the future when we are much richer than today and have more abundant resources available to solve the problem in question. Projecting even a paltry 1.5% rate of economic growth one hundred years into the future implies that the global economy will more than quadruple in size, and that any large and expensive project should be handled at that time or later. On the other hand, using a smaller discount rate such as 1.5% actually increases the present-value cost of the global warming damage that falls on future generations, while a larger discount rate would make the problem of global warming look smaller today, since the majority of the costs of global warming will occur in the future, not the present. So it is not entirely clear that Cline was wrong to choose a smaller discount rate of 1.5%. If climate change is posited to have dire economic consequences and negative implications for economic growth long before that point in time, it might have been judged to be a more immediate problem by the Copenhagen Consensus. By not including any negative feedback loops on the world economy from failing to solve climate change, the Copenhagen Consensus implicitly states that it does not matter, from an economic point of view, whether we solve climate change or not.

In response Lomborg argued that $50 billion was "an optimistic but realistic example of actual spending." "Experience shows that pledges and actual spending are two different things. In 1970 the UN set itself the task of doubling development assistance. Since then the percentage has actually been dropping". "But even if Sachs or others could gather much more than $50 billion over the next 4 years, the Copenhagen Consensus priority list would still show us where it should be invested first."

One of the Copenhagen Consensus panel experts later distanced himself from the way in which the Consensus results have been interpreted in the wider debate. Thomas Schelling now thinks that it was misleading to put climate change at the bottom of the priority list. The Consensus panel members were presented with a dramatic proposal for handling climate change.
If given the opportunity, Schelling would have put a more modest proposal higher on the list. The Yale economist Robert O. Mendelsohn was the official critic of the proposal for climate change during the Consensus. He thought the proposal was way out of the mainstream and could only be rejected. Mendelsohn worries that climate change was set up to fail.

To try and define climate policy as a trade-off against foreign aid is thus a forced choice that bears no relationship to reality. No government is proposing that the marginal costs associated with, for example, an emissions trading system, should be deducted from its foreign aid budget. This way of posing the question is both morally inappropriate and irrelevant to the determination of real climate mitigation policy.

Quiggin argued that the members of the 2004 panel, selected by Lomborg, were "generally towards the right and, to the extent that they had stated views, to be opponents of Kyoto". Sachs also noted that the panel members had not previously been much involved in issues of development economics, and were unlikely to reach useful conclusions in the time available to them.

Commenting on the 2004 Copenhagen Consensus, climatologist and IPCC author Stephen Schneider criticised Lomborg for only inviting economists to participate: In order to achieve a true consensus, I think Lomborg would've had to invite ecologists, social scientists concerned with justice and how climate change impacts and policies are often inequitably distributed, philosophers who could challenge the economic paradigm of "one dollar, one vote" implicit in cost-benefit analyses promoted by economists, and climate scientists who could easily show that Lomborg's claim that climate change will have only minimal effects is not sound science.

Lomborg countered criticism of the panel membership by stating that "Sachs disparaged the Consensus 'dream team' because it only consisted of economists. But that was the very point of the project. Economists have expertise in economic prioritization. It is they and not climatologists or malaria experts who can prioritize between battling global warming or communicable disease."
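The discount-rate dispute at the heart of the climate ranking is easy to make concrete. The short Python sketch below uses a purely hypothetical damage figure chosen for illustration, not a number taken from Cline's paper:

```python
def present_value(future_cost, rate, years):
    """Discount a cost incurred `years` from now back to today."""
    return future_cost / (1 + rate) ** years

# Hypothetical: $1 trillion of climate damage incurred 100 years from now.
for rate in (0.015, 0.05):
    pv = present_value(1_000_000_000_000, rate, 100)
    print(f"discount rate {rate:.1%}: present value ${pv / 1e9:,.0f} billion")
# 1.5% -> roughly $226 billion; 5.0% -> roughly $8 billion
```

The same future damage looks almost thirty times larger under Cline's 1.5% rate than under a more conventional 5% rate, which is why the choice of discount rate, rather than any disagreement about the climate science, drove much of the disagreement recorded above.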
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9443410038948059, "language": "en", "url": "http://fp7-gratitude.eu/about-the-project", "token_count": 437, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.12109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0ccafc8f-4ebc-4cbb-90c2-3227c7418ef1>" }
Cassava and yam are important food security crops for approximately 700 million people. However, losses after harvesting and during processing can be as high as 60% (in the case of yam; 30% for cassava), which is not only detrimental to food security and the environment but also means that opportunities to increase the value generated from these crops are lost. Gratitude (Gains from Losses of Root and Tuber Crops), led by the Natural Resources Institute (NRI), University of Greenwich, in collaboration with 15 other organisations, will help find solutions that reduce waste from post-harvest losses of root and tuber crops and turn unavoidable waste into something of value.

Post-harvest losses are significant and come in three forms:
- physical
- economic (through discounting, or processing into low-value products) and
- bio-wastes.

The Gratitude project aims to reduce these losses to enhance the role that these crops play in food and income security. Post-harvest physical losses are exceptionally high (approximately 30% in cassava and 60% in yam) and occur throughout the food chain. Losses in economic value are also high (e.g. cassava prices discounted by up to 85% within a couple of days of harvest). Wastes come in various forms, e.g. peeling losses can be 15-20%. Waste often has no economic value, which can make processing a marginal business proposition.

Technologies and systems developed and validated within the Gratitude project will particularly benefit small-holder households, and will support small and medium scale enterprises to increase profitability, create new jobs and develop links to large-scale industry. This project will help improve the livelihoods of people on low incomes and enhance the role that these crops play in food and income security.

- FP7 Project reference – 289843
- Start date – 1st January 2012
- Duration – 36 months
- Contract type – Collaborative project
- End date – 31st December 2014
- Project status – Execution
- Project Funding – €3,753,138; EU contribution: €2,850,413
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9673461318016052, "language": "en", "url": "https://aseyeseesit.blogspot.com/2012/07/making-case-for-living-wage.html", "token_count": 448, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.130859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bb5af576-b2e1-4813-a423-a1df8289ffcb>" }
While productivity rates increased 250% since World War II, hourly compensation only increased by 130%, nearly all of which occurred before 1977. (The tiny increase around 2007 may reflect the increase in the federal minimum wage that year.) If wages had kept pace with worker productivity since 1977, the median income in the United States would be roughly twice what it is today. Think how much more federal revenue there would be if the median family income today were $100,000 per year instead of nearly half that amount. Imagine how much less the government would have to spend shoring up low-wage workers. If wages had continued to rise on par with productivity over the past 40 years, our median income would far exceed current living wage levels.

Even Adam Smith was a supporter of living wages. He viewed them as a way to achieve economic growth and equity. In his Wealth of Nations, Smith recognized that rising real wages, leading to the "improvement in the circumstances of the lower ranks of people", were an advantage to society. According to Smith, the government should align the interests of those pursuing profits with the interests of the labor force in order to grow the nation's economy. Smith argued that high wages lead to higher productivity and overall growth. In this way he linked higher wages with increased productivity. For much of our history, this alignment has been evident.

Consider a typical minimum-wage worker: the only reason her employer can pay her minimum wage and count on her coming to work every day is because so much tax money is spent to supplement her wages. If there were no aid to the working poor, her employer would have little choice but to pay this woman a living wage. In this sense, all government aid to the working poor is really a hidden tax break for businesses.

Introduction to the Living Wage Calculator

In many American communities, families working in low-wage jobs make insufficient income to live locally given the local cost of living. Recently, in a number of high-cost communities, community organizers and citizens have successfully argued that the prevailing wage offered by the public sector and key businesses should reflect a wage rate required to meet minimum standards of living. Therefore we have developed a living wage calculator to estimate the cost of living in your community or region. The calculator lists typical expenses, the living wage and typical wages for the selected location.
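Returning to the productivity figures at the top of this piece, here is a rough back-of-envelope version of the argument in Python. The starting income and growth figures are illustrative assumptions for the sketch, not official statistics:

```python
# Counterfactual median income if pay had tracked productivity since 1977.
median_income_today = 51_000   # assumed current median family income (USD)
pay_growth = 0.05              # assumed real compensation growth since 1977 (~5%)
productivity_growth = 0.90     # assumed productivity growth since 1977 (~90%)

counterfactual = median_income_today * (1 + productivity_growth) / (1 + pay_growth)
print(f"${counterfactual:,.0f}")  # ~ $92,000, close to the $100,000 claim above
```

Under these assumed inputs the gap between actual and counterfactual income is roughly a doubling, which is the shape of the claim made above, whatever the exact official figures turn out to be.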
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9420473575592041, "language": "en", "url": "https://blackandcenter.blog/2009/08/23/getting-honest-about-social-security-part-2/", "token_count": 896, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.40234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:57c0ca10-ecc7-40be-b904-3bf34d1cc628>" }
What are Entitlements? Entitlement Spending, at $1.412 trillion in FY 2006, is over half of the U.S. Federal Budget. The largest entitlement spending programs are Social Security and Medicare, as follows: Social Security – $544 billion Medicare – $325 billion Medicaid – $186 billion All other mandatory programs – $357 billion. These programs include Food Stamps, Unemployment Compensation, Child Nutrition, Child Tax Credits, Supplemental Security for the blind and disabled, Student Loans, and Retirement / Disability programs for Civil Servants, the Coast Guard and the Military How Is Social Security Funded? Social Security is funded through payroll taxes. Through 2017, Social Security collects more in tax revenues than it pays out in benefits because there are 3.3 workers for every beneficiary. However, as Baby Boomers start to retire and draw down these benefits, there will be fewer workers to support them. By 2040, the revenues to pay for Social Security will be less than the expenditures. How Is Medicare Funded? Unlike Social Security, Medicare payroll taxes and premiums cover only 57% of current benefits. The remaining 43% is financed from general revenues (i.e. including any surplus remaining from Social Security). Because of rising health care costs, general revenues will have to pay for 62% of Medicare costs by 2030. Medicare has two sections: The Medicare Part A Hospital Insurance program, which collects enough payroll taxes to pay current benefits. Medicare Part B, the Supplementary Medical Insurance program, and Part D, the new drug benefit, which is only covered by premium payments and general tax revenues. How Will the FY 2008 Budget on Entitlement Spending Affect the U.S. Economy? Through 2012, entitlement spending is budgeted at about 10.5% of GDP, with payroll tax revenue at about 6.5% of GDP, so that these unfunded obligations add to the general budget deficit. For example, in FY 2006 Social Security brought in $608 billion in “off-budget,” extra funds from payroll taxes. However, other entitlement programs had expenses that far outweighed this “extra” revenue, creating a mini-deficit of $574 billion within the entitlement spending budget alone. The amount increases to $784 billion by 2012. Long-term, however, the impact of doing nothing about these burgeoning unfunded mandates will be huge. The first Baby-Boomer turns 62 this year, and becomes eligible to retire on Social Security benefits. By 2025, those aged 65+ will comprise 20% of the population. As Boomers leave the work-force and apply for benefits, three things happen: The percentage of the labor under 55 stops growing, providing less payroll taxes to fund Social Security. GDP growth declines to less than 2% due to fewer workers. By 2040, Social Security alone brings in less than it spends. Obama has stated that any further debate on his health care reform proposals needs to be “honest debate”. He implies that critics have been dishonest, which means we’re just lying. In looking at the facts above, one need only ask the following question: Are the budgetary problems facing ‘government workers’ in Washington, DC caused by the private sector, or by the government? Obama wants to overthrow the private health insurance industry and fold it into a government run entitlement. Yet, the federal government has proven itself incapable of managing its current programs. How is adding more of the burden to the government going to resolve the baby boomer issue? 
With all due respect, as a wise man once stated, "government is not the solution to our problems, government is the problem." What we need to be discussing is a way to turn over the government's primary entitlements, Social Security and Medicare, to the private sector, not the other way around. If not, the next thing 'government workers' will be proposing is how they can fold state and private pension money into the black hole of the Social Security Ponzi Fund. Obama's solution: solve a problem by compounding it. "We have to spend more money to keep from going bankrupt." Americans are simply saying, "No".
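As a quick sanity check, the FY 2006 component figures quoted at the top of this piece do sum to the stated $1.412 trillion total:

```python
# Entitlement spending components, FY 2006 (billions of USD, from the figures above)
components = {
    "Social Security": 544,
    "Medicare": 325,
    "Medicaid": 186,
    "All other mandatory programs": 357,
}
total = sum(components.values())
print(f"${total} billion")  # $1412 billion = $1.412 trillion, matching the quoted total
```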
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9481512308120728, "language": "en", "url": "https://cryptoadventure.org/beginners-guide-to-fundamentals-of-trading/", "token_count": 1468, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.028076171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cf4a3566-f135-4a45-8600-70b09f70b1ab>" }
Beginner's Guide to Fundamentals of Trading

Mainly, a trader buys and sells goods with the aim of making profits. That is, buy for less and sell for more. What and how don't matter much; when you choose to buy and sell goods determines whether you are trading or merely investing in that product or service. Let's dig into the details.

To fully understand the basics involved in trading, you should know the difference between investing and trading. Financial experts define trading as buying commodities, shares and sometimes even currencies just to sell them later for short- or medium-term gains. Meanwhile, investing is a gradual process aimed at increasing net worth by buying and holding investible financial instruments such as bonds, stocks, and mutual funds. Investments take years (sometimes decades) to be profitable since they are tied to factors like dividend payments and coupon payments made at specified intervals.

Time creates change, and change correlates with growth. Hence, commodities, shares, and currencies grow in value after a certain period. Therefore, a trader studies the market value of such entities and focuses on the ones that will appreciate quickly in the market. If you purchase an entity whose value takes a long time to increase, then you are investing in that commodity rather than trading. You can earn fast money in trading by merely taking advantage of rising and falling market prices. This business is highly volatile, with very high risks. Still, if you are careful, you could become another billionaire like George Soros (net worth: US$24.2 billion), who made all his earnings through trading.

How to Start Trading

You are already off to a good start if you are reading this article. Most people hear stories that trading is a cash cow with lots of monetary benefits. Hence, they recklessly jump right into it and end up losing a large sum of money. It is important to know all the basics of trading if you want to be successful, and that is precisely what you are going to get from this piece. Here, you will get to know the following fundamentals:
- The key factors of trading
- The financial entities you can trade
- The concept of buying long and selling short
- The trading platforms

The Key Factors of Trading

As mentioned, trading has many risks, and you can lose all your money in a matter of hours. Thus, there is a need to know the main factors you should consider. As a trader, your goal is to gain profits in the shortest time possible. This is the point where you should understand the cycles of trading and know the right time of day to trade. The market cycles include:
- The Accumulation Phase
- The Mark-Up Phase
- The Distribution Phase
- The Mark-Down Phase

A cycle can last anywhere between a few hours or days to a few weeks, depending on your preferred financial instruments. Also, don't forget about the Presidential Cycle. The run-up to an election, the election period, the aftermath and the president's term in office affect almost all financial instruments in the market. For instance, interest rates usually decrease during an election year. So, you should know how to schedule your trades to profit during such periods.

The Financial Entities You Can Trade

Financial instruments are assets you can trade in the financial market. A financial instrument is an entity you can buy or sell in the trading process.
Examples include:
- Currency pairs (the foreign exchange, or FX, market)
- Stock indices
- The stock of companies (also known as individual equities)

When trading with financial instruments, you don't really take ownership of the assets. What you actually do is predict the price of futures contracts in the case of currency pairs, and of Contracts for Difference (CFDs) in the case of stock indices, shares, and commodities. Contract prices vary across all financial instruments, whereby any slight change in the asset's cost results in a similar relative change in its contract. Financial experts recommend trading futures contracts and CFDs because they can be traded quickly and are cost-effective.

The Concept of Buying Long and Selling Short

Buying long is a term commonly used in the trading business to refer to buying a financial instrument. When you hear that traders are buying long (also going long), it means they want the value of that instrument to rise. For example, if you buy a share at $7 and then sell it for $10, you will have made $3 from that share – that's buying long.

Selling short is a technique where traders can benefit when the market value of an instrument falls. If you don't own an asset you speculate will fall in value, you borrow it from someone who does. After obtaining the shares, you can sell them at the current market value. Suppose you were right and after some time the price of the given share drops; you can buy it back (at a price lower than when you borrowed the share) and return it to the owner. For instance, if the current market price of a share is $2,000 and the value drops to $1,200 in, let's say, three weeks, you can make $800 in that period. But you have to be sure the market value will drop in order to profit. It is worse if the market value rises, because you will be forced to repurchase the shares at a higher price, since you have to return them to the owner before the specified time. The arithmetic of both cases is shown in the sketch after this section.

Important Note: You don't have to look for someone who owns a particular share in order to borrow it; brokers make everything ready for you. All you have to do is press Sell or Buy!

The Trading Platform

The revolution of the internet has simplified trading in many ways compared to the old times. Nowadays, you can find brokers online and trade on the various trading platforms they provide for each financial instrument. As long as there is an internet connection, you can access these platforms on any device – whether a PC, smartphone or even a tablet. Like any website, trading platforms differ in functionality and in the ease of operating their interfaces. You should choose a site that has a wide range of features and is easy to navigate. It's a bonus if you can communicate with the broker in one-on-one sessions to direct you to the best deals. Please don't make a deal unless you are sure it's promising.

Trading is straightforward but has many variables to consider before you can accumulate profits. Since it's a repetitive process, you can get the hang of it fast, and you will be making money like it's a walk in the park. Moreover, brokers can help you access profitable shares to trade. Still, you have to be careful of fraudsters who will sweet-talk you into a deal that's worthless. Once you learn all the basics of trading in this article, you will be able to spot a fraudulent broker a mile away. Please don't forget to learn about charts and trading patterns in depth to fully understand all the fundamentals of trading.
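To make the long/short arithmetic above concrete, here is a minimal Python sketch of the two payoff calculations. The prices are the illustrative figures from the examples, not market data:

```python
def long_pnl(buy_price, sell_price, shares=1):
    """Profit on a long position: buy first, sell later at a higher price."""
    return (sell_price - buy_price) * shares

def short_pnl(sell_price, buyback_price, shares=1):
    """Profit on a short position: sell borrowed shares, buy them back later."""
    return (sell_price - buyback_price) * shares

print(long_pnl(7, 10))         # 3    -> the $7 buy / $10 sell example
print(short_pnl(2000, 1200))   # 800  -> the $2,000 short covered at $1,200
print(short_pnl(2000, 2300))   # -300 -> the loss if the price rises instead
```

The third call shows the risk described above: a short position loses money whenever the buy-back price exceeds the price at which the borrowed shares were sold.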
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9516382217407227, "language": "en", "url": "https://qualitycustomessays.com/taylors-rule/", "token_count": 360, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.10400390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:913303c6-0c4a-49ef-9bc8-6ea5eb740be5>" }
Taylor's rule was designed by John Taylor, a Stanford economist, as a formula for recommending the setting of short-term interest rates as the conditions of the economy change, so as to attain the long-run goal of low inflation and the short-run goal of stabilizing the economy (Taylor, 197). Taylor's rule states that the short-term interest rate should be determined by three factors: the short-run interest rate consistent with full employment of resources, the extent to which economic activity is below or above the level of full employment, and the gap between actual inflation and the inflation target the Fed wishes to achieve. The rule recommends a relatively high interest rate when inflation lies above its target or when the economy lies above full employment, and a relatively low interest rate in the opposite situations. The goals may conflict; for instance, in an economy at full employment, inflation may lie above its target. In such cases the rule guides policy makers in balancing the competing considerations when setting the appropriate interest rate.

This is relevant to the stability of the U.S. economy in the past decade. Analyses indicate that Taylor's rule accurately describes the conduct of monetary policy during the past decade of Greenspan's chairmanship (Taylor, 207). Economists both outside and inside the Fed have cited this fact and attributed to it the control of inflation. To conclude, given the current state of the world economy, the rule can be applied to help improve economic performance worldwide.
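The essay describes the rule but never writes it out, so for reference: Taylor's original 1993 specification is commonly written as i = π + r* + 0.5(π − π*) + 0.5y, where π is inflation, π* the inflation target, r* the equilibrium real rate (both conventionally 2%), and y the output gap. A minimal Python sketch:

```python
def taylor_rate(inflation, output_gap, target=0.02, real_rate=0.02):
    """Taylor (1993) rule: i = pi + r* + 0.5*(pi - pi*) + 0.5*y."""
    return inflation + real_rate + 0.5 * (inflation - target) + 0.5 * output_gap

# Inflation on target, economy at full employment -> 4% nominal rate.
print(taylor_rate(0.02, 0.00))   # 0.04
# Inflation above target in an overheating economy -> the rule prescribes 8%.
print(taylor_rate(0.04, 0.02))   # 0.08
```

Both coefficients being positive captures the behaviour described above: the prescribed rate rises when inflation exceeds its target or output exceeds its full-employment level, and falls in the opposite cases.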
{ "dump": "CC-MAIN-2020-29", "language_score": 0.7827169299125671, "language": "en", "url": "https://rodriguezbernal.com/towards-anti-cultural-goods-laundering-regulation-european-union/", "token_count": 2348, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.478515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d5634e81-58d4-4155-88c2-deab422b52af>" }
Some countries in Europe have a strong industry devoted to cultural goods. In particular, the United Kingdom attracts a significant share of all cultural goods transactions that take place in the world. The art market is a prime example of British achievement in a highly competitive global environment. Britain has 29% of the global art and antiques market and is the second largest market in the world, second in size only to the United States. The British art and antiques market generated £7.7 billion in sales and directly supported over 60,000 jobs in 2009. This success is based, above all, on a fiscal and regulatory environment that has enabled the UK to attract the greatest works of art for sale, because this market is highly dependent on cross-border trade: in 2009 art to the value of £2 billion was imported into the UK and exports totalled £2.2 billion. In contrast to other regulations in force in Continental Law systems, like the Spanish, French or Italian ones, which are characterized by rigidity, UK legislation on cultural goods is more flexible and less protective. The comparative chart below (Comparative Chart 1) depicts the differences between the Spanish and English regulations on cultural goods and helps explain why the English market for works of art is so important in the world.

Public opinion is often alarmed when the media discover that important cultural goods coming from plundering are sold almost freely in foreign countries, losing their track forever. In the case of archaeological items, the plundering is especially detrimental. Looting of archaeological digs before the objects have been classified and catalogued makes it almost impossible to identify the origin of the items. So they arrive in foreign countries to be sold in the market, by direct sale or at auction, and the return to the origin countries becomes improbable. How can it happen so easily? Is the regulation currently in force sufficient to fight against the illicit traffic of cultural goods?

Comparative Chart 1

| | United Kingdom | Spain |
|---|---|---|
| National trade – general rule | Free trade | Right of first offer; right of repurchase |
| National trade – exceptions | Treasure finds | Free trade for goods not registered in the public registries created for this purpose (Inventario General de Bienes Muebles and Registro de Bienes de Interés Cultural) |
| International trade – export licence | Required for certain cultural objects more than 50 years of age and valued above specified financial thresholds | Required for cultural goods belonging to Spanish Heritage that are more than 100 years of age, or registered in the Inventario General de Bienes Muebles or in the process of being included in it. Inexportable goods: cultural goods registered in the Registro General de Bienes de Interés Cultural or in the process of being included in it |
| Term to grant | If it is considered that the object does not satisfy any of the Waverley criteria, the licence can be granted in about 2 weeks | About 3 months |
| Criteria to grant | The object does not satisfy any of the Waverley criteria (history, aesthetics, scholarship) | Undetermined criteria |
| Irrevocable offer to acquire the cultural good | No. If the export licence is refused, the applicant remains the owner | Yes. The State can acquire the cultural goods within 3 months from the day the application is submitted, in exchange for paying the value declared by the applicant |
| Infringement | Subject to penalties, including criminal prosecution, under the Customs and Excise Management Act 1979; subject to seizure under the provisions of the same Act | Subject to penalties, including criminal prosecution, under the Criminal Code; subject to seizure under the Spanish Heritage Act (Ley 16/1985, de 25 de junio, del Patrimonio Histórico Español) |

II. Remedies currently in force

Remedies created to fight against the irregular traffic of cultural goods have proven insufficient for several reasons:

- Some national regulations (e.g. in Civil Law countries such as Spain or Italy) are excessively protective, to such an extent that they make any kind of cultural goods transaction almost impossible or very difficult. The consequence of this lack of flexibility is the creation of an illegal, parallel field of commerce beyond the control of the authorities, which includes cultural goods legally held by owners who try to escape legal limits, and goods coming directly from plundering or illicit activities like robbery, undue appropriation, smuggling, etc.
- Some national regulations (e.g. in Common Law countries like the United Kingdom) are too permissive, so that no irregular commerce takes place but, in contrast, cultural goods are treated almost as ordinary goods, with only a few differences from the general regulation of goods transactions. This favourable system means that a significant portion of heritage can disappear from its country of origin.
- Transnational regulations passed by European Union institutions are often impotent in the fight against illegal commerce in cultural goods. Periodic reports published by the European Commission show that the designed mechanisms are achieving very poor results. It must be emphasised that there is no European Directive or Regulation directly aimed at fighting the plundering of cultural goods and its consequences. The current regulation deals with the exportation of cultural goods and the consequences of breach of the exportation procedure. See Council Regulation (EC) No 116/2009 of 18 December 2008 on the export of cultural goods and Council Directive 93/7/EEC of 15 March 1993 on the return of cultural objects unlawfully removed from the territory of a Member State.
- International regulations to fight against plundering and illicit exportation are also insufficient to achieve their main purpose. A fundamental reason for this lack of effectiveness is the fact that many countries have not signed or ratified the main international agreements passed in order to protect cultural goods from unlawful activities. In particular, countries like the USA and the UK, where most transactions in antiques and works of art take place, have not signed the UNIDROIT Convention on Stolen or Illegally Exported Cultural Objects of 24 June 1995.

As regards Spanish law and the return of cultural goods unlawfully removed from Spanish territory, the deficiencies assessed have been analyzed in my recent article "El comercio de bienes culturales de ilícita procedencia o titularidad controvertida: control, reclamación e impunidad", published in the legal journal Noticias Jurídicas.

III. Possible solutions

Possible solutions can be studied from several perspectives:

- National law: passing new, balanced regulations that combine flexibility and control in transactions in cultural goods, so that they do not impede lawful commerce but without waiving the preservation of historic heritage.
- Transnational, European law: introducing new Regulations and/or Directives dealing with cultural goods of plundered or criminal provenance, and strengthening control measures to be complied with by all operators in the cultural goods market, such as art and antiques dealers, auction houses, art galleries, etc., similar to those used in EU Anti-Money Laundering Regulations.
- International law: encouraging national authorities to pass harmonized and balanced regulations in the sense given at 1) above, and extending control measures to be complied with by all operators in the cultural goods market, similar to those used in worldwide Anti-Money Laundering Regulations.

Lawyer. Recognised specialist in the law applicable to the art market, historical heritage and antiques. He participates in the International Research Project "Right and Beauty. From Common Good to Universal Good" at the Università della Calabria (Italy).

ARTS ECONOMICS 2009. The British Art Market. A Winning Global Entrepôt. Lapada: The Association of Art & Antiques Dealers [online]. [Accessed: 9 January 2015]. Available at: http://www.lapada.org/public/The_British_art_Market.pdf.

COMISIÓN EUROPEA, 2000. Informe de la Comisión al Consejo, al Parlamento Europeo y al Comité Económico y Social sobre la aplicación del Reglamento (CEE) n° 3911/92 del Consejo relativo a la exportación de bienes culturales y de la Directiva 93/7/CEE del Consejo relativa a la restitución de bienes culturales que hayan salido de forma ilegal del territorio de un Estado miembro [online]. 25 May 2000. [Accessed: 6 January 2015]. Available at: http://eur-lex.europa.eu/legal-content/ES/ALL/?uri=CELEX:52000DC0325. COM/2000/0325 final.

COMISIÓN EUROPEA, 2005. Informe de la Comisión al Consejo, al Parlamento Europeo y al Comité Económico y Social Europeo – Segundo informe sobre la aplicación de la Directiva 93/7/CEE del Consejo, relativa a la restitución de bienes culturales que hayan salido de forma ilegal del territorio de un Estado miembro [online]. 21 December 2005. [Accessed: 6 January 2015]. Available at: http://eur-lex.europa.eu/legal-content/ES/ALL/?uri=CELEX:52005DC0675. COM/2005/0675 final.

COMISIÓN EUROPEA, 2009. Informe de la Comisión al Consejo, al Parlamento Europeo y al Comité Económico y Social Europeo – Tercer informe sobre la aplicación de la Directiva 93/7/CEE del Consejo, relativa a la restitución de bienes culturales que hayan salido de forma ilegal del territorio de un Estado miembro [online]. 30 July 2009. [Accessed: 6 January 2015]. Available at: http://eur-lex.europa.eu/legal-content/ES/ALL/?uri=CELEX:52009DC0408. COM/2009/0408 final.

OJ L 39, 10.2.2009, p. 1–7. Official Journal L 074, 27/03/1993, p. 0074–0079. A new Directive was passed in 2014 and will enter into force in 2015: Directive 2014/60/EU of the European Parliament and of the Council of 15 May 2014 on the return of cultural objects unlawfully removed from the territory of a Member State and amending Regulation (EU) No 1024/2012, Official Journal L 159/1, 28.5.2014. See http://noticias.juridicas.com/
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9590648412704468, "language": "en", "url": "https://smartasset.com/investing/current-ratio", "token_count": 1116, "fin_int_score": 5, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0380859375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:d1c4d401-c822-418d-b751-1abad5d28c24>" }
The current ratio is an accounting measure that tells you if a company can pay such short-term obligations as payroll and rent for the year. A good metric for investors to use when analyzing securities, the current ratio is a relatively simple calculation: current assets divided by current liabilities. What’s not so simple is how to use the ratio. If this sounds like more than you want to take on, a financial advisor can help you with your investments. To find the right one for you, use SmartAsset’s free financial advisor matching service. What Is the Current Ratio? The current ratio is a measure of how likely a company is to be able to pay its debts in the short term. Short-term debts are generally money owed within a year. The current ratio essentially indicates liquidity. Below 1 means the company will not be able to pay its debts within the year. The formula for calculating the current ratio is: Current Ratio = Current Assets/Current Liabilities As an example, let’s say The Widget Firm currently has $1 million in cash and easily convertible assets (e.g., inventory) and $800,000 in debts due in the year (e.g., payroll and taxes). We can plug this information into the formula to find the current ratio. Current Ratio = $1,000,000/$800,000 Current Ratio = 1.25 Now that you know the current ratio, you can use it as part of your analysis of the company. The following section explains exactly how to use the current ratio in your analysis. How to Use the Current Ratio It is easy to calculate the current ratio, but it takes a bit more nuance to employ it as a method of stock analysis. There isn’t a specific number you are looking for when calculating the current ratio. However, there are some basic inferences you can take from the current ratio once you’ve calculated it. For instance, if the current ratio is less than 1, this means that the company’s outstanding debts owed within a year are higher than the current assets the company holds. This is generally not a good sign, as it could mean the company is in danger of becoming delinquent on its payments, which is never good. Keep in mind, though, that the company may simply be awaiting a big influx of cash, whether in the form of a new investment or payment for a big sale of the product it manufactures. A particularly high current ratio also may not be a good sign. What makes for a high current ratio varies from industry to industry (restaurants tend to have lower current ratios than technology companies). If the current ratio is close to five, for instance, that means the company has five times as much cash on hand as its current debts. While the company is obviously not in danger of going bankrupt, it has a huge amount of cash or easily convertible assets simply sitting in its coffers. A company could reinvest that money. It could hire more employees, build a new facility or expand its product line. The fact that it is not doing so could be signs of mismanagement or inefficiency. Interpreting Changes to the Current Ratio While the current ratio at any given time is important, analysts and investors should also consider how the number has changed over time. That could show how the company is changing and what trajectory it is on. If a company’s current ratio goes up over time, this could mean that it is paying off its debts or bringing in new revenue streams. Investors and analysts should investigate to see what caused the change. 
It's possible a new management team has come in and righted the ship of a company that was in trouble, which could make it a good investment target. A current ratio going down could mean that the company is picking up new or bigger debts. It could mean its revenue has gone down. Again, analysts and investors should investigate the cause to determine whether the company is a good investment.

The Bottom Line

The current ratio compares a company's current assets to the debts that it will have to pay within the year. It is simply calculated by dividing a company's current assets (cash and easily convertible assets) by its short-term debts (accounts payable for the year). Once you've calculated the current ratio, you can draw inferences about the company. These may help you decide whether or not it is a good investment. Don't just look at the current ratio at any given time, though. Also consider how the current ratio has changed over time and what that might mean for a company's trajectory.

- The current ratio is just one of many metrics to consider before buying a stock. To make sure you're not missing anything, consider hiring a financial advisor. SmartAsset's free financial advisor matching service can help you find the right one. You answer a few questions. We match you with up to three advisors in your area, all fully vetted and free of disclosures. You then talk to each advisor and decide how to move forward.
- Capital gains taxes are a part of investing. If you want to see how much you may owe when you sell, use SmartAsset's free capital gains tax calculator.
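Here is the Widget Firm calculation above as a short Python sketch. The interpretation thresholds are illustrative rules of thumb, since, as noted above, what counts as "high" varies by industry:

```python
def current_ratio(current_assets, current_liabilities):
    """Current ratio = current assets / current liabilities."""
    return current_assets / current_liabilities

def interpret(ratio, high_threshold=4.0):
    # high_threshold is an illustrative cutoff, not an industry standard
    if ratio < 1.0:
        return "may struggle to pay debts due within the year"
    if ratio > high_threshold:
        return "large cushion; capital may be sitting idle"
    return "short-term obligations appear covered"

ratio = current_ratio(1_000_000, 800_000)   # The Widget Firm example
print(ratio)             # 1.25
print(interpret(ratio))  # short-term obligations appear covered
```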
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9695780873298645, "language": "en", "url": "https://www.finimpact.com/term-loan/", "token_count": 632, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.04150390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e9c8ea4d-872b-4230-b519-98b4811bfd96>" }
What is a Term Loan?

A term loan is an upfront payment of cash which is paid off over a certain time period. Term loans can last from one to thirty years, though they are most commonly between one and ten. As term loans are often medium- to long-term options, they usually offer lower interest rates than other types of loan with shorter repayment periods. Term loans are usually secured by collateral, and rates can vary from 6-30% depending on the credit history of the applicant. They can be approved within 48 hours in some instances. This type of financing is sometimes called project financing.

Term loans are generally for amounts ranging from $20,000 to $500,000. They can be paid back at various intervals, such as once a week, twice a month, once a month, quarterly, or whatever is agreed by the borrower and lender; in practice it is often once a month or every 3 months. Payments are the same throughout the length of the loan. Term loans can have fixed or floating interest rates, as well as compound interest. The rate is most often fixed. Longer repayment periods have the advantage of flexible financing, but the total amount paid back including interest will be greater.

Who Should Apply for a Term Loan?

Term loans are ideal for businesses that need to purchase an expensive asset. It might be an upfront cost, but the asset could be essential to the running of a business, such as a high-quality kitchen for a restaurant. Term loans are an easy way for businesses to gain access to finance while still keeping their cash flow healthy. Any business that needs access to finance for a large upfront expense should consider a term loan. It can also be used for less tangible upfront costs, such as websites, employee training or paying off other debts. A term loan is one of the most common forms of debt raised by small businesses, often referred to as term finance.

Default and Liability

Failure to repay a term loan is not a criminal offense. It is a matter for the civil courts. Only when the loan is used for purposes other than what is stipulated in the contract will the resulting action be deemed a criminal one. A debt default generally results in civil proceedings where the borrower will lose the collateral and, potentially, the business. For borrowers who sign a letter of guarantee, personal belongings may also be lost. If the debt is not secured by collateral (most term loans are), then the lender can petition for bankruptcy, so that the company's assets are used to repay the debt. In the event of bankruptcy, debtholders are paid first.

Pros and Cons of Term Loans

Pros:
- Term loans can be obtained in as little as 24 hours in some cases.
- Debt financing lenders have no ownership in the borrowing company.
- The maturity of the debt can generally be altered.
- Long terms up to 30 years are available.
- Interest on debt is tax deductible.

Cons:
- Collateral is nearly always required.
- Failure to pay fixed interest can lead to bankruptcy.
- If inflation remains low for prolonged intervals, the real cost of repayment can be more than expected for fixed rates.
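Since payments on a fixed-rate term loan are the same every period, the payment follows the standard amortization (annuity) formula M = P·r / (1 − (1 + r)^−n), where P is the principal, r the periodic rate and n the number of payments. A small Python sketch with purely hypothetical loan figures:

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment on a fully amortizing term loan."""
    r = annual_rate / 12            # periodic (monthly) interest rate
    n = years * 12                  # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical: $100,000 borrowed over 5 years at a 10% annual rate.
pmt = monthly_payment(100_000, 0.10, 5)
print(round(pmt, 2))                   # ~2124.70 per month
print(round(pmt * 60 - 100_000, 2))    # ~27,482 in total interest over the term
```

The second print line illustrates the trade-off described above: stretching the same principal over a longer term lowers each payment but raises the total interest paid.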
{ "dump": "CC-MAIN-2020-29", "language_score": 0.946090817451477, "language": "en", "url": "https://cryptocoremedia.com/blockchain-limitations-dlt/", "token_count": 1574, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2451171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ff446d37-65c0-4d64-8cf7-937e3014e599>" }
Blockchain Is Great, But Let's Look at the Other Side of the Coin: What Are the Limitations?
People admire decentralized technologies such as the blockchain. They are expected to change the world, and centralization is considered wrong. But the problem is that it is still hard to find any truly decentralized solution implemented in the real world. There is no doubt that blockchain allows for quite a lot of advantages. Let's outline some of them:
- Makes it simple to verify the integrity of the 'database'.
- Allows for timestamping of all the changes.
- Provides the ability to back up data easily, in real time.
- Delivers consensus in a decentralized environment.
- Makes it simple to audit the "book of records" in real time.
Note: These advantages are possible only with a correctly designed protocol. Undoubtedly, these qualities make the technology unique and give it extremely high potential for the future. However, like every new approach, it has a number of disadvantages that should be mentioned:
- Governance issue
- Responsibility issue
- Constantly growing volume of data
- Capacity issue
- Confirmation time issue
Before applying blockchain in practice, there is an essential activity to be accomplished: specifying the governance. We are dealing with a decentralized environment where decision making is carried out on the basis of consensus, because participants do not trust each other. If there were a certain entity in charge of upgrading the system, dealing with difficulties in it and so on, the system would be centralized. Thus, it is extremely hard to determine the correct conditions from the very start: conditions that would allow accurate decision making in a network that has no responsible party, only a 'decentralized community' that decides what to do by means of consensus.
Ethereum split as an example
We won't go into many details here, just the essence. Hackers found a vulnerability in an Ethereum smart contract (the DAO) and stole $50 million in Ether. Ethereum co-founder Vitalik Buterin proposed to 'take it back' by artificially upgrading the protocol, that is, by changing the past state of Ethereum's blockchain to 'create' a new one that no longer contains the 'bad transaction'. The community was the one to decide whether to do it or not; anyone who agreed had to upgrade their node software. Naturally, the community split in two: those who decided to stay with the original chain (now called Ethereum Classic) and those who accepted the change (the majority, so that chain is currently the main chain of Ethereum).
As you can see, the situation is quite controversial. On the one hand, the decision was made by the community, in a decentralized way. On the other hand, there are plenty of arguments about the propriety of such a solution. And that is how the problem of governance brings us to another issue: responsibility.
People are used to centralized governance systems, and that's a fact, because you can always find someone to blame. If you bought a car and it broke down after a week of use, you go back to the seller and claim your money back. The same thing happens when you use the service of a centralized entity: if you have a problem with your bank account, you go to the closest branch office and they help you fix the issue.
In the case of a decentralized network, you take all the responsibility, because you are involved in managing the state of the system together with everyone else, while the rules by which the system operates are the only guarantor of integrity. Each participant verifies the correctness of everyone else's activity. Even a court is unable to enforce a decision that contradicts the protocol rules, because the management of digital assets is, so far, beyond the power and authority of the courts.
Conclusion: The concept of responsibility in decentralized systems is extremely vague. People should accept the fact that each participant takes their own risks, while governments should work out new legal models, sometimes even laws, that take account of the essence of such behaviour.
Constantly growing volume of data
If we draw a parallel between centralized accounting systems and decentralized ones, there is one substantial difference. A centralized ledger stores only the final state of the database. For example: Alice sends $1 to Bob; her account no longer holds the data referring to that $1, while Bob's does. With a blockchain, we have a chain of blocks (sorry about the tautology) that stores the whole history of all the changes that have ever happened over the entire existence of the network.
To conclude, a constantly growing volume of data is not a critical limitation, but rather a peculiarity of the technology that calls for a different approach.
Most commonly, a decentralized network has lower capacity than a centralized one. With a centralized server that processes all the data, services such as Mastercard or Visa are able to verify thousands of transactions per second, while in a decentralized system: 1) the data must be spread across all the participants of the network; 2) all participants must reach consensus about this data. On top of that, the necessity of storing a large amount of data imposes some additional limitations. As a result, we have two factors which eventually lead to slower performance of the system. Bitcoin's throughput, for instance, is as low as around 3 tps.
Confirmation time issue
It is quite obvious that the delays that occur because participants must reach consensus among each other directly affect the response time of the whole network. A fully confirmed transaction on the Bitcoin blockchain can take up to about an hour, or even weeks. One hour is roughly the time by which everyone has definitely reached consensus about your transaction. (Five blocks on top of the one in which your transaction was included, six confirmations in total, is considered an optimum result, after which you can be fully confident that everyone has agreed on it.)
Nevertheless, it is worth noting that the problems of capacity and confirmation delays in decentralized networks such as Bitcoin, Litecoin and Ethereum are largely addressed by solutions such as payment channels and the Lightning Network, whereas some consensus protocols do not solve the underlying throughput problem but considerably improve performance. The BitShares protocol, for example, already enables a fully decentralized payment network that competes with centralized services such as Visa and Mastercard.
To sum up, all the challenges that blockchain technology is currently going through are, one way or another, related to the infancy of the approach. Just as much, it is people who need to mature and get used to the new way things work.
It will undoubtedly be a good lesson for society, especially for those at the top of it, those who hold the steering wheel. This article has been edited and published by CCMedia Editor Omar Faridi. It was authored by Dr. Pavel Kravchenko, the founder of Distributed Lab, blogger, cryptographer and PhD in Information Security. Pavel has been working in the blockchain industry since early 2014 (Stellar). Pavel's expertise is mostly focused on cryptography, security and technological risks, and tokenization. He sees the company's mission as the creation of an open ecosystem that uses a uniform payment and asset management protocol, the so-called "financial web".
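As a short illustration of the article's first listed advantage, that blockchain "makes it simple to verify the integrity of the database", here is a toy hash chain in Python. It is not a real blockchain client (no consensus, no mining, no network); it only demonstrates why tampering with recorded history is easy to detect.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic serialization so the same block always hashes the same way.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev": prev_hash}

chain = [make_block("genesis", "0" * 64)]
for tx in ["Alice pays Bob $1", "Bob pays Carol $1"]:
    chain.append(make_block(tx, block_hash(chain[-1])))

def verify(chain: list) -> bool:
    """Editing any earlier block changes its hash and breaks every later link."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))                               # True
chain[1]["data"] = "Alice pays Bob $1,000,000"     # tamper with history
print(verify(chain))                               # False: tampering detected
```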
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9394393563270569, "language": "en", "url": "https://izzihub.com/bills-of-exchange-act-1881/", "token_count": 413, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1533203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:42dbaf0c-52bb-4b2b-aeec-6855a15b213e>" }
Every businessman, irrespective of the type of business, wants to sell products or goods for cash, mainly to put the cash to work in the business for greater advantage instead of leaving it tied up in different places, and also to minimize the risk of bad debts. Bills of exchange (BOE) help protect businessmen from doubtful debts.
Definition as per the Bills of Exchange Act 1882
As per UK law: "An unconditional order in writing, addressed by one person to another, signed by the person giving it, requiring the person to whom it is addressed to pay on demand or at a fixed or determinable future time a sum certain in money to, or to the order of, a specified person, or to the bearer."
Example: XYZ (the buyer) wants to purchase goods from ABC Co (the seller) but has no money. By mutual consent they enter into the negotiable act of a bill of exchange: ABC Co (the seller) draws an order in the name of XYZ to pay the amount on the agreed date and sends it to XYZ for acceptance. XYZ accepts it and returns it to the drawer (ABC Co, the seller). The bill can be transferred or endorsed to any number of persons, but only before the maturity date.
Types of Bills of Exchange
- Inland Bills
- Foreign Bills
Terms Used in the Bills of Exchange Process
- Drawer: the one who draws the order to pay the debt.
- Drawee: the one on whom the bill of exchange is drawn, and who accepts to pay the amount.
- Holder in due course: the possessor of the bill.
- Endorser: one who transfers his right to collect the amount to another, that is, who transfers the bill to another party against cash or goods.
- Endorsee: the one who receives the rights to the bill from the endorser.
- Unconditional means that the order should be clear and not based on any event or act. For example, the drawee cannot undertake to pay the debt only if the dollar rate increases or if the government remains unchanged; in simple terms, a fixed amount should be paid on the particular date.
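The lifecycle just described (draw, accept, endorse) can be modelled schematically. The class below is purely illustrative; its names follow the article's terms, and the parties, amount and dates are invented.

```python
from dataclasses import dataclass, field

@dataclass
class BillOfExchange:
    drawer: str                 # seller who draws the order
    drawee: str                 # buyer ordered to pay
    amount: float
    maturity: str               # agreed payment date
    accepted: bool = False
    endorsements: list = field(default_factory=list)

    def accept(self) -> None:
        """The drawee signs, becoming liable to pay at maturity."""
        self.accepted = True

    def endorse(self, endorser: str, endorsee: str) -> None:
        """The holder transfers the right to collect (allowed before maturity)."""
        if not self.accepted:
            raise ValueError("The bill must be accepted before endorsement.")
        self.endorsements.append((endorser, endorsee))

    def holder(self) -> str:
        return self.endorsements[-1][1] if self.endorsements else self.drawer

bill = BillOfExchange(drawer="ABC Co", drawee="XYZ", amount=5000, maturity="agreed date")
bill.accept()                       # XYZ accepts and returns the bill to ABC Co
bill.endorse("ABC Co", "Bank")      # ABC Co endorses it to its bank against cash
print(bill.holder())                # Bank, the current holder in due course
```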
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9380719661712646, "language": "en", "url": "https://lincs.ed.gov/professional-development/resource-collections/profile-798", "token_count": 389, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.037841796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a8e97798-aa00-4a3b-9615-2d8c47a74d5d>" }
Financial Literacy Learning Plans
These learning plans, designed for teachers and tutors, facilitate the teaching of financial literacy to adult ESL learners. A free account must be created to access the learning plans. This resource guides teachers and tutors in creating lessons around financial literacy topics. It also guides them in individualized professional development, helping them combine life skills instruction with language skills development. The learning plans have two major sections: Banking and Credit, and Loans and Debt. This is an excellent resource for teachers and tutors who need or want to support their adult learners in developing basic financial literacy. It does all of the legwork of locating and linking to quality material elsewhere on the web, while also providing clear, concise guidance on setting goals, assessing learner needs, and developing activity sequences that build on learners' existing knowledge. The site itself is well organized, attractive, and easy to use. Each of the two major sections combines guidance on setting learning goals with a broad selection of activities and links to other web-based resources. The links take the user to interactive sites that are both informative and fun for learners and teachers alike. Sources for external resources include government agencies such as the Federal Trade Commission, private-sector educational organizations such as GCFLearnFree.org, and adult education programs such as the Minnesota Literacy Council. This site includes links to information created by other public and private organizations. These links are provided for the user's convenience. The U.S. Department of Education does not control or guarantee the accuracy, relevance, timeliness, or completeness of this non-ED information. The inclusion of these links is not intended to reflect their importance, nor is it intended to endorse views expressed, or products or services offered, on these non-ED sites.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9312669634819031, "language": "en", "url": "https://papers.ssrn.com/sol3/papers.cfm?abstract_id=147163", "token_count": 227, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.027099609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3dfa7f99-34b8-48b4-9bb1-e8771bca6024>" }
What Will Technology Do to Financial Structure? 41 Pages Posted: 16 Feb 1999 Last revised: 7 May 2000 Date Written: January 1999
This paper looks at how advances in information and telecommunications technologies have been changing the structure of the financial system by lowering transaction costs and reducing asymmetric information. Households and smaller businesses can now raise funds in securities markets as financial institutions have become better at unbundling risks, while financial products can be distributed more efficiently through electronic networks. These changes have reduced the role of traditional financial intermediaries and improved overall efficiency by lowering the costs of financial contracting. Despite these benefits, technological progress presents policymakers with some important challenges. First, as markets for financial products become larger and more contestable, defining geographic and product markets narrowly becomes more problematic. Second, financial consolidation and the trend towards new activities of financial intermediaries require the exploration of new methods to preserve the safety and soundness of the financial system. A combined system of vigilant supervision and constructive ambiguity in dealing with failures of larger institutions should be capable of mitigating the potential for increased risk-taking and help preserve the health of the financial system.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9546136856079102, "language": "en", "url": "https://www.beefmagazine.com/grazing-systems/3-grazing-ratios-you-should-obsess-over-be-profitable", "token_count": 1582, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.1962890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3c105e6b-1a7d-4ef0-9f2d-3c474ff2527d>" }
In a recent short video conversation led by my good friend Allen Williams about AMP (adaptive multi-paddock) grazing, Allen asked me what effects AMP grazing would have on livestock economics. Good question—and it deserves a good answer. Any grazing, whether good or poor, has an effect on the soil—either positive or negative. There are no neutrals. Grazing fits into a total management scheme or system. To be effective, we must manage holistically or, as some people say, use a systems approach. In my articles, I have referred to “Five Essentials for Successful Ranch Management.” The first “essential” is that our approach to management must be both integrative and holistic. The problem most of us face in trying to use a systems approach is that we fail to do enough integration of facts, ideas, principles, possible methods, etc. to enable good understanding of the problems or opportunities we are trying to address. This article is an attempt to help readers understand some of the relationships between how we graze and the potential economic results. Grazing can have a dramatic and profound effect on three key ratios. Now remember that this is a systems or holistic approach. There are other items to manage that also affect these ratios—not just grazing. ∗ Acres per cow is a measure of ranch stocking rate. You can reduce acres per cow in two ways—reduce the size and milking ability of the cows, which reduces the nutrient requirements, or increase the productivity of the land. Both are economically important and effective, but grazing to improve the soil has tremendous power. - Some first attempts at better grazing don’t have enough paddocks and don’t allow adequate recovery times between grazes. This may result in no improvement or even negative effects. - Other attempts using a good number of paddocks and especially adequate recovery times begin to yield positive results. As the intensity (number of paddocks and stock density) increases, the increase in forage productivity accelerates. Then you begin to see what Allen Williams describes as “compounding and cascading effects.” One positive effect builds on itself and leads to other positive effects. - It’s difficult to explain in a few words; but, as the stock density increases because of more and smaller paddocks, more litter is laid on the soil surface, grazing is more uniform, grazing efficiency improves, manure and urine are more evenly distributed and adequate recovery time can be accommodated. These changes lead to improvement in ecosystem functions. - Rainfall and snowmelt infiltration rates are improved and water holding capacity of the soil is improved, leading to increased plant growth through a greater portion of the year. - Nutrient cycling improves because more plant material is returned to the soil either as manure or trampled plant material. This feeds soil microbes which in turn feed plants and also further improves soil moisture-holding capacity. - Then photosynthesis becomes more efficient because of more green leaves during a greater portion of the year which further improves the water and mineral cycles. - While all this is going on, you start to notice greater diversity in the plant, insect, bird, small animal and game animal community. Diversity in the plant community produces different types and depths of rooting which encourages greater diversity in soil microbes and accesses water and minerals from deeper in the soil. This diversity also attracts a greater variety of insects. 
- All of this variety results in symbiotic relationships between plants which make the whole more productive. The variety of soil microbes, insects and birds provides plants and animals protection from predators (usually insects rather than coyotes or wolves) and disease. This is just the beginning of what happens in the soil to cause great changes and improvement in soil and plant productivity. I hope it gives you an idea of the complexity of interconnectedness that exists between many parts of the biological system that drives land and pasture productivity. Think of it this way—if you could spend $50-100 per acre on fence and water development and double your stocking rate (cut acres per cow in half), you essentially would have purchased another ranch for $50 to $100 per acre. You don't pay any more property tax, and you shouldn't have to add employees, vehicles, saddle horses, tools or equipment. This is economic power; a short sketch below makes the arithmetic concrete. ∗ Cows per person (or labor hours per cow, for small ranches with less than one full-time person or for ranches that spread time across several enterprises) is another key driver of profitability. Most graziers using adaptive multi-paddock grazing put as many cattle in one herd as possible rather than having them scattered across several pastures with continuous or season-long grazing. Fewer herds simply make it easier to check cattle and make sure they are healthy, have water and are where they belong. Over time, good grazing will produce a healthier feed source that will reduce pests and improve the overall health of the animals. Good grazing coupled with good herd and pasture organization will enable a significant reduction in labor requirement. I know several ranchers who have doubled carrying capacity through better grazing and have not added labor except a few times each year to work cattle. A few hire contractors to develop water, but most build their own fence, mostly simple electric fence.
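Here is the back-of-the-envelope version of the stocking-rate point above. Every number is a hypothetical placeholder; substitute your own acreage, costs and land prices.

```python
# All figures are hypothetical placeholders.
acres = 2000
cows_before = 200                    # 10 acres per cow
cows_after = 400                     # stocking rate doubled: 5 acres per cow
infrastructure_per_acre = 75         # fence + water, midpoint of the $50-100 range

investment = acres * infrastructure_per_acre
added_cows = cows_after - cows_before

print(f"Acres per cow: {acres / cows_before:.1f} -> {acres / cows_after:.1f}")
print(f"Investment: ${investment:,} for {added_cows} added cows")
print(f"Cost per added cow: ${investment / added_cows:,.2f}")

# Compare with buying the equivalent grazing land at, say, $1,000/acre:
print(f"Equivalent land purchase: ${added_cows * 10 * 1000:,}")
```

At these assumed prices, $150,000 of fence and water buys the same added capacity as a $2,000,000 land purchase, which is the sense in which good grazing "buys another ranch" for $50-100 per acre.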
If pasture lease costs have become so high in your area that a dry-lot is cheaper, most of the people paying those leases will have very poor gross margins unless they are leasing by the acre and can greatly increase carrying capacity with good grazing. Good grazing simply makes it possible to graze more and feed less which almost always saves money. My understanding of ranch economics and finance tells me that to be most profitable on a continuing basis, you should reduce overheads as much as possible, market well, use direct inputs (mostly feed and vet costs) very wisely and then focus on (almost become obsessive about) these three ratios. Teichert, a consultant on strategic planning for ranches, retired in 2010 as vice president and general manager of AgReserves, Inc. He resides in Orem, Utah. Contact him at [email protected].
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9447121620178223, "language": "en", "url": "https://www.btcwonder.com/find-it-hard-to-understand-bitcoin-here-is-everything-you-need-to-know/", "token_count": 630, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.443359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:392d4011-2dc7-44d6-bba6-a881683892c0>" }
Bitcoin is the most widely used form of digital currency in the world. It is a kind of currency that is controlled and stored entirely by computers spread across the Internet. More individuals and organizations are starting to use this currency. Unlike a plain U.S. dollar or euro, bitcoin is also a payment system, much like PayPal. (Learn how to buy Bitcoin from PayPal in two minutes.) You can hold on to it, spend it or exchange it, and it can be moved around as efficiently and effortlessly as sending an email. For those seeking to make online transactions privately, Bitcoin is a strong answer: transactions are tied to addresses rather than names, so users enjoy a high degree of pseudonymity. Below, we discuss some important points about how Bitcoin works and whether it can serve as an alternative to traditional currency.
Important Points to Understand Bitcoin:
- Bitcoin transactions are stored in a public ledger called the blockchain. These transactions are recorded online, and anyone with access to the blockchain can view them, which makes the process more transparent. The transparency also draws new interest to the economy and discourages illegal uses of the currency, such as drug rings.
- It makes sense to say that Bitcoin is more than just a currency. Unlike traditional currency, it can be transferred from one country to another without passing through banks or exchange controls. It dissolves global barriers and frees currency from the control of federal governments. However, the value of Bitcoin is still quoted chiefly against the U.S. dollar.
- Bitcoin is open-source software, and the technology behind it is quite interesting. The currency works under the laws of mathematics and is maintained by a community of highly skilled developers. The software runs on thousands of machines simultaneously, operating in different parts of the world, making it one of the most unusual programs out there.
- Bitcoin was created and released onto the internet 8 years ago by an anonymous programmer known as "Satoshi Nakamoto". The software is designed to run on many machines, called bitcoin miners, simultaneously. These machines can be operated by anyone on the planet with a basic understanding of a PC.
- Bitcoin miners generate new coins, and the protocol is designed to mine no more than 21 million coins, with the last of them created around the year 2140. This slow expansion of the coin supply encourages miners to keep the system growing.
- When new coins are generated, they are given to the miners. The miners keep track of all the bitcoin transactions and add them to the blockchain ledger; in exchange, they are rewarded with newly created bitcoins. The block reward currently stands at 25 bitcoins, which is paid out to the world's miners about six times per hour. Those rates change over time, as the reward halves periodically.
These are some of the facts that make Bitcoin unique in comparison to other currencies. Taking everything into account, bitcoin pushes the limits of innovation, much like PayPal at its outset. However, the marketplace will have to decide whether the risks related to Bitcoin as a currency and payment system make sense in the longer run.
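The 21 million cap follows from the halving schedule: the block subsidy started at 50 BTC and halves every 210,000 blocks (roughly every four years at about six blocks per hour). The sketch below sums that series; satoshi-level rounding is ignored for simplicity.

```python
def total_supply(initial_reward: float = 50.0, blocks_per_era: int = 210_000) -> float:
    """Sum the geometric series of block rewards until below 1 satoshi (1e-8 BTC)."""
    supply, reward = 0.0, initial_reward
    while reward >= 1e-8:
        supply += reward * blocks_per_era
        reward /= 2
    return supply

print(f"{total_supply():,.0f} BTC")   # ~21,000,000
```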
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9426926970481873, "language": "en", "url": "https://clikngo.com/largest-sector-of-pakistan-economy-biology-essay/", "token_count": 9170, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.29296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c289c580-f068-4141-8cbf-5ae28ff2894d>" }
Agriculture is the largest sector of Pakistan's economy and currently contributes 21.8 percent of gross domestic product. Within agriculture, livestock is the most important sub-sector, contributing 51.8 percent of agricultural value added. Livestock also contributes significantly to national exports, accounting for 8.5-9.0 percent of total exports (Economic Survey of Pakistan 2008-09). The sector provides raw material for industry and serves as a social safety net for the rural poor, who can use it as a source of earnings at times of need. 44.7 percent of people raise 2 to 3 cattle/buffaloes and 5 to 6 sheep/goats in their backyards and derive 20 to 25 percent of their income from them. All of this enhances the role of livestock in agriculture (Bhattacharya, S. 2008). Livestock include cattle, buffaloes, sheep, goats, camels, horses, asses and mules. These animals provide meat, milk and much more. During the last five years, the combined population of cattle, buffalo, sheep and goat increased from 113 million (1998-99) to 225 million (2008-09), indicating a total increase of 12 million, or 2.4 million head per annum (Source: Economic Survey 2008-09). Among livestock, cattle are the most common type of large domesticated animal. About 800 breeds of cattle are recognized all over the world, some adapted to the local climate, others bred by humans for specialized purposes (Purdy, Herman, Breeds of Cattle, 2008, 2nd ed.). Breeds fall into two main categories. Bos indicus (or Bos taurus indicus) cattle, also called zebu, are adapted to hot climates. Bos taurus (or Bos taurus taurus) are the typical cattle of Europe, north-eastern Asia, and parts of Africa and are adapted to cooler climates. Taurus/indicus hybrids are widely bred in many warmer regions, combining characteristics of both types. In some parts of the world further species of cattle are found (both as wild and domesticated animals), and some of these are related so closely to taurine and indicus cattle that interspecies hybrids have been bred (Breeds of Cattle, Oklahoma State University OSU, 2006). Cattle belong to the subfamily Bovinae, are the most widespread species of the genus Bos, and are most commonly classified as Bos primigenius. Cattle are raised as livestock for meat, milk and draught purposes. Other products include leather and dung for manure or fuel. In some countries, such as India, cattle are sacred. It is estimated that there are 1.3 billion cattle in the world today. In 2009, cattle became the first livestock animal to have its genome mapped. Among cattle breeds, our main emphasis is on the Red Sindhi and Tharparkar. Some breeds are used for multiple purposes, i.e. dairy, beef, draught, sport, etc. These two breeds are dual-purpose: the Red Sindhi is kept for dairy and beef, while the Tharparkar is kept for dairy and draught, and both come under the category of zebu (History and Development of Zebu Cattle in the United States, James O. Sanders, J Anim Sci 1980, 50:1188-1200). Zebu cattle originated in Southwest Asia; their ancestors were non-humped, and they have evolved from three breeds of Indian cattle. Zebu cattle belong to the Bos primigenius species of cattle.
They were taken to Africa at an early date and, within the last 100 years, have been exported to Brazil and the US. Zebu cattle are usually red or grey in colour, are horned, have loose skin and large ears, and have a hump above their shoulders. The breed is popular for its milk, meat and draught purposes. In India they are sacred and are used only for draught and milk. In Brazil and other meat-producing countries they are raised largely for beef, as they cope better than European breeds in sub-tropical environments. Today the zebu is present on all continents, mainly in India and Brazil, which has the largest commercial herd in the world, with 155 million head. India has over 270 million zebu and the United States has over 2 million. The Red Sindhi is the most popular of all zebu dairy breeds. The breed originated in the Sindh province of Pakistan. Red Sindhi are widely kept for milk production across India, Pakistan, Bangladesh, Sri Lanka and other countries. They have been used for crossbreeding with dairy breeds of temperate (European) origin in many countries to combine their tropical adaptations (heat tolerance, tick resistance, disease resistance, fertility at higher temperatures, etc.) with the higher milk production found in temperate regions. The breed has been crossed with Jerseys in many places, including India, the United States, Australia and Sri Lanka. Other breeds it has been crossed with include the Holstein-Friesian, Brown Swiss and Danish Red. It has also been used to improve beef and dual-purpose cattle in many tropical countries, as it is sufficiently meaty to produce good beef calves in such crosses, and the high milk production helps give a fast-growing calf that is ready for market at one year. It is somewhat smaller than the very similar Sahiwal and produces a little less milk per animal as a result. The resulting cows have characters of both; cows that are three-quarters Sahiwal and one-quarter Red Sindhi cannot be distinguished from pure Sahiwal cattle. The Red Sindhi ranges in colour from a deep reddish brown to a yellowish red, but is most commonly a deep red. They are distinguished from the other dairy breed of Sindh, the Tharparkar or White Sindhi, both by colour and by form: the Red Sindhi is smaller, with a more typical dairy form and short, curving horns, while the Tharparkar is taller, with a shape more typical of zebu draught breeds and longer, lyre-shaped horns. Under good management conditions the Red Sindhi averages over 1,700 kg of milk after suckling its calf, and under optimum conditions milk yields of over 3,400 kg per lactation have been recorded. The Tharparkar is a Bos indicus breed used for milk production and as a draught animal. The Tharparkar came into prominence during the First World War, when some animals were taken to supply milk for the Near East army camps. (Genus Bos: Cattle Breeds of the World, 1985)
In the past these cowss have been known as White or Gray Sindhi, since they are native to the Province of Sind and similar in size the Red Sindhi: this name, nevertheless, is no longer used. The Thari is non a homogenous strain, but that it has the influence of the Kankrej, Red Sindhi, Gir and Nagori strains. Average animate beings of the Tharparkar strain are deep, strongly built, moderate-sized, with consecutive limbs and good pess, and with an qui vive and bouncy passenger car. Thari cowss are said to be really stalwart and immune to several tropical diseases but definite day of the month is missing. Although animate beings of the strain are first-class foragers and can stand the asperities of climatic and environmental conditions, they have non been used chiefly as a beginning of meat, and breeders have given small attending to meat qualities. ( Joshi, N.R. , Phillips, R.W. ( 1953 ) Zebu Cattle of India and Pakistan, FAO Agriculture Studies No. 19, Publ. by FAO, Rome, 256 pp ) . Present survey is designed to qualify these two Pakistani cowss strains ( Red Sindhi and Tharparkar ) genetically, it is indispensable to understand their familial architecture and relationship among different strains. This depends on the cognition of their familial construction based on molecular markers like D-loop and Cytochrome b part of mitochondrial DNA. Mitochondrial DNA ( mtDNA ) and cytochrome B cistron is widely used as molecular tool in phylogeography and in the illation of human evolutionary history, in sensing of the domestication of farm animal and in forensic scientific discipline. In worlds and other craniates the popularity of mtDNA can be partly attributed to an premise of rigorous maternal heritage, such that there is no recombination between mitochondrial line of descents Hence the nowadayss survey is designed to see the evulotionary relationship of two Pakistani cowss strains ( Red Sindhi and Tharparkar ) with the following aims: To place breed differences of two Pakistani cowss strains ( Red Sindhi and Tharparkar ) through Single Nucleotide Polymorphisms ( SNPs ) sensing in mitochondrial D-loop and Cytochrome B part. To analyze the evulotionary relationship of two Pakistani cowss strains ( Red Sindhi and Tharparkar ) . To analyze the non cryptography and coding part of mitochondrial D- cringle and cytochrome B cistron severally REVIEW OF LITERATUR Hauswirth et al. , ( 1980 ) We have determined the location of the cistrons stipulating the big and little ribosomal RNAs by hybridisation analysis and negatron microscopic observations of R-loop signifiers By Using a physical map of bovid mitochondrial Deoxyribonucleic acid which was derived from the liver of a individual Holstein cow. By utilizing negatron microscopy, the place of the beginning of DNA reproduction ( D-loop ) has been located and besides the way of D-loop enlargement and the mutual opposition of the big and little ribosomal RNA cistrons were determined. Hauswirth et al. , ( 1984 ) Heterogenity of Mitochondrial DNA was observed by utilizing Mitochondrial Deoxyribonucleic acid from bovid tissue contains heterogenous sequences located within an evolutionary conserved cytosine homopolymer sequence near the 5 ‘ terminal of the D-loop part. This portion of the mammalian mitochondrial genome is known to incorporate the beginning of heavy strand DNA synthesis and the major transcriptional booster for each strand. 
Nucleotide sequence analysis of cloned DNA and electrophoretic analysis of appropriate small fragments from animal tissue reveal a population of length polymorphs containing from 9 to 19 C residues. No single length species represents more than 40 percent of the population. These data imply mtDNA sequence heterogeneity, which most likely also occurs intracellularly. The localization of variability to a homopolymer run suggests that replication slippage generated the sequence population. We also report that when recombinant clones containing this region are repeatedly passaged in E. coli, they begin to regenerate length variation similar to that seen in animal mtDNA. King et al. (1987) The genomes of mammalian mitochondria are duplex DNA circles with a conserved region. The two major transcriptional promoters and the origin of DNA replication for one DNA strand are located in a single region which contains no structural genes and occupies about 6 percent of the genome. This region is called the displacement loop (D-loop) region, since it often forms a triplex structure in which the heavy strand of the genome has been partly replicated. The promoters and the sites of initiation of D-loop DNA synthesis have been mapped in the human and mouse genomes and may show limited sequence conservation. We have mapped these sites in the bovine mitochondrial genome. Some features are conserved between all three species. The lack of sequence homology is in contrast to the greater than 80 percent sequence conservation which has been reported in parts of the D-loop region located distal to the origin of DNA replication and far from the transcriptional promoters. These results imply that closely related species may have evolved different means of controlling mitochondrial gene expression. Suzuki et al. (1993) To obtain information on maternal phylogenetic relationships between West African N'Dama (Bos taurus) and East African Zebu (B. indicus) cattle, a study of mitochondrial DNA (mtDNA) polymorphisms was carried out. A relatively large sample size was made possible by using polymerase chain reaction (PCR) amplification of DNA prepared from small blood samples to produce fragments of two known polymorphic mtDNA regions, one within the gene encoding subunit 5 of NADH dehydrogenase and one encompassing the entire D-loop. This approach allowed us to achieve a higher-resolution restriction analysis of mtDNA from more animals and is more appropriate than conventional methods. PCR-amplified mtDNA from 58 animals from five populations was examined at 26 restriction sites with 16 enzymes. In this way 154 nucleotides of mtDNA were scanned for polymorphism. Six polymorphic sites were located by this means, five of which were within the D-loop and one within the NADH dehydrogenase 5 gene. None of the polymorphisms observed could be considered typical of breed or type. Eledath et al. (1996) To characterize 16 Holstein maternal lines, we used allele-specific polymerase chain reaction (ASPCR) and single-strand conformation polymorphism (SSCP). These methods detect polymorphic nucleotides at eight different positions in the displacement loop (D-loop) of bovine mitochondrial DNA (mtDNA): 16022, 16057, 16074, 16231, 16247, 106, 169 and 363. ASPCR analysis of the maternal lines showed variation at nucleotide positions 106, 169 and 363 of the mtDNA.
Within-line variation was observed in five maternal lines for nucleotide 363. SSCP analysis of the mtDNA D-loop region revealed variation that classified the maternal lines into six different genotypes. Based upon the variation observed by ASPCR and SSCP analysis, the animals representing the 16 maternal lines could be assigned to 10 different genotypic groups. These procedures provide a rapid, simple, non-radioactive and reliable method of detecting polymorphism in the D-loop region of bovine mtDNA. Janecek et al. (1996) Nucleotide sequence evolution of the mitochondrial cytochrome c oxidase subunit II (COII) gene was used to examine the molecular phylogenetics and evolution of the Bovinae, a subfamily within the mammalian order Artiodactyla (hoofed mammals, which include cattle, deer, camels, hippopotamuses, sheep, ...). The COII gene was sequenced in representatives of three bovine tribes (Bovini, Boselaphini and Tragelaphini) and the outgroup taxon Capra (subfamily Caprinae). The COII data also supported a close relationship between African and Asian buffaloes. Analysis of nucleotide substitutions in the COII gene prompted a system of differential weighting of nucleotide substitutions for inferring phylogenetic relationships across the range of divergence times examined here (2-20 million years). Rates of evolution in the COII gene were examined and compared with evolutionary rates in mtDNA tRNA/rRNA genes and the D-loop among other artiodactyl taxa. Janecek et al. (1996) To examine relationships between yield traits and mtDNA polymorphism, two independent data files, from the breeding herd of Iowa State University and from six North Carolina herds, were used. Maternal lineages were established. The data from Iowa State University comprised 1,476 records from 602 cows from 29 maternal lineages. Eleven sites of polymorphism were found. An animal model for gene substitution was used to examine the relationship between sequence differences and yield traits. Effects of sequence differences were significant for most traits. Sequence information from the D-loop was available for 12 lineages from North Carolina. The effect of polymorphism at 4 sites was examined using 1,472 records from 668 cows. No significant relationships existed between any of the traits and D-loop polymorphism, but results suggested that an association might exist between polymorphism and milk yield, fat percentage and energy concentration. Whenever a significant relationship was detected, the effect of the mutation (rare genotype) was detrimental. Lau et al. (1998) Sequenced mitochondrial DNA (mtDNA) for 303 bp of the cytochrome b gene in 54 animals from 14 populations, and for 158 bp of the D-loop region in 80 animals from 11 populations, of swamp and river buffalo. The phylogenetic relationships among the 33 D-loop haplotypes showed a cluster of 11 found only in swamp buffalo. The time of divergence of the swamp and river types, estimated from the D-loop data, was 28,000 to 87,000 years ago. They hypothesized that the species originated in mainland South-East Asia and dispersed north to China and west to the Indian subcontinent, where the river type evolved and was domesticated.
Following domestication in China, the domesticated swamp buffalo spread through two separate routes: through Taiwan and the Philippines to the eastern islands of Borneo and Sulawesi, and south through mainland South-East Asia and then to the western islands of Indonesia. Lau et al. (1998) Mitochondrial DNA (mtDNA) of the cytochrome b gene of swamp and river buffalo was sequenced for 303 bp in 54 animals from 14 populations. We obtained five polymorphic sites. Only one cytochrome b haplotype is found in river buffalo and four in swamp buffalo. These sites are the result of transversion substitutions. An additional site, the result of a transition substitution, was found; of these four, one is found in each population. Of the 33 D-loop haplotypes, 11 are found only in swamp buffalo. This suggests that swamp and river buffalo evolved from a swamp-like animal. The two buffalo types differ from each other at two nucleotide positions. Mirol et al. (2003) A portion of the mitochondrial D-loop was sequenced in 36 animals from five Creole cattle populations in Argentina and four in Bolivia. Sequence comparisons revealed three main groups: two with the characteristics of European breeds and a third showing the transitions representative of the African taurine breeds. The African sequences were found in two populations from Argentina and three populations from Bolivia. The most likely explanation for the finding is that animals could have been moved from Africa to Spain during the long-lasting Arab occupation that started in the 7th century, and from the Iberian Peninsula to America eight centuries later. However, since African haplotypes were not found in the Spanish sample, the possibility of cattle transported directly from Africa cannot be ruled out. Sultana et al. (2004) Mitochondrial DNA (mtDNA) of 30 Pakistani goats was sequenced and 22 new haplotypes were obtained; mt-lineage A has two further clusters, A1 and A2. A 17-bp deletion and a 76-bp insertion were observed in 232 Pakistani goats. This shows the high diversity of the mtDNA gene. Kierstein et al. (2004) We analyzed the entire mitochondrial D-loop region of 80 water buffaloes of four different breeds, i.e. 19 swamp buffaloes and 61 river buffaloes, sampled in Brazil and Italy. We detected 36 mitochondrial haplotypes with 128 polymorphic sites. Pooled with published data on South-East Asian and Australian water buffaloes, we show evidence that both river and swamp buffaloes descend from one domestication event, probably in the Indian subcontinent. However, today's swamp buffaloes have a complicated mitochondrial history, which can be explained by introgression of wild water buffalo mtDNA into domestic stocks. Lei et al. (2004) The complete mitochondrial D-loop sequences of 22 individuals from 8 cattle breeds in China were analyzed. Comparisons of these 22 sequences revealed 66 polymorphic sites, 5 types of mutation and 19 mitochondrial haplotypes; the haplotype percentage was 86.36%, showing that abundant mitochondrial genetic diversity exists in Chinese cattle. The lowest mean percentage of mtDNA D-loop nucleotide variation was found in Xizhen, Mongolian, Holstein and Qinchuan cattle. The molecular phylogenetic tree of the mtDNA D-loop of the 8 Chinese cattle breeds was constructed by the Neighbor-Joining method.
The NJ tree indicated that these mtDNA sequences fell into 3 distinct haplotype groups, and it also suggested at the molecular level that there were probably 3 maternal origins, of which the main origins of Chinese cattle were Bos taurus and Bos indicus. Sung et al. (2005) To determine the origin and genetic diversity of Chinese cattle, we analyzed the complete mtDNA D-loop sequences of 84 cattle from 14 breeds/populations from southwest and west China, together with the available cattle sequences in GenBank. Our results showed that the Chinese cattle samples converged into two main groups, which correspond to the two species Bos taurus and Bos indicus. Although a dominant lineage was clearly discerned in both B. taurus and B. indicus mtDNAs, network analysis of the lineages in each of the two species further revealed multiple clades that presented regional differences. The B. taurus samples in China could be grouped into clades T2, T3 and T4, whereas B. indicus harboured two clades, I1 and I2. Age estimates for these clades showed a time range of 14,100-44,500 years. It is suggested that B. indicus contributed more to the cattle from south and southwest China. The genetic diversity of Chinese cattle varied among the breeds studied. Lai et al. (2005) This study determined, for the first time, the complete mitochondrial DNA control region (D-loop) sequence of the yak in 35 individuals from 5 yak breeds. The results showed that the length of the D-loop in yak was 891-895 bp. There were 55 polymorphic sites. Twenty-four haplotypes were defined in this study, of which haplotypes H4 and H6 were the major haplotypes of Chinese yak. The results indicated that the genetic diversity of Chinese yak is very abundant. Analysis of molecular variance and network construction indicated that there was significant divergence among Chinese yak breeds. The network construction indicated that Chinese yak divide into 2 types and probably had 2 maternal origins or 2 domestication sites.
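Several of the studies summarized above build Neighbor-Joining (NJ) trees from D-loop alignments. The sketch below shows that workflow with Biopython; "dloop.fasta" is a placeholder file name, not data from any of the cited papers, and the identity (p-distance) model is just one of the available options.

```python
# Assumes a pre-aligned FASTA file of D-loop sequences; file name is hypothetical.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("dloop.fasta", "fasta")
calculator = DistanceCalculator("identity")        # simple p-distance model
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)          # Neighbor-Joining clustering
Phylo.draw_ascii(nj_tree)                          # text rendering of the tree
```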
, ( 2006 ) Analysis of nucleotide diverseness within the mitochondrial D-loop revealed high haplotype diverseness and similar diverseness to a European cows mention group. Mitochondrial T3 haplotypes radiated star-like from two similarly frequent haplotypes, perchance bespeaking two different enlargement paths. The strain construction of Evol & A ; egrave ; ne cowss can be explained either by an debut of diverse female line of descents from the domestication Centre or by ulterior alloy. Liu et al. , ( 2006 ) Mitochondrial D-loop sequences in 82 single cowss from 4 strains were analyzed. The consequences revealed 31 mitochondrial haplotypes and 65 polymorphous sites. The nucleotide diverseness and haplotype diverseness ( H ) estimated from mtDNA D-loop part in 4 cowss strains in Guizhou demoing that abundant mitochondrial familial diverseness exists in Guizhou cowss strains. The Neighbor-Joining ( NJ ) molecular phylogenyetic tree of mtDNA D-loop of 4 Guizhou cowss strains was constructed harmonizing to the 31 haplotypes. The NJ tree indicated that the beginning of cowss strains was from Bos Sanchez and Bos indicus which had about the same influence on cowss strains in Guizhou. Lei et al. , ( 2007 ) The evolution of H2O American bison is still controversy because the domestic American bison is derived from adult male choice. For more survey about Mitochondrial D.loop we analyse 80 baffalo samples ; 61 river and 19 swamp buffalos.we detect 36 Mitochondrial sites and 128 polymorphous sites. Consequence showed that both American bisons are domesticated from individual domesticated event. Halbert et al. , ( 2007 ) To see introgression in mitochondrial and atomic domestic cowss ( Bos Sanchez ) 11 US federal bison populations were examined. Mitochondrial introgression was examined through polymerase concatenation reaction methods and confirmed through analysis of D-loop sequences. Nuclear introgression was assessed in 14 chromosomal parts through scrutiny of microsatellite electromorph and sequence differences between bison and domestic cowss. Merely one population was identified with domestic cowss mitochondrial DNA introgression. In contrast, grounds of atomic introgression was found in 7 of the examined populations. The designation of genetically alone and undisturbed populations is critical to species preservation attempts, and this survey serves as a theoretical account for the familial rating of interspecies introgression. Lei et al. , ( 2007 ) Complete mitochondrial D-loop sequences of 119 samples stand foring seven native types were observed. Two mitochondrial DNA ( mtDNA ) lineages ( lineages A and B ) were determined for the Chinese swamp American bison. Examination of the diverseness patterns suggest that line of descent A has undergone a population development. Difference of line of descents A and B was estimated at 18,000 old ages ago. Combined analyses of mtDNA sequences from Chinese, Indian, Brazilian/Italian and Southeast Asian/Australian American bison samples showed independent domestication events in the swamp American bison from China and the river American bison from the India subcontinent. Our informations support the hypothesis of the development of domesticated swamp and river American bison from hereditary swamp-like animate beings. These hereditary animate beings were extensively distributed across mainland Asia and most likely are represented today by the wild Asiatic American bison ( Bubalus arnee ) . Edward et al. 
, ( 2007 ) mtDNA of wisent ( Bos primigenius ) was sequenced from 59 archeological skeletal discoveries. All aurochs belonged to the antecedently designated P haplogroup, bespeaking that this represents the Late Glacial Cardinal European signature. The Neolithic and Bronze Age samples all carry P haplotype mitochondrial DNA. Previous work has shown that most ancient and modern European domestic cowss carry haplotypes antecedently designated T. This in combination with our new determination of a T haplotype in a really Early Neolithic site in Syria. During the period of coexistence, it appears that domestic cowss were kept separate from wild wisents and introgression was highly rare. Andrea et al. , ( 2007 ) We classified local cowss breed harmonizing to their beginning, as alien or Creole. Exotic strains imported in the last 100 old ages, both zebuine and taurine, make the local population. Locally altered Creole strains, originated from cowss introduced by the European vanquishers are derive from natural choice and strain alloy. While Brazilian Creole breeds gives a small information on their familial composition.Studty of familial diverseness, phyletic relationships and forms of taurine/zebuine alloy was carried out on 10 cowss strains in Brazil. Jia et al. , ( 2007 ) The complete mitochondrial D-loop part from 123 persons in 12 Chinese cowss strains and two persons in Germany Yellow cowss strain was sequenced and analyzed. The consequences were shown as follows: 93 fluctuations and 57 haplotypes were detected. In the Neighbor-Joining tree, 13 cowss strains were divided into two chief clades, Bos Sanchez and Bos indicus. The importance of Yunnan cowss in the beginning of Chinese cowss was besides confirmed based on their abundant haplotypes. Then, a really particular haplotype i1 ( Haplogroup i1 is a Y chromosome haplogroup associated with the mutants identified known as individual nucleotide polymorphisms ( SNPs ) discovered in 27 Chinese cowss strains, including i1 strains in this survey and 16 strains in the GenBank, played the function of a karyon in Chinese zebu. At the same clip, the building of Chinese zebu nucleus group based on haplotype i1 validated the distinguishable beginning of Bos indicus in Tibet, which was different from that of the other cowss strains with zebu haplotypes in China. Guz et al. , ( 2007 ) mtDNA sequence analysis of yake revealed that there are no differences with cowss in the yak mitochondrial genome organisation. Interestingly, within the D-loop, the conserved sequence blocks are less conserved than environing parts. Neighbor-Joining ( NJ ) trees based on individual cistrons, cistron sets and cistrons of mitochondrial genome were constructed. The analysis identified the yack as a sister group of a cattle/zebu clade. Based on permutations in 22 transfer RNA cistrons, 12S rRNA cistron and 16S rRNA cistron, the dating of divergency between yack and cattle/zebu, and yack and H2O American bison, was proposed to hold occurred 4.38-5.32 and 10.54-13.85 million old ages before present, severally. This is consistent with the paleontologyical information ( Paleontology is the survey of prehistoric life, including beings ‘ development and interactions with each other ) . Yak and sheep/goat divergent dating predicts that their divergency occurred at 13.14-27.99 million old ages before the present twenty-four hours. Caixetal et al. , ( 2007 ) Mitochondrial DNA ( mtDNA ) is unusual in its rapid rate of development and high degree of intraspecies sequence fluctuation. 
To investigate how mtDNA evolves so rapidly within populations, the nucleotide sequence of all or part of the D-loop region was determined in 14 maternally related Holstein cows. Four different D-loop sequences could be distinguished in the mtDNA of these animals. One explanation is that multiple mitochondrial genotypes existed in the maternal germ line and that expansion or segregation of one of these genotypes during oogenesis or early development led to the rapid genotypic shifts observed. Laisj et al. (2007) In order to clarify the origin and genetic diversity of yak in China, mitochondrial DNA (mtDNA) control region sequences were analysed in 52 individuals from four domestic yak breeds, as well as from a hybrid between yak and cattle. Twenty-five samples were further selected for partial cytochrome b sequencing on the basis of the control region sequence information. Two yak samples shared sequences with Chinese cattle (Bos taurus); the remaining yak mtDNAs fell into two major clades in the phylogenetic analysis. Genetic diversity varied considerably among the breeds, with the hybrid yak showing the highest diversity. The results suggest that the Chinese yak was domesticated from two distinct maternal lineages, or from a heterogeneous pool containing both divergent lineages, with occasional gene introgression from cattle. Tsai et al. (2008) studied the complete mitochondrial DNA D-loop structure of the pigeon. Three partial fragments of the D-loop were amplified and then combined to cover its full length. Ten pigeon samples were collected and successfully amplified and sequenced. Repetitive sequences of a VNTR and an STR were both observed at the 3′ end of the D-loop region. The DNA sequence data revealed polymorphic sequences, including SNPs, VNTRs and STRs, within the D-loop. Each sample could be distinguished by combining the genotyping procedures for SNPs, VNTRs and STRs. The polymorphic nature of the D-loop can therefore be a valuable tool for maternal identification and genetic linkage of pigeons, in particular in forensic science investigations. Cortezs et al. (2008) To clarify the genetic and mitochondrial DNA (mtDNA) diversity of the Lidia cattle breed, a 521-bp D-loop fragment was sequenced in 527 animals. Haplotype T3 was the most common, followed by the African T1 haplotype; very low frequencies were recorded for haplotypes T and T2. Haplotype T3 was present in all lineages analysed; in five it was the only one present, and in only one lineage (Miura) was its frequency lower than that of T1. One further haplotype, previously reported in Criollo breeds and to date in only a single European breed, was found in a single animal. Network analysis of the Lidia breed revealed the presence of two major haplotypes, T3 and T1. The Lidia breed appears to be more closely related to prehistoric Iberian and Italian aurochs than to British ones. Xin et al. (2009) analysed six Y-STR loci (UMN0929, UMN0108, UMN0920, INRA124, UMN2404 and UMN0103) in 576 unrelated males and 10 females of the Qinchuan cattle population in China's Shaanxi Province. Allele frequencies, gene diversity, polymorphic information content and the effective number of alleles were calculated. All loci conformed to Hardy-Weinberg equilibrium (P > 0.05).
The population data were compared with published data for other cattle breeds, suggesting that Qinchuan cattle originated from Bos taurus. This provides information for individual identification, paternity testing and ancestry analysis of the Qinchuan cattle breed. Chuan et al. (2009) The complete D-loop region of mitochondrial DNA from 206 individuals of 16 Chinese indigenous cattle breeds was sequenced and analyzed to detect the variability of the mitochondrial D-loop region. The results showed 101 variations and 99 haplotypes, of which 73 haplotypes belonged to Bos taurus and the other 26 to Bos indicus. According to the phylogenetic tree, the 16 cattle breeds were divided into two groups, Bos taurus and Bos indicus. Based on the network diagrams, the 73 Bos taurus haplotypes were classified into 3 groups and the 26 Bos indicus haplotypes into 5 groups. It was concluded that the observed purine-to-C change possibly originated in Chinese Bos indicus: analysis of the shared H3 haplotypes showed that only 16% of the H3 haplotype sequences in Chinese indigenous cattle breeds were similar to the Nellore sequence, while 84% of them carried the purine-to-C variation. Stock et al. (2009) Mitochondrial DNA has been the traditional marker for the study of animal domestication, as its high mutation rate allows molecular diversity to accumulate within the time frame of domestication history. Additionally, it is exclusively maternally inherited, and haplotypes become part of the domestic gene pool via actual capture of a female animal rather than by crossbreeding with wild populations. Initial studies of British aurochs identified a haplogroup, designated P, which was found to be highly divergent from all known domestic haplotypes over the most variable region of the D-loop. An additional, separate aurochs haplotype, E, was found by analysis of a large and geographically representative sample of aurochs from northern and central Europe. Until recently, the European aurochs appeared to have no matrilineal descendants among the publicly available modern cattle control region sequences; if aurochs mtDNA was incorporated into the domestic population, aurochs either formed a very small proportion of modern diversity or this contribution was subsequently lost. However, a haplogroup P sequence has recently been found in a modern sample, along with a new divergent haplogroup called Q. The study confirms the outlier status of the novel Q and E haplogroups, and the modern P haplogroup sequence as a descendant of European aurochs, through retrieval and analysis of cytochrome b sequence data from 20 ancient wild and domesticated cattle archaeological samples. Yang et al. (2009) The complete mitochondrial D-loop region from 187 individuals of 18 Chinese cattle breeds (5 northern breeds, 9 southern breeds, 3 Tibetan breeds and German Yellow Cattle, the latter with only 2 individuals) was sequenced and analyzed by PCR and sequencing techniques. The results were as follows: the D-loop region of 186 individuals from the 18 breeds ranged from 909 bp to 913 bp; there were 97 haplotypes in total, comprising 62 taurine haplotypes, 34 zebu haplotypes and one yak haplotype; and there were 110 variable sites extending from the start to the end of the D-loop region, 75 of which distinguished taurine and 43 of which distinguished zebu.
In the Neighbor-Joining tree and network, the 18 cattle breeds were mainly divided into two clades, Bos taurus and Bos indicus. The clade analysis showed that Chinese Tibetan cattle carry a maternal introgression from yak; this was the first time a yak lineage had been detected in Chinese cattle on the basis of the complete D-loop sequence. The analysis also showed that Tibetan cattle are an isolated type, distinct from both zebu and taurine cattle. Zhang et al. (2009) The complete D-loop region of mitochondrial DNA from 206 individuals of 16 Chinese indigenous cattle breeds was sequenced to detect the variability of the mitochondrial D-loop region in those breeds. The results showed 101 variations and 99 haplotypes, of which 73 haplotypes belonged to Bos taurus and the other 26 to Bos indicus. Based on the network diagrams, the 73 Bos taurus haplotypes were classified into 3 groups and the 26 Bos indicus haplotypes into 5 groups. Only 16% of the H3 haplotype sequences were similar to the Nellore sequence, and 84% of them carried the purine-to-C variation in Chinese indigenous cattle breeds, based on analysis of their shared H3 haplotypes. It was concluded that this purine-to-C change possibly originated in Chinese Bos indicus. Jia et al. (2010) sequenced the D-loop in 856 cattle individuals, of which 264 were Chinese cattle and the rest came from six Asian countries. The results indicated that cattle from the six Asian countries fell into three clades: Bos taurus (taurine), Bos indicus (zebu) and yak. Four main haplogroups (T1A, T2, T3, including T3A and T3B, and T5) were found in taurine cattle, and two haplogroups (I1 and I2) in zebu. The I1 and I2 haplogroups were found to be separated by four variable sites rather than five, and four haplogroups or sub-haplogroups (T1A, T3A, T3B and T5) were found for the first time in these Asian cattle. These data provide new insight into the genetic structure of cattle in these six Asian countries. Wang et al. (2010) To determine the origin and genetic diversity of Yunnan mithun and cattle, mtDNA control region sequences of 71 samples and SRY gene (sex-determining region Y) sequences of 39 samples were analysed, together with the sequences available in GenBank. The neighbour-joining phylogeny and the reduced median network analysis showed that Yunnan mithun originated from hybridisation between male Bos frontalis and female Bos taurus or Bos indicus, and that Yunnan cattle largely originated from B. indicus, while also containing some hybrids of male B. indicus and female B. taurus. The phylogenetic pattern of Yunnan cattle was consistent with the recently described cattle matrilineal pool from China and indicated a greater contribution to Yunnan cattle from B. indicus than from B. taurus.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9631600975990295, "language": "en", "url": "https://difference.guru/difference-between-a-check-card-and-a-debit-card/", "token_count": 651, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.060791015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9e697a0e-f7f1-4241-9eac-386f0a9f2a58>" }
Financial experts always talk about how we need to be fiscally responsible. You know that keeping track of your money is important, but you aren't sure of the best way to do that. You constantly see commercials saying you should use a certain type of payment, whether it be cash, credit, debit or something else entirely. We know it can be confusing, and this article sets out to discuss one way of paying for purchases: debit cards.

In the past, the term check card described an identification card issued by a retailer, which enabled the holder to make payments by check. Payment for any purchases came directly from the cardholder's checking account. Before debit cards became the preferred method of payment, retailers and supermarkets issued check cards to consumers.

A debit card is also called a check card, and is used to pay for purchases instead of cash, just as you would use a credit card. However, unlike with a credit card, the money comes out of the cardholder's bank account immediately, instead of being paid back later. The popularity of debit cards has overtaken, if not completely replaced, checks. In some countries, debit cards have largely replaced cash transactions. The rapid growth of debit cards led some countries to adopt systems that turned out to be incompatible with those of other countries. Starting in the mid-2000s, initiatives were taken to allow debit cards to be used in other countries and for online and phone transactions.

There are different types of debit cards, and prepaid debit cards are one of the more popular types. These reloadable cards appeal to many people, including those who do not use banks and credit unions. With a prepaid card you "load" money onto the card, and purchases are deducted from the card balance rather than withdrawn from a bank account. Prepaid debit cards are accepted in a wide range of places, as they carry the Visa or MasterCard logos. Because no credit is involved, cardholders don't have to worry about monthly credit card bills, recurring fees, or falling into credit card debt.

Although there are many benefits to a debit card, there are certain scenarios where you may not want to use your card. Financial experts agree that it is not advisable to use check cards or debit cards at business establishments that place a hold on a debit cardholder's checking account whenever a debit card is used to make a purchase. These establishments place a hold on the card for an amount greater than the actual purchase price, which serves as a guarantee that they will receive payment. However, if the amount the establishment holds turns out to be greater than the amount in the linked account, the account holder ends up paying overdraft fees.

So what's the difference between a check card and a debit card? A check card is no different from a debit card, as a canine is no different from a dog. Check cards and debit cards both allow a cardholder to make purchases online, over the phone or at a business establishment. Payment for the purchase is then taken out of the savings or checking account linked to the card.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9812687039375305, "language": "en", "url": "https://drdianehamilton.com/fear-of-past-dot-com-crash-venture-capitalists-only-interested-in-consumer-targeted-companies-like-facebook-or-groupon/", "token_count": 678, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.384765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c71a320c-7e8a-41da-8887-5ce6594fe709>" }
The dot-com crash has had a big impact on how venture capitalists invest in the current market. To understand why, it is important to know a little history about the impact of the Internet and why these investors are leery. The Internet became commercially popular in the mid-1990s. By 1995, there were an estimated 18 million users on the net. This led to the creation of online businesses, which led to speculation about how big these companies could grow. The problem came with how much these companies were actually worth vs. how much they were perceived to be worth.

What causes a bubble and eventual crash? When people get excited about a company's stock, it can drive the price up; but if the price inflates to an unrealistic point where investors realize the company can't be worth as much as they hoped, people bail, sell the stock, the price drops, and the company crashes. The pain of those dot-com crashes is still felt today, and venture capitalists now may be more hesitant to invest. Tom Abate with SFGate.com said that venture capitalists in 2000 made about 8,000 investments valued at $100.5 billion. "In 1999 and 2000, Wall Street invested in 534 venture-backed initial public offerings." Those who cashed in early made a lot of money. As large amounts of money were being put into the market and speculation grew, the bubble formed. The NASDAQ hit its peak on March 10, 2000 at 5,132.52, only to lose 78% of its value by October 2002, when it dropped to 1,114.11. In 2001-2002, as many overvalued companies went bankrupt, people found their stock purchases were not such a great investment. So now, as Facebook and Twitter consider going public, some potential investors are concerned. This is especially true in the case of Twitter, which has yet to publicly show its business plan.

What has the effect been on venture capital investing? An article in Investopedia stated, "In the year 1999, there were 457 IPOs, most of which were internet and technology related. Of those 457 IPOs, 117 doubled in price on the first day of trading. In 2001 the number of IPOs dwindled to 76, and none of them doubled on the first day of trading." SFGate.com reported, "In 2008 and 2009, a total of just 18 venture-backed companies went public."

Investments have picked up for consumer-oriented companies like Facebook and Groupon. However, there has been a venture squeeze for companies with business products. The Wall Street Journal reported, "In the first three months of this year, venture-capital investment in consumer tech companies nearly tripled to $874 million from $310 million a year earlier. Meanwhile, investments in tech firms with business products rose at a slower rate to $2.3 billion from $1.9 billion a year earlier. The shift away from business-oriented technology start-ups has been gathering steam over the past few years. Venture investment into such companies was $11.9 billion in 2010, down 35% from $18.4 billion in 2006, according to VentureSource. The overall number of financing rounds these companies received also dropped 18% to 1,261 during that time."
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9287526607513428, "language": "en", "url": "https://smallbusiness.chron.com/markup-margin-calculations-56763.html", "token_count": 497, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.037109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:333710c7-ce1e-4367-93c1-d8333dd4a097>" }
Markup & Margin Calculations

For the profitability of your business, price markup and profit margin are two calculations you should know cold. Adding a profit margin to the cost you pay for goods for resale is what makes your business a business. Price markup and profit margin are two calculations that approach your pricing profitability from opposite ends.

Markup is the amount you add to the wholesale cost of an item to get the price at which you will sell that item. If you buy something for $10 and sell it for $15, the markup is $5. To price multiple items with different costs and categories, a business typically sets a percentage markup for each type or category of item sold. For example, say you sell automotive parts and want a 50 percent markup on replacement parts and a 100 percent markup on custom accessories. The markup pricing strategy should work to maximize profits and keep prices competitive at the same time.

Calculate your sale price for an item by multiplying the wholesale price by one plus the markup. As an example, you have three different items which cost your business $8, $10 and $12, respectively. All are to be marked up 50 percent. The sale prices will be 8 times 1.5 equals $12, 10 times 1.5 equals $15 and 12 times 1.5 equals $18. A percentage markup increases the dollar amount of markup as wholesale prices get higher.

Profit margin is the percentage of the sale price that counts as gross profit for your business. Margin is calculated by subtracting cost from the sale price and then dividing by the sale price. For the item that cost $10 and sold for $15, the margin is the $5 difference divided by the $15 price, for a 33 percent profit margin. Profit margin percentages quickly show what portion of your sales revenues is available to pay business expenses and generate net income.

Sales Process Pricing

Markup and profit margin can be viewed as the two ends of moving products through your business. Markup is based on the prices you pay and is set when products are received from your suppliers. Profit margin is the result for your business of the sales that have been made. Adjust your markup rates to manage the profit margin produced by your retail sales. The gross profit margin of a business (the percentage of total sales revenue remaining after subtracting the cost of goods sold) gives a picture of the cash flow and profitability of the business.
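Because the two formulas are easy to mix up, here is a minimal Python sketch of the calculations above (our own illustration; the function names are not from the article):

```python
def price_from_markup(cost: float, markup_pct: float) -> float:
    """Sale price = cost * (1 + markup): markup is figured on cost."""
    return cost * (1 + markup_pct / 100)

def margin_pct(cost: float, price: float) -> float:
    """Margin = (price - cost) / price: margin is figured on the sale price."""
    return (price - cost) / price * 100

for cost in (8, 10, 12):
    price = price_from_markup(cost, 50)  # the 50% markup from the example
    print(f"cost ${cost:>5.2f} -> price ${price:>5.2f}, "
          f"margin {margin_pct(cost, price):.1f}%")
# A 50% markup always yields a 33.3% margin, since 0.5 / 1.5 = 1/3.
```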
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9498482346534729, "language": "en", "url": "https://smartasset.com/taxes/income-taxes", "token_count": 3454, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0732421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6780af42-cc69-4e6d-a2ab-ba726b627b58>" }
Overview of Federal Taxes

Income in America is taxed by the federal government, most state governments and many local governments. The federal income tax system is progressive, so the rate of taxation increases as income increases. Marginal tax rates range from 10% to 37%.

Our income tax calculator calculates your federal, state and local taxes based on several key inputs: your household income, location, filing status and number of personal exemptions. We separately calculate the federal income taxes you will owe in the 2019 - 2020 filing season based on the Trump Tax Plan.

How Income Taxes Are Calculated

- First, we calculate your adjusted gross income (AGI) by taking your total household income and reducing it by certain items such as contributions to your 401(k).
- Next, from AGI we subtract exemptions and deductions (either itemized or standard) to get your taxable income. Exemptions can be claimed for each taxpayer as well as dependents such as a spouse or children.
- Based on your filing status, your taxable income is then applied to the tax brackets to calculate your federal income taxes owed for the year.
- Your location will determine whether you owe local and/or state taxes.

Last updated: January 1, 2020.

Our tax expert: Jennifer Mansfield, CPA, JD/LLM-Tax, is a Certified Public Accountant with more than 30 years of experience providing tax advice. She holds a degree in Accounting and Business/Management from the University of Wyoming, as well as both a Master's in Tax Law and a Juris Doctorate from Georgetown University Law Center. Jennifer has mostly worked in public accounting firms, including Ernst & Young and Deloitte. She is passionate about helping provide people and businesses with valuable accounting and tax advice to allow them to prosper financially. Jennifer lives in Arizona and was recently named to the Greater Tucson Leadership Program.

The Federal Income Tax

The federal personal income tax that is administered by the Internal Revenue Service (IRS) is the largest source of revenue for the U.S. federal government. Nearly all working Americans are required to file a tax return with the IRS each year. In addition, most people pay taxes throughout the year in the form of payroll taxes that are withheld from their paychecks.

Income taxes in the U.S. are calculated based on tax rates that range from 10% to 37%. Taxpayers can lower their tax burden and the amount of taxes they owe by claiming deductions and credits. A financial advisor can help you understand how taxes fit into your overall financial goals. Financial advisors can also help with investing and financial plans, including retirement, homeownership, insurance and more, to make sure you are preparing for the future.

Calculating the Income Tax Rate

The United States has a progressive income tax. This means there are higher tax rates for higher income levels. These are called "marginal tax rates," meaning they do not apply to total income, but only to the income within a specific range. These ranges are called brackets. Income falling within a specific bracket is taxed at the rate for that bracket. The table below shows the tax brackets for the federal income tax. It reflects the rates for the 2019 tax year, which are the taxes you pay in early 2020.
Financial advisors can also help with investing and financial plans, including retirement, homeownership, insurance and more, to make sure you are preparing for the future. Calculating Income Tax Rate The United States has a progressive income tax. This means there are higher tax rates for higher income levels. These are called “marginal tax rates," meaning they do not apply to total income, but only to the income within a specific range. These ranges are called brackets. Income falling within a specific bracket is taxed at the rate for that bracket. The table below shows the tax brackets for the federal income tax. It also reflects the rates for the 2019 tax year, which are the taxes you pay in early 2020. 2019 - 2020 Income Tax Brackets |$0 - $9,700||10%| |$9,700 - $39,475||12%| |$39,475 - $84,200||22%| |$84,200 - $160,725||24%| |$160,725 - $204,100||32%| |$204,100 - $510,300||35%| |Married, Filing Jointly| |$0 - $19,400||10%| |$19,400 - $78,950||12%| |$78,950 - $168,400||22%| |$168,400 - $321,450||24%| |$321,450 - $408,200||32%| |$408,200 - $612,350||35%| |Married, Filing Separately| |$0 - $9,700||10%| |$9,700 - $39,475||12%| |$39,475 - $84,200||22%| |$84,200 - $160,725||24%| |$160,725 - $204,100||32%| |$204,100 - $306,175||35%| |Head of Household| |$0 - $13,850||10%| |$13,850 - $52,850||12%| |$52,850 - $84,200||22%| |$84,200 - $160,700||24%| |$160,700 - $204,100||32%| |$204,100 - $510,300||35%| You’ll note that the brackets vary depending on whether you are single, married or the head of a household. These different categories are called filing statuses. Married persons can choose to file separately or jointly. While it often makes sense to file jointly, filing separately may be the better choice in certain situations. Based on the rates in the table above, a single filer with an income of $50,000 would have a top marginal tax rate of 22%. However, that taxpayer would not pay that rate on all $50,000. The rate on the first $9,700 of taxable income would be 10%, then 12% on the next $29,775, then 22% on the final $10,525 falling in the third tax bracket. This is because marginal tax rates only apply to income that falls within that specific bracket. Based on these rates, this hypothetical $50,000 earner owes $6,858.50, an effective tax rate of 13.7%. Calculating Taxable Income Using Exemptions and Deductions Of course, calculating how much you owe in taxes is not quite that simple. For starters, federal tax rates apply only to taxable income. This is different than your total income (also called gross income). Taxable income is always lower than gross income since the U.S. allows taxpayers to deduct certain income from their gross income to determine taxable income. To calculate taxable income, you begin by making certain adjustments from gross income to arrive at adjusted gross income (AGI). Once you have calculated adjusted gross income, you can subtract any deductions for which you qualify (either itemized or standard) to arrive at taxable income. Note that for the 2019 tax year, there are no longer personal exemptions. Prior to 2018, taxpayers could claim a personal exemption ($4,050 in 2017), which lowered taxable income. The new tax plan signed by President Trump in late 2017 eliminated the personal exemption. Deductions are somewhat more complicated. Many taxpayers claim the standard deduction, which varies depending on filing status, as shown in the table below. 
Calculating Taxable Income Using Exemptions and Deductions

Of course, calculating how much you owe in taxes is not quite that simple. For starters, federal tax rates apply only to taxable income. This is different from your total income (also called gross income). Taxable income is always lower than gross income, since the U.S. allows taxpayers to deduct certain income from their gross income to determine taxable income.

To calculate taxable income, you begin by making certain adjustments to gross income to arrive at adjusted gross income (AGI). Once you have calculated adjusted gross income, you can subtract any deductions for which you qualify (either itemized or standard) to arrive at taxable income.

Note that for the 2019 tax year, there are no longer personal exemptions. Prior to 2018, taxpayers could claim a personal exemption ($4,050 in 2017), which lowered taxable income. The new tax plan signed by President Trump in late 2017 eliminated the personal exemption.

Deductions are somewhat more complicated. Many taxpayers claim the standard deduction, which varies depending on filing status, as shown in the table below.

Standard Deductions (Updated December 2019)

| Filing Status | Standard Deduction Amount |
| --- | --- |
| Single | $12,200 |
| Married, Filing Jointly | $24,400 |
| Married, Filing Separately | $12,200 |
| Head of Household | $18,350 |

Some taxpayers, however, may choose to itemize their deductions. This means subtracting certain eligible expenses and expenditures. Possible deductions include those for student loan interest payments, contributions to an IRA, moving expenses and health-insurance contributions for self-employed persons. The most common itemized deductions also include:

- Deduction for state and local taxes paid. Also known as the SALT deduction, it allows taxpayers to deduct up to $10,000 of any state and local property taxes plus either their state and local income taxes or sales taxes.
- Deduction for mortgage interest paid. Interest paid on the mortgages of up to two homes, on a total of up to $1,000,000 in debt, can be subtracted. For homes purchased after Dec. 15, 2017, this limit is lowered to the first $750,000 of the mortgage.
- Deduction for charitable contributions.
- Deduction for medical expenses that exceed 7.5% of AGI. (Note that the income threshold was 10% until the new tax plan changed it to 7.5%.)

Keep in mind that most taxpayers don't itemize their deductions. If the standard deduction is larger than the sum of your itemized deductions (as it is for many taxpayers), you receive the standard deduction. Once you have subtracted deductions from your adjusted gross income, you have your taxable income. If your taxable income is zero, that means you do not owe any income tax.

How to Calculate Federal Tax Credits

Unlike adjustments and deductions, which apply to your income, tax credits apply to your tax liability (which means the amount of tax that you owe). For example, if you calculate that you have a tax liability of $1,000 based on your taxable income and your tax bracket, and you are eligible for a tax credit of $200, that would reduce your liability to $800. In other words, you would only owe $800. Tax credits are only awarded in certain circumstances, however. Some credits are refundable, which means you can receive payment for them even if you don't owe any income tax. (By contrast, nonrefundable tax credits can reduce your liability no lower than zero.) The list below describes the most common federal income tax credits.

- The Earned Income Tax Credit is a refundable credit for taxpayers with income below a certain level. The credit can be up to $6,557 per year for taxpayers with three or more children, or lower amounts for taxpayers with two, one or no children.
- The Child and Dependent Care Credit is a nonrefundable credit based on up to $3,000 (for one child) or $6,000 (for two or more children) of eligible childcare expenses incurred while working or looking for work.
- The Adoption Credit is a nonrefundable credit equal to certain expenses related to the adoption of a child.
- The American Opportunity Credit is a partially refundable credit of up to $2,500 per year for enrollment fees, tuition and course materials for the first four years of post-secondary education.

There are numerous other credits, including credits for the installation of energy-efficient equipment, a credit for foreign taxes paid and a credit for health insurance payments in some situations.
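To tie the deduction and credit mechanics together, here is a small Python sketch (our own illustration, not SmartAsset's calculator; the function names and the single-filer default are assumptions):

```python
def taxable_income(agi: float, itemized: float = 0.0,
                   standard: float = 12_200) -> float:
    """Subtract the larger of the standard (2019 single filer) or itemized deduction."""
    return max(agi - max(standard, itemized), 0.0)

def apply_credit(liability: float, credit: float, refundable: bool) -> float:
    """Refundable credits can push liability below zero (i.e., a payment to you);
    nonrefundable credits stop at zero."""
    reduced = liability - credit
    return reduced if refundable else max(reduced, 0.0)

print(taxable_income(agi=55_000))                    # 42800.0
print(apply_credit(1_000, 200, refundable=False))    # 800.0, as in the example above
```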
Calculating Your Tax Refund

Whether or not you get a tax refund depends on the amount of taxes you paid during the year (because they were withheld from your paycheck), your tax liability, and whether or not you received any refundable tax credits.

When you file your tax return, if the amount of taxes you owe (your tax liability) is less than the amount that was withheld from your paycheck during the course of the year, you will receive a refund for the difference. This is the most common reason people receive a tax refund. If you paid no taxes during the year and owe no taxes, but are eligible for one or more refundable tax credits, you will also receive a refund equal to the refundable amount of the credits.

Paying Your Taxes

If you aren't getting a tax refund and instead owe money come tax day, there may be a way to lessen the sting. For starters, you should still file your taxes on time. Otherwise, you will also have to pay a fee for filing late. If you don't think you can afford your full tax bill, then you should pay as much as you can and contact the IRS at 1-800-829-1040. The agency may be able to offer you a few payment options to help you pay off your bill. For example, the IRS may offer a short-term extension or temporarily delay collection. You may also have the option to pay your remaining bill over multiple installments. You will likely still pay any interest charges on overdue balances, but in some cases, the IRS may even waive penalties or fees. Again, you should call the agency at the number above to discuss your options.

As you pay your tax bill, another thing to consider is using a tax-filing service that lets you pay your taxes by credit card. That way you can at least get valuable credit card rewards and points when you pay your bill. The IRS has authorized three payment processors to collect tax payments by credit card: PayUSAtax, Pay1040 and Official Payments. However, it's important to keep in mind that all three processors charge fees of about 2% of your payment for credit card payments. If you had a bill of $100, a 2% fee would mean you pay an extra $2. Double-check that any rewards you will earn are worth that extra cost.

The cheapest way to pay a tax bill is still via a check or via IRS Direct Pay, which allows you to pay your bill directly from a savings or checking account. All major tax filing services will provide you with instructions for both of these payment options.

State and Local Income Taxes

Many states, as well as some cities and counties, have their own income tax, which is collected in addition to the federal income tax. States that do have a state income tax require that you file a separate state tax return, as they have their own rules. If you are curious about a particular state's tax system and rules, visit one of our state tax pages.

Places with the Lowest Tax Burden

SmartAsset's interactive map highlights the counties with the lowest tax burden. Scroll over any county in the state to learn about taxes in that specific area. To find the places with the lowest tax burdens, SmartAsset calculated the amount of money a specific person would pay in income, sales, property and fuel taxes in each county in the country. To better compare income tax burdens across counties, we used the national median household income. We then applied relevant deductions and exemptions before calculating federal, state and local income taxes. In order to determine the sales tax burden, we estimated that 35% of take-home (after-tax) pay is spent on taxable goods. We multiplied the average sales tax rate for a county by the household income after taxes.
This balance is then multiplied by 35% to estimate the sales tax paid. For property taxes, we compared the median property taxes paid in each county. For fuel taxes, we first distributed statewide vehicle miles traveled to the county level using the number of vehicles in each county. We then calculated the total number of licensed drivers within each county. The countywide miles were then distributed amongst the licensed drivers in the county, which gave us the miles driven per licensed driver. Using the nationwide average fuel economy, we calculated the average gallons of gas used per driver in each county and multiplied that by the fuel tax. We then added the dollar amounts for income, sales, property and fuel taxes to calculate a total tax burden. Finally, each county was ranked and indexed on a scale of 0 to 100. The county with the lowest tax burden received a score of 100 and the remaining counties in the study were scored based on how closely their tax burden compares.
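SmartAsset does not publish code for this methodology, so the sketch below is our own reconstruction in Python: the 35% spending share and the additive total follow the text, while the exact 0-100 indexing formula is an assumption (here, scaled by proximity to the lowest burden):

```python
def county_burden(income_tax: float, take_home: float, sales_rate: float,
                  property_tax: float, gallons_per_driver: float,
                  fuel_tax_per_gal: float) -> float:
    """Total annual tax burden for one county, per the described methodology."""
    sales_tax = take_home * 0.35 * sales_rate  # 35% of take-home spent on taxable goods
    fuel_tax = gallons_per_driver * fuel_tax_per_gal
    return income_tax + sales_tax + property_tax + fuel_tax

def index_scores(burdens: dict[str, float]) -> dict[str, float]:
    """Lowest burden scores 100; others scaled down by ratio (assumed formula)."""
    best = min(burdens.values())
    return {county: 100 * best / b for county, b in burdens.items()}

print(index_scores({"A": 8_000, "B": 10_000}))  # {'A': 100.0, 'B': 80.0}
```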
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9415420293807983, "language": "en", "url": "https://supreme-thesis.net/essays/economics/long-range-planning-and-capital-budgeting.html", "token_count": 496, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.025146484375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6f6a2518-bca9-4d4d-af91-d09f59468cd5>" }
This paper examines the relationship between long-range planning and capital budgeting. It also discusses the three phases into which a project's cash flows are organized.

Long-range planning is the forecasting and organizing of activities for the realization of envisaged goals or objectives. It covers all the aspects of the scheme and makes provision for their acquisition and operation. It therefore means determining goals and the operational activities needed to achieve them. It entails forecasts made into the future and the organization of events in the present for the execution or realization of the predetermined goals.

Capital budgeting, on the other hand, involves decisions concerning the acquisition and allocation of funds to the various parts of a project, once the relevance or worthiness of the expenditures has been established. It involves the use of different valuation methods to determine whether it is economically reasonable to invest a given amount in a project.

There is a strong relationship between the two terms: in the process of long-term planning, forecasts of the various amounts of funds to be utilized are considered. It is through capital budgeting that it is possible to tell whether an investment is worth undertaking or not. Where it promises a return, it is planned for; where a good return is not guaranteed, the project or investment can be abandoned (Steven, 2003). Capital budgeting is therefore done before funds are approved for the long-range plan.

A project's cash flows fall into three phases, depending on the activities at each level or stage:

- The first is planning and zoning. This involves expenses incurred to obtain approvals from the local authorities for the development project to be constructed.
- The second is the construction phase. At this stage, expenditure goes toward the actual construction of the project, encompassing the costs of materials, professional fees and labor.
- The last phase is the post-construction stage, which mainly comprises the income accruing to the project after completion and the expenses incurred by the finished project. Income can come from rent collections or, if the property is owner-occupied, from the business operated on the premises or the cost saved on alternative accommodation for similar properties. Expenses include service charges such as insurance, security, cleaning, and repairs and maintenance.

In project development, it is vital to diligently examine and evaluate the long-range plans and then do the capital budgeting to gauge the profitability of the project before implementation. This yields an informed decision that protects the investor's return.
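The paper refers to "different valuation methods" without naming one. Net present value (NPV) is the standard capital-budgeting test, so here is a minimal, illustrative Python sketch; the cash-flow figures and the 8% discount rate are hypothetical, chosen only to mirror the three phases above:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Discount each year's net cash flow to the present; cash_flows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical development project, by phase (negative = outlay):
flows = [-50_000,                            # year 0: planning and zoning approvals
         -400_000,                           # year 1: construction (materials, fees, labor)
         60_000, 60_000, 60_000, 560_000]    # years 2-5: net rents, then sale proceeds

print(f"NPV at 8%: ${npv(0.08, flows):,.0f}")  # positive here, so the project is accepted
```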
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9353097081184387, "language": "en", "url": "https://www.adelphi.de/en/project/bridging-european-and-local-climate-action-beacon", "token_count": 626, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.10986328125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ee7feb85-16a8-4269-b929-fd32abf38a6f>" }
The European Union (EU) has set itself ambitious climate and energy policy goals within the framework of the Paris Agreement. In many European countries, however, ambitious climate protection measures often face scepticism or even rejection. The reason for this is often a lack of understanding of the potential of climate action, or investment power that has been weakened by the financial and economic crisis. Although there are progressive actors who implement climate policy measures and recognise their potential, the number, scope and visibility of these projects must be significantly increased to enable a sustainable transformation from the local to the national level.

Promoting climate policy, strengthening European integration

The project "Bridging European and Local Climate Action (BEACON)" is part of the European Climate Initiative (EUKI), which was launched by the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU). It aims to increase the acceptance and impact of climate policy projects in Central, Eastern and Southern Europe and to initiate further climate mitigation measures. How can energy costs be cut? How does sustainable mobility improve the quality of life in cities? What does climate policy have to do with a future-oriented investment policy? By promoting transnational dialogue on issues such as these, the project acts as a BEACON for municipalities across Europe by making the benefits of climate policy measures – and thus also of EU climate policy – more tangible. The focus of the project is on cooperation and the exchange of good practices among municipalities, national decision-makers and educational institutions, with the aim of reducing political, technical and social barriers at the local and national levels. Last but not least, cross-border cooperation should strengthen cohesion within the EU.

Fostering knowledge transfer in the EU

In close cooperation with local partner organisations, adelphi is leading the cooperation with 25 local authorities from the Czech Republic, Poland, Romania, Greece and Portugal, as well as nine German municipalities. Through various dialogue and advisory formats, technical knowledge and process-related know-how are conveyed to the municipalities:

- Good practices in local climate policy will be discussed in multi-country workshops. An open dialogue on obstacles and opportunities provides new impetus for the implementation of climate policy measures on the ground.
- Individual advisory support enables the 25 European municipalities to deepen and operationalise this knowledge.
- Valuable examples and proven strategies from Germany are made available through translation and country-specific adaptation of existing guidelines.
- Five city partnerships will benefit from one-to-one support to reinforce their cooperation and develop joint climate action projects.
- All participants and further municipalities will come together at two European municipal conferences (2019 and 2021) to present their local actions and network in an informal and supportive atmosphere.

adelphi is also working together with Ecofys to identify and analyse successful national climate policy instruments in sectors outside of the European Emissions Trading System (transport, buildings, agriculture, waste, small industrial plants). The goal is to feed the research findings into the design of medium-term climate strategies and policies in Germany and other EU member states, and thus promote mutual learning within the EU.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.965122401714325, "language": "en", "url": "https://www.babypips.com/forexpedia/world-bank", "token_count": 105, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.16796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:01dba6a8-3e49-4b17-9e73-267ed5949d5a>" }
The World Bank is a group of international financial organizations that provides assistance to its 187 member countries. The World Bank was formed after World War II to help devastated Western European countries and provide them with capital. As it grew, the World Bank expanded to include developing nations. Today, its basic goal is to combat poverty by providing member countries with sound financial advice, loans (including low- or no-interest loans), and research.

http://www.worldbank.org/ World Bank – Official Website
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9502213597297668, "language": "en", "url": "https://www.cleanenergywire.org/factsheets/road-freight-emissions-germany", "token_count": 1767, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1376953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e57d8062-21a9-410d-8da2-5350f5c04cfe>" }
What the experts say

In Germany, "many technologies for increasing efficiency have hardly been used so far, although they are available on the market and offer comparatively cost-effective mitigation options." – Federal Environment Agency (UBA)

"[We] believe that the [emissions] reduction levels proposed by the [European] Commission for 2025 and 2030 are far too aggressive [...] the 2025 ambition level is too stringent given the short lead-time for this first-ever CO2 target." – European Automobile Manufacturers' Association (ACEA)

"Trucks were responsible for nearly 40 percent of the growth in global oil demand since 2000; they are the fastest growing source of oil demand, in particular for diesel. Without further policy efforts, trucks will account for 40 percent of the oil demand growth, and 15 percent of the increase in global energy-sector CO2 emissions, to 2050." – International Energy Agency

Over 95 percent of CO2 emissions in Germany's transport sector are caused by road traffic, and about one third of this is caused by long and short-distance road haulage, according to the state-funded German traffic research portal FIS. While the country's total emissions sank by nearly one third since 1990, carbon volumes have largely stagnated in the transport sector over this period and even increased slightly in recent years due to the reduced use of biofuels and the sustained boom of the German economy. Despite the introduction of a nationwide road toll for freight trucks in 2005, traffic volume growth has even outpaced economic growth since then. Temporary reductions in CO2 levels have been made possible by greater engine efficiency, but were also caused by substituting petrol with diesel and more frequent refuelling stops at near-border service stations abroad, where fuel often is cheaper than in Germany, the FIS says.

However, while the specific emissions caused by road haulage are greater than those generated by long-distance inland shipping or railroad carriage, the difficulties of installing adequate infrastructure for the latter two make trucks and other freight vehicles the most energy-efficient method for bulk transport over shorter distribution distances, according to the Heidelberg Institute for Energy and Environmental Studies (IFEU). The country's cargo logistics industry is dominated by trucking, which moves 73 percent of all commodities traded domestically and internationally, followed by railroad transport with nearly 18 percent and inland shipping with just over 9 percent. In all of Europe, freight-carrying vehicles accounted for around a third of road transport CO2 emissions in 2016. This figure could soon hit 40 percent, as passenger car emissions are projected to gradually sink – while trucking emissions are not.

Over 40 percent of the trucks on German roads and highways are registered abroad, the Federal Office for Goods Transport (BAG) says. Most of these are eastern European freight companies, which often cater for German customers. Despite the large number of international freight forwarding companies active in Germany, the country's domestic fleets of lorries, light-weight trucks, tanker trucks and semi-trailers lead Europe in terms of volume, carrying 310,142 million tonne-kilometres (tkm) annually, mostly within the country. In recent years, their freight volume, emissions and profits have shot upward, reaching a temporary high in 2016. Currently, trucks are not subject to CO2 emission or fuel-consumption standards either in Germany or at the EU level.
For this reason, unlike automobiles, trucks have not seen significant advances in e-mobility and fuel efficiency. According to a report carried by the energy policy newsletter Tagesspiegel Background, German EU commissioner Günther Oettinger has recently sought to water down a joint initiative by several member states to tighten European emissions limits for trucks – the first of its kind. Under this scheme, new lorries – the largest first, then smaller ones – would have to cut carbon emissions by 15 percent by 2025, relative to 2019 levels, and by a further 15 percent by 2030. The same legislation would incentivise the use of zero- and low-emission heavy-duty freight vehicles. According to EU experts, this could reduce CO2 emissions by 54 million tonnes (equal to Sweden's annual emissions) between 2020 and 2030. Moreover, it would save trucking firms plenty at the pump. Of course, in the aftermath of the 'dieselgate' scandal, measuring compliance with emissions regulations is critical. Authorities would have to monitor "real-world fuel consumption data" based on mandatory, standardised fuel consumption meters, and impose penalties for non-compliance – a necessity that remains difficult to implement even for passenger vehicles. While the EU targets may be called too ambitious by carmakers, an alliance of 36 freight forwarders, associations and transport companies – including DB Schenker, Siemens and Tchibo – together with five countries (Germany not among them), had lobbied for a 24 percent emissions reduction by 2025. Otherwise, they insisted, the transport sector won't hit the targets specified in the Paris accord.

There is a whole range of options suitable for bringing down carbon emissions in road freight transport, from the introduction of hybrid trolley trucks or exhaust pollutant limits to green freight programmes and fuel-efficiency labelling. Studies show that trucks and semis can gain significantly in fuel efficiency through improved engine technology and aerodynamics, which could reduce the carbon footprint of, for example, tractor-trailers by 40 percent. The electrification of freight logistics could help cut on-road freight's emissions by over a quarter by 2050.

German luxury carmaker Daimler, which is also the biggest producer of trucks in the country, has recently displayed a range of options it has researched for making its vehicle engines more sustainable and meeting the EU's 2030 vehicle emissions limits. The company says hybrid engines are no solution for heavy-duty vehicles, as they cannot be used in an economically viable way, and argues that traditional diesel engines still have an untapped emissions reduction potential of 10 to 20 percent. According to Daimler researchers, diesel trucks remain far superior to electric trucks when it comes to energy efficiency. They argue that a truck filled with 224 litres of diesel fuel would have the same range as an e-truck with an eight-tonne battery.

A possible remedy for the difficulty of installing economically viable batteries in heavy vehicles could be Germany's first e-highway, a concept currently being tested in the federal state of Hesse and financed in the framework of the country's 2020 Climate Action Programme. Trucks use a catenary system (overhead electrical wires) to charge their batteries as they drive along the five-kilometre stretch of the highway.
The trial course is designed and operated by industry heavyweight Siemens, the Technical University of Darmstadt, and five haulage companies - and could be copied countrywide if successful. There are also strategies and pilot projects aimed at promoting the switch to alternative fuels (hydrogen, liquefied natural gas, synthetic fuels, renewable methane), redesigning the road toll system to benefit zero or low-carbon trucks, or increasing the diesel tax. In June 2018, the German government set up a support programme for freight vehicles that run on more efficient and low-carbon engines. Freight companies receive grant payments of up to 40 percent of the additional cost for modern cargo vehicles or lump-sums for specific technologies, for example 40,000 euros for e-trucks with a weight of more than 12 tonnes. According to Germany’s Federal Environment Agency (UBA), a reform of the toll system for heavy-goods vehicles is necessary to reveal the “true cost” of road haulage. Possible reforms include an extension to vehicles below 7.5 tonnes of weight; including emissions levels in the tariff system; or staggering road pricing according to efficiency criteria. If trucks and trailers had to pay punitive tolls and carbon costs according to the ‘polluter pays’ principle, their competitive advantages over rail would diminish, the UBA says. For the “last mile” of commodity transports in inner cities, the expanded use of cargo bikes and pedelecs (pedal electric cycles) is an obvious possibility that is already being tested on a large scale. Projects like logSPAZE by the Fraunhofer IAO in Stuttgart test alternative urban delivery concepts with pedal-powered vehicles in cooperation with local companies to find out which solutions are most suitable to relax traffic levels and reduce emissions and air pollution in densely populated areas.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8595329523086548, "language": "en", "url": "https://www.es-partnership.org/swg-10-es-in-the-circular-bio-economy/", "token_count": 900, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.10009765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4ab9a199-92a6-43ad-94e7-318255277939>" }
1. Introduction and Objectives

In the transition towards a low fossil carbon (and eventually decarbonized) economy, scarcity of resources represents a global societal challenge. It underpins the need for circular (self-sustaining), nature-based resource management systems that supply human needs while ensuring ecosystem health and preserving production systems. As resources get scarcer, circulating them within the economy becomes increasingly valuable. This also applies to carbon (scarce in a low-fossil economy): given the urgency of stabilizing the global climate, re-circulating carbon, along with inducing negative emissions, is a well-acknowledged necessity.

Parallel to the increasing scarcity of resources, there are increasing global demands for clean water, soil and air, arable land, healthy food and sustainable consumer products, among others. These demands all put pressure on the boundaries of our finite planet. This SWG takes a systemic approach to identify regulatory, social and economic barriers to and enablers of a transition towards an environmentally sustainable low (fossil) carbon economy. It focuses on the circular economy and bioeconomy. We model current and future ecosystem health and services, e.g. emissions capture, nutrient cycling and climate change mitigation strategies. We also model circular resource management systems, upcycled biowaste value chains and high-value biorefinery systems. Monetary and non-monetary valorization of environmental restoration and climate mitigation services are proposed as policy measures and investment decision-support tools to boost a circular regenerative bioeconomy.

Aim: Support the transition towards an environmentally sustainable low fossil carbon economy, with a focus on the circular economy and bioeconomy

Objectives:

- Develop a global database documenting several circular resource management systems and bioeconomy conversion pathways.
- Quantifying ecosystem-health-restoring resource flows, e.g. nutrient recycling and carbon capture and reuse
- Assessing the performance of green engineering and waste-based biorefineries as instruments for enhancing ecosystem health and services
- Identifying, in the light of the Sustainable Development Goals, the key parameters characterizing the performance of circular regenerative (bio-)economic value chains
- Developing monetary and non-monetary valorization approaches quantifying environmental restoration and climate mitigation services from urban-industrial production systems
- Proposing policy measures and monitoring frameworks to support ecosystem health and to ensure the preservation of ecosystem services for future generations
- Developing integrated assessment tools quantifying the preservation of ecosystem services from circular resource management and production systems
- Developing decision support tools for cross-sectoral resource conservation and for establishing closed-loop short value chains at the local community, urban-industrial and inter-industrial levels.
- Teaching theories, principles and assessment tools addressing ES in the circular (bio-)economy;
- Disseminating and communicating research-based knowledge on ecosystem-health- and service-preserving resource flows within a circular (bio-)economy
2. Lead Team & Members

- Professor Marianne Thomsen, Head of Research Unit – EcoIndustrial Systems Analysis, Department of Environmental Science, Platform leader at the Aarhus Center for Circular Bioeconomy, Aarhus University, Denmark
- Associate Professor/Principal Investigator Lorie Hamelin, Engineering of Biological Systems & Processes Department, Federal University of Toulouse, France

3. Activities & Outputs

- PhD summer schools on the following two subjects: Ecosystem health-preserving resource flows within and between human and natural systems & The role of the circular economy and bioeconomy in the transition towards an environmentally sustainable low fossil carbon economy
- Stakeholder workshops and conferences aimed at identifying enablers of closing the natural resource loops while eliminating environment & health externalities
- Calling for abstract submissions for the 2019 Global ESP Conference, S10 – Circular Bioeconomy – a solution to the global challenges of climate change, decreasing natural resources and environmental degradation?
- Calling for Special Issue proposals for 'Ecosystem services in a bio- and circular economy'. Deadline for manuscript submissions is 31 October 2019. Read more here

ESP conference outputs

- 2019 ESP10 World Conference: Circular Bioeconomy – a solution to the global challenges of climate change, decreasing natural resources and environmental degradation? Book of abstracts, Presentations
- 2018 Europe ESP Conference: Circular (bio-)economy – the solution to the global challenges of climate change, decreasing natural resources and environmental degradation? Book of abstracts
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9506809711456299, "language": "en", "url": "https://www.icarda.org/media/news/science-offer-strategic-solutions-indias-pulses-dilemma", "token_count": 1111, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.12451171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:54e7cdf6-9dcd-4f6d-bbcd-535a9c806b99>" }
Science to Offer Strategic Solutions to India's Pulses Dilemma

India is the world's largest producer, importer, and consumer of pulses, but the country is facing a crisis in pulse availability. The retail prices of pulses have soared to an all-time high in India over the past year, causing panic among policy-makers, traders, and retailers. Most importantly, pulses have become hard to afford for the hundreds of millions of Indians who rely on them as part of their staple diet and as their main, often only, source of protein. The rising prices have been attributed to the inability to keep up with demand because of recent drops in production due to a lack of rainfall, expensive pulse imports, and farmers focusing too much on growing wheat and rice.

Two key pulses, peas and lentils, together make up India's staple food 'dal', which most people eat twice daily. With a large population of 360 million vegetarians and high poverty levels, most Indians either don't consume or can't afford meat. Pulses provide twice as much protein on average (23%) as wheat and three times as much as rice. From July 2014 to July 2015, prices of the major pulses increased anywhere from 12 to 50 percent. 'Tur daal', a popular lentil typically eaten daily with rice, particularly in lower-income and poor households, has doubled in price in just over a year. In many Indian cities, one kilogram of pulses is now often more expensive than a kilogram of chicken or a dozen eggs.

According to the Indian Pulses and Grains Association, farmers in India see less than half the pulse yield per hectare of farmers in advanced countries. This is one of the reasons that India is significantly reliant on pulse imports. Imports reached 4.4 million tonnes for the 2014-2015 cropping season, at a cost in the range of US$2.8 billion; this constituted over 20% of all pulses consumed. The rate at which imports are rising is a cause for alarm, as they rose by about 29 percent over the 2013-2014 season. India gets between a quarter and a third of all those pulses from Canada. In the meantime, to meet the demand, Canada has ramped up production and is transporting a majority of its pulses from Saskatchewan all the way to ports in Vancouver, British Columbia, to be loaded onto ships headed to India. All in all, it's a three-to-four-month process that is indicative of India's waning self-sustainability.

On February 24, ICARDA is launching its Global Pulses Research Platform in partnership with the Indian Council of Agricultural Research (ICAR) and the Government of India. Implemented under the aegis of the National Food Security Mission, the platform aims to enhance food and nutritional security and improve the livelihoods of farmers. A key objective of the platform is to promote and expand the cropping of lentils in rice fallows, an approach that intensifies crop production while enabling sustainable cropping systems. Approximately 11.7 million hectares were left fallow for a period of time in 2015, constituting over a fourth of India's total rice area. Researchers initially estimate that pulses can be grown on a minimum of three million of those hectares. The crop improvement research program at ICARDA has developed short-duration, high-yielding, disease-resistant lentil varieties that mature in 100 days or less and can be planted and harvested between two rice cropping seasons.
Pulses are the only type of crop that replenishes soils with nitrogen, and they will therefore help reinvigorate the soils to benefit the next rice planting season. These crops will provide farmers additional income, supply the local community with more food diversity, and contribute to a much-needed increase in pulse production. The improved lentils are also bio-fortified (bred to contain more micro-nutrients than the local varieties), another strategic approach of ICARDA's research to address the widespread micronutrient deficiency in India and the larger South Asia region. The condition known as "hidden hunger" leads to stunted growth in children, a major challenge in the region. The improved varieties offer 25% more iron and 60% more zinc than traditional varieties used by farmers. Additionally, the platform in India will be working with national partners in Nepal and Bangladesh so that the whole South Asia region benefits from the platform's research outputs, innovations and capacity-building activities.

Throughout South Asia the pulse situation is dire, but solutions are taking hold. In nearby Bangladesh the rice-pulses approach has already proven successful: lentil cropping has spread to more than 85 percent of rice fallows, now bringing in an additional annual income of US$26.6 million to farmers.

The platform launch is in step with the UN's 2016 International Year of Pulses, which aims to promote pulse production and consumption the world over because of their unique mix of benefits for health, nutrition, sustainability, and food security. Compared to other sources of protein like meat, milk, and nuts, pulses have excellent water efficiency, low greenhouse gas emissions, and an abundance of micro-nutrients. More on ICARDA's global pulses research and projects is available on the ICARDA website.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.954656720161438, "language": "en", "url": "https://www.schwab.com/resource-center/insights/content/is-your-teen-financially-fit", "token_count": 1344, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.003570556640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:834a7224-506b-49fe-ae3e-b5c252a117e0>" }
- Kids want to learn about money management, but only 17 states require a financial literacy class to graduate.
- Parents can fill the gap by giving teens real-life money experiences and responsibilities.
- Being a good role model and sharing your own money management practices is one of the best ways to prepare your teen for life after graduation.

As your child heads back to school, you're probably looking at class choices and schedules, maybe even beginning a college search. But unless you're lucky enough to live in one of the 17 states that require completion of a financial education course prior to high school graduation, there may be one area that's sorely lacking in your teen's education: money management. That may not seem like the most exciting topic, but here's a stat that should get everyone's attention: teens spend nearly $260 billion annually¹! With so little financial education available, that means teens do a lot of spending (and probably borrowing) without understanding the basics of how to manage money or the consequences of mismanaging it.

The good news: kids want to learn
The good news is that, according to a 2011 Schwab Teens & Money Survey, 86 percent of teens would rather learn about money management in class before making mistakes in the real world. Even better news is that, based on results from the Money Matters: Make It Count program offered through Boys & Girls Clubs of America (BGCA), kids show dramatic improvement in their understanding of personal finance concepts after participating in the program. And most important of all, a study of more than 1,600 teens who completed the program showed that 17 percent switched from being spenders to being savers, and 23 percent were sticking to a budget. This corroborates my own personal experience working with Money Matters and BGCA. Now in its 13th year, with nearly three-quarters of a million teens having completed the program, Money Matters has shown some remarkable results. Part of those results are derived from BGCA's efforts to make the program relevant to teens through a wide range of experiential activities and content. I'm particularly excited about two new elements of the program: Reality Store, a real-world interactive workshop, and a digital game called $ky, where kids discover first-hand how education, career, family and spending can impact their financial futures. There's nothing like experiential activities to give kids a taste of financial reality. But regardless of whether a class or program is available, I believe it's up to parents to fill in the gaps in a young person's financial education. Here are some ideas on how to introduce your own kids to today's financial realities.

Five practical ways to get started
As parents, we can always do a better job of communicating conceptual information, but real learning comes from doing. So instead of just talking about money, try teaching your kids through hands-on money experiences and responsibilities.

1. Use daily opportunities as "teachable moments." For instance, when you're paying bills, let your teen see what it costs to run a household, and how bills get paid and a checkbook balanced. If you pay your bills online, show them how it's done, and how online bill-pay is linked to your checking account.

2. Pay an allowance only once a month. To learn how to budget and make their money last, teens have to have their own money, whether it's through an allowance or earnings from a part-time job.
Help them understand the difference between needs and wants and how to budget for both. Make them responsible for a certain share of their own expenses (for example, clothing, a new electronic gadget, or extra-curricular activities). Most of all, let them learn from their mistakes. Don't immediately come to the rescue if they come up short.

3. Open a checking account. Show your teen how to use a check register and review monthly statements online or on paper. Guide them in using a debit card wisely and keeping track of debit expenditures. Show them how to use an ATM for deposits and withdrawals.

4. Help make savings a habit. Help your teen open a savings account. Talk about goals and setting savings priorities. Suggest saving a percentage of any earnings or gifts toward a goal. If your teen has a job, have them set up a direct deposit to their checking account that links to their savings account. Consider matching a portion of their savings to further motivate them to put money away.

5. Show them your 401(k) or IRA statement. Explain what a 401(k) plan is and how it works. Kids aren't thinking about retirement yet, but they should be once they get their first job. Introduce the idea of saving and investing 10 percent of their salary toward retirement by opening up a Roth IRA from the get-go. A savings calculator is a great way to demonstrate how painless it is to start accumulating a really nice nest egg if you start early (see the sketch after this article).

Other topics teens want to know about
With a solid foundation in the basics, you can add more sophisticated topics. The Teens & Money Survey found that kids are particularly interested in things such as the kinds of insurance they'll need when they're on their own, how to invest to make money grow, how income taxes work, and how to establish good credit. It's definitely encouraging to see that the interest is there. Now it's up to us to help turn that interest into practical experience.

Be a good role model
Our kids are watching us and looking to us for guidance. The more we can share with them about our own money management practices, from household budgeting to saving to investing, the better for everyone. So be open and talk freely about how you handle your money and teach them by your good example. And if you feel you need a bit of sharpening up yourself in any particular areas of personal finance, visit SchwabMoneywise.com, which offers tools and resources for anyone who wants a refresher in the basics of personal finance, as well as those who are just starting out. But don't let the schools off the hook. If a financial literacy class isn't on the schedule at your teen's school, you might petition your school district to have one added to the curriculum. To me, money management is a critical life skill for everyone. And the sooner it's learned, the better.

Have a personal finance question? Email us at [email protected]. Carrie cannot respond to questions directly, but your topic may be considered for a future article. For Schwab account questions and general inquiries contact Schwab.
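To illustrate the savings-calculator point in tip 5 above, here is a minimal sketch of the compound-growth arithmetic such calculators perform. The $100 monthly deposit, 7% annual return and retirement age of 65 are invented example inputs, not Schwab figures.

```python
# Illustrative compound-growth sketch: future value of steady monthly saving.
# The $100/month, 7% annual return and age-65 horizon are assumed example
# inputs, not figures from the article.

def future_value(monthly_deposit: float, annual_rate: float, years: int) -> float:
    """Future value of an annuity: deposits made and compounded monthly."""
    r = annual_rate / 12   # monthly rate
    n = years * 12         # number of monthly deposits
    return monthly_deposit * (((1 + r) ** n - 1) / r)

if __name__ == "__main__":
    for start_age in (16, 25, 35):
        years = 65 - start_age
        fv = future_value(100.0, 0.07, years)
        print(f"Start at {start_age}, save $100/month at 7%: ${fv:,.0f} by 65")
```

Running the loop for different starting ages makes the "start early" point concrete: each decade of delay roughly halves the final balance under these assumed inputs.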
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9419232606887817, "language": "en", "url": "https://www.womenfitnessmag.com/7-important-money-lessons-to-teach-your-kids/", "token_count": 812, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f1c5dd22-5c51-4ff9-b661-d6193c4dcf0e>" }
7 Important Money Lessons to Teach Your Kids: Kids are always looking for new ways to have fun. They create endless imaginary worlds with their toys and cruise around on their bikes while they explore the world. What they interact with inspires them to learn, which is why kids become curious about money at an early age. They watch their parents spend money on things they love, like chocolate milk or mac and cheese. It makes them wonder why money is so important and how they can use it to get the next big toy on their wish list. Start healthy financial habits by reading about these seven crucial money lessons to teach your kids. These lessons cover everything kids need to understand, and how to introduce them to money so they know how to handle it later on.

Money Doesn't Grow on Trees
When your paycheck lands in your bank account, your kids don't see it. They also don't understand how hard you work at your job to earn that money. All they see is money magically appearing from your wallet or purse when it's time to buy something. Make sure to explain that money doesn't grow on trees. You get paid when you work hard and earn your paycheck. A great way to put this lesson into action is to start an allowance. After your kids complete their work for the week, they'll receive what they've earned and value money differently.

Money Has Different Values
After they understand that money has value and that's why you can buy stuff with it, kids should learn about how each coin and bill are different. Use toy money to operate a fake store in your living room, where they buy toys or play an educational game to make learning about money more fun.

Saving Is Important
If you spend all your money, you won't have enough left over for your future. Show your kids that saving is essential by painting piggy banks with them or creating savings goals that take time to get them what they want.

Give When You Can
Kids have a limited understanding of how the world works, so introduce them to the idea of giving to charities. They'll learn how some people struggle more than others and feel the joy that comes from helping those in need. If they start young, they'll continue giving later on.

Budget First, Spend Later
Older kids in middle or high school should learn how to budget before they get their first job. One way to do this is to explain needs versus wants and teach your kids how to prepare for both with the responsibilities they have now.

Credit Isn't Free
Teenagers might love the idea of a credit card because of the instant gratification, but that's one reason so many young people end up in debt. Explain how credit isn't free by practicing tiny loans with interest at home before they can sign up for a card.

Money Isn't Everything
It's easy to focus on earning, saving and spending money, so remind your kids that money isn't everything. It's a tool to make life better, but it shouldn't be their primary focus.

Start While They're Young
Think about how your kids enjoy learning. Are they visual or interactive learners? Craft lessons and make them fun, so your family loves the activities and takes the lessons to heart.
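As a rough sketch of the "tiny loans with interest" exercise suggested under "Credit Isn't Free", the following shows the simple-interest arithmetic a family could walk through together. The $20 principal, 5% monthly rate and three-month term are invented example numbers.

```python
# Simple-interest sketch for a practice loan at home.
# The $20 principal, 5% monthly rate and 3-month term are invented examples.

def simple_interest_due(principal: float, monthly_rate: float, months: int) -> float:
    """Total repayment under simple (non-compounding) interest."""
    return principal * (1 + monthly_rate * months)

owed = simple_interest_due(20.0, 0.05, 3)
print(f"Borrow $20 at 5% per month for 3 months -> repay ${owed:.2f}")
# Prints: Borrow $20 at 5% per month for 3 months -> repay $23.00
```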
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9678970575332642, "language": "en", "url": "http://www.newsminer.com/plan-would-convert-north-slope-natural-gas-to-electricity/article_9d8bd4df-34d1-5481-9562-1f7289f21813.html?iframe=true&width=100%25&height=102%25", "token_count": 1150, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.322265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2c50f05a-d8db-4060-a59d-fac7b7777014>" }
ANCHORAGE, Alaska — An idea studied years ago by an oil company for producing vast stores of North Slope natural gas without building a giant pipeline has emerged again, this time before state legislators trying to find relief for residents crushed by heating and electricity costs. Why not convert the natural gas into electricity and transmit it around the state by wires, including to rural and Interior Alaska? Officials with a rural power utility and an Alaska Native corporation that includes a former subsidiary of Arco Alaska have been pitching the proposal for the last month. The effort has roots in a recent Commonwealth North study called "Energy for a Sustainable Alaska: The Rural Conundrum." On Tuesday, the utility and the Native corporation presented the idea to a small group of legislators in Anchorage as a cost-effective answer. The talk drew a sizable crowd that included power company representatives from Anchorage and the Mat-Su. "We haven't considered electric heat as a viable energy source in this state and that's where we've missed the boat," Meera Kohler, president and chief executive of the Alaska Village Electric Cooperative, told lawmakers. "We've never really had an all-Alaska solution." In rural Alaska, almost 80 percent of the energy used to heat homes comes from burning diesel fuel, which is far more expensive than natural gas, she said. The average cost for a gallon of diesel fuel bought by the nonprofit cooperative last year was $4.27, triple what it was in 2002, she said. The retail price in rural villages is $6 to $10 a gallon, she said. For the poorest rural residents, home energy costs absorbed almost half their household income in 2008, according to a study by the Institute of Social and Economic Research cited by Kohler. In Anchorage, the lowest income group spent around 9 percent on home energy, while in other large communities and road-system towns, the figure was about 18 percent. And rural Alaska residents use less than half as much energy as those whose power comes from natural gas or hydro sources, she said. A solution could come from natural gas, but not through a pipeline, legislators were told. A scientist who once worked for the Arco Alaska subsidiary that studied the idea is Robert Jacobsen, an astrophysicist who now is vice president of science and technology for Marsh Creek, an Alaska Native joint venture. Arco was one of Alaska's major oil producers until it was sold in 2000 to what is now ConocoPhillips. Jacobsen said that decades ago, Arco studied whether electricity produced from a natural-gas-fueled power plant on the North Slope could be transmitted south and sold at competitive rates. The project made sense financially, he said. But politically, the state of Alaska was fixated on a natural gas pipeline, and Arco didn't want to get crossways with the government, he said. "So we canned it," he told the Anchorage Daily News (http://is.gd/y8oSYi). While the Parnell administration is still pushing construction of a pipeline to bring North Slope gas to commercial markets, Jacobsen said the window for that opportunity may have closed, with cheap natural gas coming from shale deposits in the Lower 48 and Asian markets likely to develop their own gas fields. The system he is promoting would involve building a natural-gas-fueled North Slope power plant and a high-voltage direct current (HVDC) power line, plus converter stations to transform the DC power to AC, or alternating current, so it can be used by consumers.
A power plant designed for the Arctic, converter stations in Fairbanks and Anchorage, and an 860-mile high-voltage DC power line would cost just under $4 billion, he said. For power lines of at least 300 miles in length, direct current transmission is an economical and stable method, he said. And technological advances since the late 1990s make an electricity distribution system fed off a big HVDC line affordable, he said. A quick analysis by his company, Marsh Creek, and ABB, a global company that works in power and automation technology, indicated a system could deliver electricity from the North Slope to the Railbelt for 9.3 cents a kilowatt-hour, much less than what customers are now paying in Fairbanks. At that price, electric heat becomes cost-effective, Jacobsen said. Similar costs for bringing the electricity to rural communities weren't included. Kohler, the utility cooperative executive, said backers probably will request $2 million to $3 million from the Legislature for a detailed study. The reborn idea has emerged too quickly to have generated much reaction. Chugach Electric Association is still analyzing the concept. A utility trade group doesn't yet have a position. A ConocoPhillips spokeswoman, Natalie Lowman, said the oil producer, which holds much of the North Slope gas reserves, couldn't speculate on whether such a project was viable. "However, we continue to search for commercially viable ways to get Alaska North Slope natural gas to new markets," she said in an email Tuesday. Sen. Bill Wielechowski, D-Anchorage, heard a presentation to a Commonwealth North group a month ago and was so intrigued that he arranged for Tuesday's session. He and Sen. Joe Thomas, D-Fairbanks, said they'd like to see a study. The state needs to find a way to get cheaper power to rural Alaska and the Interior, Wielechowski said. "Those communities are suffering and they are not going to be there for much longer if we continue to go the way we are going."
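For context on the 9.3-cents-per-kilowatt-hour figure above, the sketch below shows only the general shape of a delivered-cost calculation; it is not the Marsh Creek/ABB analysis. Apart from the $4 billion capital cost quoted in the article, every input (plant capacity, capacity factor, amortization term, fuel and operations adder) is an assumed placeholder, and financing costs, line losses and taxes are ignored.

```python
# Back-of-envelope delivered-cost sketch for a gas-to-wires project.
# Only the $4 billion capital cost comes from the article; the plant
# capacity, capacity factor, amortization period and operating adder are
# hypothetical placeholders. Financing, losses and taxes are ignored.

CAPITAL_COST = 4.0e9        # USD: power plant + HVDC line + converters (article)
CAPACITY_MW = 1000          # assumed plant size
CAPACITY_FACTOR = 0.85      # assumed fraction of hours at full output
AMORTIZATION_YEARS = 30     # assumed straight-line capital recovery
FUEL_AND_OM_PER_KWH = 0.04  # assumed fuel + operations adder, USD/kWh

kwh_per_year = CAPACITY_MW * 1000 * 8760 * CAPACITY_FACTOR
capital_per_kwh = CAPITAL_COST / (AMORTIZATION_YEARS * kwh_per_year)
total_per_kwh = capital_per_kwh + FUEL_AND_OM_PER_KWH

print(f"Capital component: {capital_per_kwh * 100:.1f} cents/kWh")
print(f"Illustrative delivered cost: {total_per_kwh * 100:.1f} cents/kWh")
```

Under these made-up inputs the capital component works out to roughly 2 cents per kilowatt-hour, which shows why the quoted 9.3-cent estimate is dominated by operating and financing assumptions rather than the line itself.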
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9750593304634094, "language": "en", "url": "http://www.perseveranceco.com/blog/everyone-is-not-college-material", "token_count": 650, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.12353515625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0e2fc34f-591b-4aea-b0f3-51e257731e3a>" }
Today, almost two-thirds of high school students are planning to attend a four-year university right after they graduate. This sounds like a positive change in education, but is it really a good thing for the next generation of youth? Today's high school seniors face more anxiety about being accepted by the colleges of their choice, and constant pressure from their teachers and parents to truly evaluate whether college is the right choice for them. A university education is often oversold to high schoolers, yet it underdelivers in teaching these young minds the skills they need for an educated workforce. Forty-six percent of college graduates are in jobs that don't even require degrees. They have the knowledge, but lack the experience to land jobs that would justify their massive student debts. Yet most of these grads are not qualified to enter higher-paying jobs, such as construction, mechanical work, or other high-demand vocations. Many times high school seniors are pressured into pursuing college degrees when they could spend much less time and money at a vocational school. A certificate program at a trade school costs an average of $30,000 to $40,000 for two years, whereas a bachelor's degree at a public college typically costs close to $100,000 over four years. Oftentimes, college graduates express their regrets when they have accumulated so much debt, yet still cannot find jobs in their field that pay more than their pre-degree earnings. Many studies show that those who do earn college degrees often earn more than their peers who have no other education. But the likelihood of students finishing their degrees is low: approximately 46%. And for children from low-income families (who typically benefit much more from earning a degree), the graduation rate is much smaller. According to the nonprofit organization Complete College America, less than 10 percent of low-income students complete a two-year degree within three years. Many drop out or do not have the financial support to finish their degrees. College tuition has one of the highest inflation rates. The cost of a four-year degree has risen by over 1100% in the past 30 years. If this were true of other goods or services, we would be paying approximately $25 for a gallon of gas. Even though college tuition has gone up, the value of those degrees has actually gone down. Because more students are going to college than 30 years ago, the job market is saturated with college graduates who have no other distinguishing qualifications. This is where real-world experience and vocational training become more valuable to someone entering the workforce than a degree. Those four years can be spent earning valuable experience, or attending a hands-on technical school that will funnel graduates directly into high-paying jobs. By 2020, a majority of jobs will not require a college degree at all. With this pursuit of a college-educated society, we will soon be lacking skilled electricians, construction workers, mechanics, IT technicians, and other skilled workers and craftspeople. Those wanting to "get ahead of the game" should seriously consider vocational training as an alternative to the often costly and risky business of college education.
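A quick sketch of the cost arithmetic in the essay above; the tuition figures are the essay's rounded numbers, and the gas-price base is an assumed illustration of what an 1100% increase implies.

```python
# Cost comparison using the essay's rounded figures.
trade_school_total = 40_000   # upper-end two-year certificate estimate
college_total = 100_000       # typical four-year public degree estimate
print(f"Four-year degree costs about {college_total / trade_school_total:.1f}x "
      f"a two-year certificate")

# An 1100% increase means the new price is the old price plus 11x the old
# price, i.e. 12x overall. Applied to an assumed ~$2 gallon of gas:
base_gas = 2.08               # assumed illustrative base price per gallon
print(f"${base_gas:.2f}/gal after an 1100% rise: ${base_gas * 12:.2f}/gal")
```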
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9227395057678223, "language": "en", "url": "https://essayhub.net/essays/international-financial-accounting-standards-assignment", "token_count": 3014, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:61cdf596-90e5-47ee-9ced-5cbec21cf72f>" }
The conceptual framework refers to the set of ideas and objectives that help to establish the rules and regulations for accounting, and it has great significance in the process of accounting all over the world. The role, benefits, problems and criticisms of the current conceptual framework are discussed below.

Role: The conceptual framework has a significant role to play in the process of accounting. One of its major roles is to assist the International Accounting Standards Board (IASB) in developing future International Financial Reporting Standards (IFRSs). Beyond setting new rules, it helps the IASB review the existing IFRSs. The conceptual framework plays an important role in harmonizing the existing accounting rules and regulations by reducing the number of alternative accounting treatments, and it helps to set a universal accounting standard. It also assists national standard-setting bodies in setting accounting standards in their specific countries (Weil, Schipper and Francis 2013). Another major role of the conceptual framework is to help address new accounting issues that have not yet been recognized. The conceptual framework also plays an important part in the auditing process: it assists auditors all over the world in forming opinions on whether the financial statements of audited organizations comply with the accounting rules and regulations of IFRS. The users of financial statements, such as investors, stakeholders and others, use the conceptual framework to interpret various aspects of the financial statements. Last but not least, the conceptual framework provides valuable information to the members of the IASB for developing new accounting rules and regulations. These are the most important roles of the conceptual framework (Macve 2015).

Benefits: As mentioned above, the conceptual framework consists of theoretical principles that assist in the processes of financial accounting and financial reporting, and it offers some major benefits. The main benefit is that it helps to evaluate and clarify the various concepts used in accounting (Smieliauskas 2016); several accounting concepts are difficult to explain without its help. Another benefit is that it assists international as well as national standard setters in setting accounting rules and regulations on a consistent basis. The conceptual framework also assists auditors, users of financial statements and preparers of financial statements in understanding the different approaches, nature and functions of accounting and financial information. These are the main benefits of the conceptual framework (Henderson et al. 2015).

Problems and Criticisms: One of the major problems of the conceptual framework is whether the assets and liabilities of an organization should be measured at cost or at value. This is one of the areas in which the current conceptual framework is criticized. This measurement problem has created a conflict that raises questions about the measurement framework for assets and liabilities. Another problem is that many methods are available for valuing assets and liabilities, which has made the conflict more complex (Craig, Smieliauskas and Amernic 2014).

The general-purpose financial statement is an important financial document that assists creditors and investors in the decision-making process.
The components of general-purpose financial statements include the income statement, the balance sheet, the statement of owner's equity and the cash flow statement, among others. General-purpose financial statements should serve several major objectives. The most important objective is to provide valuable financial information about the reporting organization to its creditors, investors, lenders and others, assisting them in making important decisions. These include buying, selling, equity and investment decisions, among others. Another important objective should be to provide information about the reporting entity's economic resources and the claims against it (Nobes 2014). General-purpose financial statements also provide other important information that affects the economic resources and claims of the reporting entity. This information matters greatly to investors and others, as it allows them to judge the financial strengths and weaknesses of the organization; it also indicates the liquidity and solvency position of the reporting organization. A further objective of general-purpose financial statements should be to provide cash-flow-related information to investors and creditors, who can then assess the reporting entity's ability to generate future cash inflows. These should be the major objectives of general-purpose financial statements (ifrs.org 2017).

As discussed above, one of the most important objectives of general-purpose financial statements is to provide valuable and relevant information about the reporting company to investors, creditors, lenders and others. The main purpose of supplying this information is to make investors, creditors, lenders and others aware of the financial position of the organization, which helps them make effective decisions about sales, purchases, investments and more. The Exposure Draft places greater emphasis on this process so that more prominent and accurate information can be provided to investors and others. To achieve this, the Exposure Draft proposes to reintroduce the term 'stewardship' in a more prominent and effective way, stating that the term needs to be used consistently in order to implement accountability in the accounting process (van Mourik and Katsuo 2014). In this regard, it can be said that the board has taken the right decision in reintroducing 'stewardship'. This move has many positive impacts. First, it will help investors, creditors, lenders and others to make effective decisions such as buying and selling decisions, investment decisions and loan decisions. It will also help to implement accountability in the process of accounting, and it will resolve the issue of costs in the conceptual framework. All these reasons contribute to the acceptability of the board's tentative decision (ifrs.org 2017).

Prudence is considered one of the most important concepts in accounting. In the process of accounting, various uncertainties arise around specific factors such as the collection of doubtful debts and the probable useful life of plant and machinery. The concept of prudence says that an accountant needs to record liabilities at the time of their occurrence, but should record revenues only when they are realized. More precisely, prudence refers to a degree of caution that prevents the overstatement of assets and income and the understatement of liabilities and expenses (ifrs.org 2017).
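A small numeric illustration of the asymmetry built into prudence may help here; the receivable balance, loss estimate and pending gain below are invented for the example.

```python
# Illustration of prudence: a probable loss is recognized now, while an
# equally plausible unrealized gain is not booked until it is realized.
# All amounts and probabilities are invented for the example.

receivables = 100_000          # amounts owed by customers
doubtful_share = 0.05          # assumed share judged uncollectible

provision = receivables * doubtful_share
print(f"Record a doubtful-debt expense of {provision:,.0f} now")  # 5,000

pending_gain = 5_000           # e.g. a claim the entity may win later
print(f"Record the possible gain of {pending_gain:,.0f} "
      "only when it is actually realized")
```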
In the process of accounting, various uncertainties can be seen over some specific factors like the collection of doubtful debts, the probable useful life of plant and machinery and many others. The concept of prudence says that an accountant needs to record liabilities at the time of their occurrence, but he/she should record the revenues when they are realized. In a more precise note, prudence refers to some degree of cautious that prevents the overstatement of assets and incomes and understatement of liabilities and expenses (ifrs.org 2017). The above discussion shows the meaning of prudence. However, there is another important concept that is called Asymmetrical Prudent. There are many similarities between prudence and asymmetrical prudence. Asymmetrical prudence occurs when the accountant makes judgment about any asset or liability under the situation of uncertainty. However, there is a lot of differences between asymmetrical prudence and cautious prudence. The main function of asymmetrical prudence is to make the accounting treatment for incomes and liabilities for one period. It has been seen that the asymmetrical prudence leads to the understatement of income for one period and overstatement of incomes for the future periods. The main reason of this is that the prudence is that the accounting rules and regulations allow asymmetrical prudence to take into consideration the incomes that are assured to get after only period (Gl?ckner 2016). The board has taken a tentative decision to reintroduce the concept of prudence in order to bring more transparency in the accounting processes of the companies. Prudence has a lot of significance in recognizing the losses than the profits. It has been decided in the Exposure draft on 18 May 2016 that in the new conceptual framework, prudence needs to be described as exercise of caution at the time of passing the judgments about uncertainty. It has been decided by the board that there is no need to separate mention the extent of prudence it has already been included in the framework. In addition, the board has also decided that the staffs need understand how to acknowledge prudence in the conceptual framework (Schilder 2013). Based on the above discussion it can be said that the reintroduction of prudence in the conceptual framework is a good idea from the side of the board. This process will give more importance to the concept of prudence and it will bring more transparency in the determining the future loss and gains of the organizations. However, the treatment of prudence in the Exposure Draft is not adequate. There are some major facts about prudence that are not present in the Exposure Draft. Some aspects of prudence and asymmetric prudence are missing in the Exposure Draft. One of the major issues regarding the Exposure Draft is that IASB has acknowledged the concept of prudence in the accounting process but they have not included it in the Exposure Draft. In order to make the draft more accurate, all these missing facts need to be included in it (Marshall and Lennard 2016). Another major accounting concept is the concept of substance over form. As per this concept, all the financial transactions of an organization needs to be recorded in the financial statement rather than only presenting the legal form and documents of those transactions. This is done so that the true and fair view of the business entities can be recognized. 
According to this concept, the accountants of the organizations have a lot of responsibility at the time of accounting. It is their responsibility to derive all the accounting and financial transactions from the various documents of the organizations and record them in the financial statements of the organization. Another reason of this action is to use these financial documents as per future references. As per the example, IAS 17 Lease can be mentioned in this regard. As per this rule, any particular asset can be leased without transferring the legal documents to the lessee. However, in this process, the transaction of lease must be recorded in the financial documents of both the parties. The process of substance over form has a great significance in the accounting process as it helps in the true and fair representation of all accounting and financial information. In presence of substance over form, all the assets and liabilities of an organization show the trues value of them (Ahmed, Sabirzyanov and Rosman 2016). According to the proposed exposure draft, it has been decided to reintroduce the concept of substance over form. It has been said in the Exposure Draft that substance over form helps to document all the information about the financial activities rather than only presenting the legal documents of those transactions. In a more precise note, it can be said that the substance over form refers to the faithful representation of all accounting and financial information. On 18 May 2016, the board has decided that the proposed Exposure Draft will include all the substances of substance over form to make the financial statement transparent (Disle et al. 2016). As per the decision of the board, the proposed Exposure Draft will describe the uncertainties in measurement to implement faithful representation. On the other hand, the Exposure draft will also include the Basis of Conclusion I the revised conceptual framework. The board has also taken decision not tom include some factors in the conceptual framework like the brief explanation of existence, measurement and outcome uncertainties and others. After the above discussion, it can be said that the board has taken a correct step regarding substance over form as this process will be resulted in the fair and true representation of all the necessary accounting and financial information. The legal documents have a lot of importance, but the true and fair presentation of accounting and financial information is necessary for the success of the organizations (Walton 2015). Ahmed, M.U., Sabirzyanov, R. and Rosman, R., 2016. A critique on accounting for murabaha contract: a comparative analysis of IFRS and AAOIFI accounting standards. Journal of Islamic Accounting and Business Research, 7(3). Craig, R., Smieliauskas, W. and Amernic, J., 2014. Assessing Conformity with Generally Accepted Accounting Principles Using Expert Accounting Witness Evidence and the Conceptual Framework. Australian Accounting Review, 24(3), pp.200-206. Disle, C., P?rier, S., Bertrand, F., Gonthier-Besacier, N. and Protin, P., 2016. Business Model and Financial Reporting: How has the Concept been Integrated into the IFRS Framework?. Comptabilit?-Contr?le-Audit, 22(1), pp.85-119. Gl?ckner, A., 2016. New development: The protective role of conservatism in public sector accounting. Public Money & Management, 36(7), pp.527-530. Henderson, S., Peirson, G., Herbohn, K. and Howieson, B., 2015. Issues in financial accounting. Pearson Higher Education AU. ifrs.org. (2017). 
Conceptual Framework for Financial Reporting. [online] Available at: [Accessed 7 Jan. 2017].

ifrs.org. (2017). IASB Staff Paper November 2016: Effect of Board redeliberations on the Exposure Draft Conceptual Framework for Financial Reporting. [online] Available at: [Accessed 7 Jan. 2017].

ifrs.org. (2017). Staff Paper May 2014: REG IASB Meeting. [online] Available at: [Accessed 7 Jan. 2017].

Macve, R., 2015. A Conceptual Framework for Financial Accounting and Reporting: Vision, Tool, Or Threat?. Routledge.

Marshall, R. and Lennard, A., 2016. The reporting of income and expense and the choice of measurement bases. Accounting Horizons, 30(4), pp.499-510.

Nobes, C., 2014. International Classification of Financial Reporting 3e. Routledge.

Schilder, A., 2013. The evolving role of auditors and auditor reporting. In CReCER Conference, Colombia.

Smieliauskas, W., 2016. Auditability of Accounting Estimates and the IASB's Conceptual Framework Exposure Draft (2015).

van Mourik, C. and Katsuo, Y., 2014. The IASB and ASBJ conceptual frameworks: same objective, different financial performance concepts. Accounting Horizons, 29(1), pp.199-216.

Walton, P., 2015. IFRS in Europe–an observer's perspective of the next 10 years. Accounting in Europe, 12(2), pp.135-151.

Weil, R.L., Schipper, K. and Francis, J., 2013. Financial accounting: an introduction to concepts, methods and uses. Cengage Learning.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9471979737281799, "language": "en", "url": "https://www.carubecopper.com/mining-in-jamaica.html", "token_count": 839, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0947265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cda125c6-2a1c-4fc1-8cca-2f99f815c4b3>" }
Mining in Jamaica

The mining industry is well developed in Jamaica due to a thriving bauxite industry, and along with tourism it is one of the two largest contributors to the Jamaican tax base. In 1971, six companies began bauxite mining in Jamaica, and today bauxite mining, together with alumina refining, drives the island's economy. All potential infrastructure, environmental and social issues related to mining have been identified and addressed by the bauxite industry in concert with the government. Mining regulations are well developed, reasonable and updated in a systematic fashion. The government has a history of consulting the mining industry before implementing changes. A relatively small but expert legal community that specializes in mining laws and regulations is present. A strong base of Canadian expertise in the industry rests on over four decades of work in Jamaica by the Canadian International Development Agency (CIDA) and the Geological Survey of Canada (GSC). An electrical power industry is present in Jamaica and services some of the substantial power requirements of the alumina industry. Local infrastructure close to the Bellas Gate and Browns Hall licenses includes power supply from the Jamaica Public Service (JPS). Local rail connections are available to the deep-water container port at Kingston and to the bulk-handling port facility at Port Esquivel, 30 kilometres from Bellas Gate. Narrow but paved roads link the coastal area with the interior mountains. Skilled and unskilled labor is abundant in Jamaica. English is the national language, and the legal system is based on British parliamentary democracy and English common law. Import-export rules are not burdensome, as the administration is accustomed to the bauxite mining companies importing an abundance of equipment. All forms of heavy equipment are readily available, except for modern core drilling rigs. When required, the importation of drilling equipment is relatively simple and expedient. The corporate income tax rate for large companies is 30%, and new mining operations can negotiate a tax holiday to accelerate payback schedules. The general consumption tax (GCT) is a value-added tax with a standard rate of 16.5%. Payroll taxes contributed by the employer equal 12.25%. Mining in Jamaica falls under the authority of the Ministry of Science, Technology, Energy and Mining (MSTEM). The National Environment and Planning Agency (NEPA) has established requirements for bringing new developments, including mining operations, into existence. Regulations would require an Environmental Impact Study as part of any mining license application. In 1996, AusJam, a private Australian company, permitted a small open-pit gold mine and cyanide mill complex to the west of the Bellas Gate area, known as the Pennants operation. Pennants is an epithermal vein deposit discovered by BHP utilizing the CIDA database and was subsequently sold. The Environmental Impact Assessment for the operations was written by Golder Associates and noted that there were no critical risks to flora or fauna in that portion of the Central Inlier. Special Exclusive Prospecting Licenses (SEPLs) are granted for mineral exploration and development activities. These include, but are not limited to, drilling, geophysical and geochemical surveys, water rights, and access roads. Surface access notification to local landowners, with compensation for disturbance, is set forth in the SEPL rules.
No Environmental Impact Study (EIS) is required for exploration or development activities; however, archaeological sites, including old mining sites, must receive approval before disturbance or removal. An SEPL costs $JM600 per km² for the application and $JM400 per km² each year thereafter. Minimum expenditures are approximately $JM5,000 per square kilometre per year. Upon completion of exploration, a mining lease costing approximately $JM1,200 per km² is granted. The renewal cost for a lease is $JM600 per km² per year for 25 years. SEPLs are located by reference to a post or beacon, usually set at convenient map coordinates by GPS devices. Boundaries are located from map coordinates by GPS devices or topographical maps. Licenses require twice-annual prospecting reports to be filed. Metallic minerals in Jamaica are subject to a royalty of 5% of the commercial value of the metal produced.
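The licence arithmetic quoted above is simple enough to script. The sketch below just multiplies out the stated $JM rates; the 10 km² licence area and five-year exploration period are assumptions for illustration.

```python
# Rough SEPL cost model using the per-square-kilometre rates quoted above.
# The 10 km2 area and 5-year exploration period are assumed examples.

AREA_KM2 = 10
YEARS_EXPLORING = 5

application = 600 * AREA_KM2                       # $JM600/km2 application fee
annual_fees = 400 * AREA_KM2 * YEARS_EXPLORING     # $JM400/km2 per year
min_spend = 5_000 * AREA_KM2 * YEARS_EXPLORING     # ~$JM5,000/km2 per year minimum work
lease_grant = 1_200 * AREA_KM2                     # ~$JM1,200/km2 mining lease

total_jmd = application + annual_fees + min_spend + lease_grant
print(f"Illustrative licence-phase outlay: $JM{total_jmd:,}")
```

Under these assumed inputs the minimum work commitment, not the fees themselves, dominates the total, which matches the rule structure described above.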
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9483471512794495, "language": "en", "url": "https://www.cclaw.com/2009/05/20/the-evolving-effect-of-working-requirements-in-foreign-jurisdictions/", "token_count": 842, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.34765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ab7213bb-653d-4b4a-8156-cb9f4c473e4f>" }
The patent rules of many foreign jurisdictions contain working requirements. A working requirement is the requirement that, after a certain number of years, the patented invention be worked on a commercial scale in the country. The effect of these requirements has evolved in the last few years in many countries due to amendments to their patent laws. Among the jurisdictions that have working requirements are India, China, Canada, and Australia. Generally, information must be submitted each year regarding the working of the invention in the country. The patent holder must disclose whether the invention is being worked in that country. If the invention is being worked, information must be provided on matters such as whether the invention is actually worked within the country or imported, the quantity and value of the invention, and whether any licenses have been granted. Failure to submit the required information can result in a monetary fine and loss of patent protection. In India, the working requirements were amended in 2005 to extend the term of a patent to twenty years and increase the fine for failure to file an information statement. The requirement of filing an information statement on the working of an invention in India was in place prior to 2005, but only a nominal fine of a few hundred dollars was assessed against a patent holder who did not file the required information. Now the fine for not filing an information statement can be up to $25,000. The impact of these amendments can be better understood when viewed in conjunction with India's compulsory license procedure. A person who wishes to practice a patented invention can petition the patent office to grant a compulsory license three years after the grant of the patent. One of the grounds an applicant can rely on in applying for a compulsory license is the failure to work the patented invention in India. This compulsory license procedure may be used more often than it has been in the past due to the amendments. For instance, when the patent term was only seven years, applying for a compulsory license was likely considered unwise, given that a compulsory license can only be applied for three years after the grant of the patent and given the time involved in obtaining one. But now that the patent term is twenty years, the compulsory license procedure is more attractive. Also, it is important to note that the Indian patent office does not monitor information statement filings to ensure that each patented invention is worked in India. However, whether an information statement has been submitted is checked when an application for a compulsory license is made. Considering that the compulsory license procedure will likely be used more often, and given the substantial fine that could be imposed for failure to file, the risk of not filing may be too great to ignore. China provides another example of these changing requirements. Amendments to Chinese patent law will take effect on October 1, 2009. Like India, Chinese patent law requires a patented invention to be worked in the country to retain patent protection and provides for a compulsory license procedure. One ground for issuing a compulsory license under the new law is the failure to work the invention, without justification, within three years from the date of issuance or four years from the filing date. The applicant for a compulsory license must show that the applicant was unable to negotiate a license with the patent holder within a reasonable time.
Though obtaining a compulsory license may be difficult (China has never granted one), the procedure should still give patent holders an incentive to work their inventions in China. In conclusion, companies with foreign patent portfolios, particularly in China and India, should be aware of the changing rules. Failure to stay on top of the evolving working requirements could lead to unnecessary fines. Mandy Jenkins is an associate at Carstens & Cahoon, LLP, whose primary area of practice is intellectual property law and litigation. She has experience in patent and trademark prosecution, trade secret and licensing matters, and in all phases of litigation. This blog is maintained by Carstens & Cahoon, LLP to inform readers of recent developments in intellectual property. Solely informational in nature, this blog is not intended to create an attorney-client relationship or to be used as a substitute for legal advice or opinions. For more information, please visit www.cclaw.com. By Mandy K. Jenkins
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9312607645988464, "language": "en", "url": "https://www.conventionalloanrequirement.com/fed-lowers-interest-rates/", "token_count": 944, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.041259765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f0f399db-206c-4c2f-9a05-eb0f5ad34018>" }
So the Fed left interest rates unchanged, but what does that mean? – Generally speaking, the Federal Reserve raises (or lowers) interest rates in response to inflation. When the Fed is concerned about prices rising too fast, it will raise rates.

Loan interest rates now: who benefits as they drift lower – After the Fed raised rates seven times in 2017 and 2018, it is now signaling a pause for 2019. That has reversed the previous steady climb in interest rates, as lower rates filter down to loans and ...

Fed decision today: Federal Reserve raises interest rates – 19-12-2018 – Fed decision: the central bank lifts interest rates and lowers its forecasts to two hikes in 2019. The Federal Reserve raised interest rates and forecast two more hikes next year.

Education: why did the Federal Reserve System lower the federal funds rate? – Discussion of the Fed's monetary policy of counteracting the slowing economy by lowering interest rates in the federal funds market and at the discount window.

Why does the Federal Reserve raise interest rates in increments? – A small increase in interest rates can have a profound effect, so normally the Fed only lowers or raises rates by very small increments. Usually, it will raise or lower rates by a quarter of a percent at a time. A change of a half percent or higher is rare, but not unprecedented in a time of economic uncertainty.

Bank share declines weigh on indices as Fed signals no hikes – The Fed's latest dot plot, a chart showing each of the FOMC members' target interest rates for the near and long term, pointed to a median of zero rate hikes in 2019. This is lower than ...

Federal Reserve raises interest rates, U.S. stocks pare gains – The Fed raises rates and turns more cautious on the outlook for 2019 hikes, dialing back projections for interest rates and economic growth in 2019.

How to get the best mortgage rates (SuperMoney) – All mortgage lenders establish their rates based on the prime interest rate. This rate is determined by the federal government and represents the best rate banks charge each other for borrowing money overnight.

Slower US growth means no rate rise for 2019, says Fed (BBC News) – The US Federal Reserve does not expect to raise interest rates for the rest of the year; Fed members changed their outlook for 2019 from two increases.

Prequalify for a mortgage (U.S. Bank) – Exploring how much you may qualify to borrow is a great place to start your home-buying journey. Online prequalification is fast, free and won't affect your credit report.

Donald Trump shouldn't blame the Fed if rising rates deter home buyers – Rising interest rates can deter home buyers, but it's not the Fed's fault. Lower rates can also lead to inflation, which benefits borrowers. Here is an introduction to the Federal Reserve and interest rates, including the funds rate and the discount rate: the discount rate is the interest rate banks are charged when they borrow funds overnight directly from one of the Federal Reserve Banks.

How might increases in the fed funds rate impact other interest rates? – A rising fed funds rate means other short-term interest rates would rise as well; through a series of increases since then, the target rate has been ...

PMI and amortization calculators (HSH.com) – A private mortgage insurance (PMI) calculator reveals monthly PMI costs, and an amortization calculator can build a repayment schedule for a loan of any term.

Fed may bow to Trump's call for rate cuts if inflation softens – With his call for lower interest rates, President Donald Trump has weighed into a debate inside the Federal Reserve about what central bankers ...
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9252133369445801, "language": "en", "url": "http://archives.nereusprogram.org/climate-change-could-cause-10-billion-in-annual-revenue-loss-to-fisheries-by-2050/", "token_count": 796, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.061767578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:fcfdb7e8-7fc5-4f36-9274-2adb7151f9de>" }
Global fisheries could lose approximately $10 billion of annual revenues by 2050 if climate change continues at current rates, and countries most dependent on fisheries for food and livelihoods will feel more of the effects, finds new Nippon Foundation-Nereus Program research published today in Scientific Reports. Climate change impacts such as rising temperatures and changes in ocean salinity, acidity and oxygen levels are expected to result in decreased catches, as previous research from UBC’s Institute for the Oceans and Fisheries has found. In this study, the authors examined the financial impact of these projected losses for all fishing countries in 2050, compared to 2000. “Developing countries most dependent on fisheries for food and revenue will be hardest hit,” said Vicky Lam, Nereus Program Fellow at UBC, and the study’s lead author. “It is necessary to implement better marine resource management plans to increase stock resilience to climate change.” While many communities are considering aquaculture, also known as fish farming, as a solution to ease the financial burden of fishing losses and improve food security under climate change, when researchers examined the growing industry, they found it may exacerbate the negative impact on revenues. “Climate adaptation programs such as aquaculture development may be seen as a solution,” said William Cheung, Nereus Program Director of Science and a study co-author. “However, rather than easing the financial burden of fishing losses and improving food security, it may drive down the price of seafood, leading to further decreases in fisheries revenues.” The researchers used climate models from the Intergovernmental Panel on Climate Change to examine the economic impact of climate change on fish stocks and fisheries revenues under two emission scenarios. In a high emission scenario, the rates continue to rise unchecked, while a low emission scenario meant ocean warming is kept under two degrees Celsius. “Global fisheries revenues amount to about $100 billion every year,” said co-author, Rashid Sumaila, professor at UBC’s Institute for the Oceans and Fisheries and Liu Institute for Global Studies. “Our modeling shows that a high emissions scenario could reduce global fishing revenue by an average of 10 per cent, while a low emissions scenario could reduce revenues by 7 per cent.” The researchers found the countries that rely highly on fish are the most vulnerable, including island countries like Tokelau, Cayman Islands and Tuvalu. Meanwhile, many developed countries, such as Greenland and Iceland, could see revenue increases as fish move into cooler waters. This study was a collaborative effort between Nippon Foundation-Nereus Program and OceanCanada Partnership. Nippon Foundation-Nereus Program The Nereus Program, a collaboration between the Nippon Foundation and the University of British Columbia Institute for the Oceans and Fisheries, has engaged in innovative, interdisciplinary ocean research since its inception in 2011. The program is currently a global partnership of six leading marine science institutes with the aim of undertaking research that advances our comprehensive understandings of the global ocean systems across the natural and social sciences, from oceanography and marine ecology to fisheries economics and impacts on coastal communities. Visit nereusprogram.org for more information. 
OceanCanada is a Partnership of 18 institutions across the nation dedicated to building resilient and sustainable oceans on all Canadian coasts and to supporting coastal communities as they respond to rapid and uncertain environmental changes. Our research synthesizes social, cultural, economic and environmental knowledge about oceans and coasts nationally (and globally). Over the life of the project and beyond, we are taking stock of what we know about Canada’s three oceans, building scenarios for the possible futures that await our coastal-ocean regions, and creating a national dialogue and shared vision for Canada’s oceans. Visit http://oceancanada.org/ for more information.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9424110651016235, "language": "en", "url": "http://www.fullertreacymoney.com/general/post7722/", "token_count": 830, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.047119140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6f7f2197-a0a0-4e59-966e-c1d45fed0846>" }
“Hello David, it's me again. The article below on solar power appeared in today's City A.M. paper (London). The author draws a graphic analogy of panels installed to date with the Ford Model-T which was produced from 1908-1927. Look at the sophistication, reliability and affordability of modern cars by comparison and we get some idea of how amazing solar power is likely to become in coming decades. It will transform the world, in my humble opinion.”

Ed: Here is a section:

First, grid parity – when electricity generation is competitive with grid-electricity rates without subsidies – is edging closer. In 2012, Bloomberg reported that Germany, Denmark, Italy, Spain, Portugal, Australia, and Brazil could already expect to achieve at least a 6 per cent return on PV investments. Many of these countries still offer indirect subsidies, so the market isn't competitive quite yet. But the direction is clear. The average US PV market will likely reach proper grid parity around 2020, and states like California should reach that point sooner. Within a few years, arguments about feed-in tariffs will become irrelevant in many countries, because the solar industry won't need subsidies.

Second, large companies are flocking to solar. Thanks in part to cheap PV modules, non-energy businesses are becoming mini power generators. The retail giant Walmart already has a solar-energy capacity of almost 90 megawatts (MW) in the US. If the retailer installed panels on every US store, it could generate 1.5 to 2 gigawatts – or about twice the output of my local nuclear power station. If other big-box retailers follow – and many are already doing so – we could see collective generation capacity skyrocket, making solar increasingly viable as part of the energy mix.

Its potential goes beyond retail. Solar is well-suited to industrial and processing applications: in Saudi Arabia, the Al-Khafji solar-powered seawater desalination plant is set to produce 30,000 cubic metres of salt-free water per day. And entrepreneurs are honing new applications. The US startup WaterFX, for example, is developing solar “troughs” that remove salt from water by distillation to deal with drought.

But these innovations are only possible because solar technology is developing rapidly. Today's domestic PV modules are the Ford Model-Ts of solar: cheap, mass-produced, commercial pioneers. But they are poor at converting sunlight into electricity (efficiencies of around 10 to 15 per cent are common). These figures, however, could easily double. Scientists from the California Institute of Technology and partners are developing a new multi-junction cell with a target efficiency of over 50 per cent. Building-integrated PV – glazing that generates power – could further popularise solar power. And PV is not the only form of solar energy. Improvements in other approaches, such as concentrated solar power (CSP), are possible. CSP uses mirrors to concentrate a large amount of sunlight onto a small area, driving a turbine. Just look at Spain's 50MW Solnova Solar power station.

Many thanks for the article, as informative emails are most welcome, not least in the field of technology. Solar farms can be understandably contentious if they are anywhere near recreational areas and sites of natural beauty, although they are considerably less menacing than noisy windmills. Today, virtually every business can lower its long-term energy costs by utilising solar power on its buildings, as we read with Walmart above.
Similarly, many homes will also benefit from the addition of solar panels, ideally when the target efficiency is above 50% with the help of graphene and other rapidly developing technological advancements. Additionally, I do not see why California and other cities prone to drought cannot benefit from solar-powered seawater desalination plants as Saudi Arabia is doing. My thanks to Eoin for pointing out this Carlsbad desalination plant, near the San Diego region of California. The state will need more of these plants.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9312694668769836, "language": "en", "url": "https://blog.transportinindia.in/rfq-rfi-rft-rfp-in-tendering-auction-procurement-heavy-road-transportation-industry/", "token_count": 1128, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0732421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:827c472a-28be-492e-9757-4d11426704d7>" }
RFI – Request for Information
An open enquiry that spans the market seeking broad data and understanding.

RFQ – Request for Quotation
An opportunity for potential suppliers to competitively cost the final chosen solution(s).

RFT – Request for Tender
An opportunity for potential suppliers to submit an offer to supply goods or services against a detailed tender.

RFP – Request for Proposal
Sometimes based on a prior RFI; a business requirements-based request for specific solutions to the sourcing problem.

Request For Information (RFI)

As the name suggests, procurement uses RFIs to gather information to help decide what step to take next before embarking on negotiations. RFIs are therefore seldom the final stage; instead, they are often used in conjunction with the other three requests detailed in this article.

An RFI is a solicitation sent to a broad base of potential suppliers for the purpose of conditioning the market, gathering information, preparing for an RFP or RFQ, developing strategy, or building a database, all of which will be useful in later supplier negotiations about:
- The suppliers, including: facilities, finances, attitudes, and motivations
- The state of the supply market
- Supply market dynamics
- Trends and factors driving change
- Alternative pricing strategies
- Supplier competition
- Breadth and width of product/service offerings, by supplier
- Supplier strategic focus, business, and product plans

Procurement may also use the RFI to include a detailed list of products/services for which pricing is requested. The pricing should be used for comparative purposes in later negotiation, not as the basis of a buying decision. Through analysis of RFI responses, strategic options, lower-cost alternatives, and cost reduction opportunities may be identified.

Request For Quotation (RFQ)

RFQs are best suited to products and services that are as standardised and as commoditised as possible. Why? Procurement wants to make the suppliers' quotes comparable before negotiations begin.

An RFQ is a solicitation sent to potential suppliers containing in exacting detail a list or description of all relevant parameters of the intended purchase, such as:
- Personnel skills, training level or competencies
- Part descriptions/specifications or numbers
- Description or drawings
- Quality levels
- Delivery requirements
- Term of contract
- Terms and conditions
- Other value-added requirements or terms
- Draft contract

Price per item or per unit of service is the bottom line with RFQs, with other dimensions of the deal impacting the analysis process as determined by the buyer. Supplier decisions are typically made by the procurement department following a comparison and analysis of the RFQ responses, for negotiation benchmarking advantage. RFQs are typically used as supporting documentation for sealed bids (either single-round or multi-round) and may be a logical precursor to an electronic reverse auction.

Request For Tender (RFT)

An RFT is an open procurement invitation for suppliers to respond to a defined need, as opposed to a request being sent to selected potential suppliers. The RFT usually requests the kind of information sought in an RFI. This will usually cover not only product and service offerings, but also information about the suitability of the business. It is not unusual for a buyer to put out unclear or vague business requirements for an RFT. This lack of clarity on the part of the procurement department can make it challenging for the supplier to propose a solution.
This is not the best use of an RFT. RFTs should only be used when the buyer is clear on their requirements, and is also clear on the range of possible procurement solutions that might fit the buyer's needs, giving the buyer a negotiation advantage. An RFT is often not a very time- or cost-efficient method for procurement to source supply, due to its lack of defined business requirements and its open invitation for suppliers to respond. Without proper procurement training, however, too many buyers issue RFQs that are in reality RFTs.

Request For Proposal (RFP)

An RFP is procurement's solicitation sent to potential suppliers with whom a creative relationship or partnership is being considered. Typically, the RFP leaves all or part of the precise structure and format of the response to the discretion of the suppliers. In fact, the creativity and innovation that suppliers choose to build into their proposals may be used to distinguish one from another. Later negotiations tend to take more time and be more wide-reaching in their impact on the buyer's business. Effective RFPs typically reflect the negotiation strategy and short- and long-term business objectives, providing detailed insight upon which suppliers will be able to offer a perspective. If there are specific problems to be addressed in the RFP response, those are described along with whatever root-cause assessment is available. With good procurement training, your RFP and RFT should seek specific data, offerings and quotations, and also ask specific questions about the following to assist your later negotiations:
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9470111131668091, "language": "en", "url": "https://smartasset.com/financial-advisor/yield-vs-return", "token_count": 1020, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.01239013671875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3968faf2-1b5b-4aed-880c-9eb563cf116a>" }
People often use yield and return interchangeably, referring to what you’ll earn from a fixed investment. However, there are some important differences to note for yield vs return. Learn the basics of these two important concepts, plus some key differences to consider when looking at each of them. Yield vs Return Basics The yield of an investment is income it earns. An investment usually expresses its yield as a percentage. For example, the interest or dividends a security produces over a certain period of time can be its yield. The yield of an investment uses the investment’s face value, or what an investor originally paid for a stock. Also, yield factors in an investment’s liquidity, or its current market value. Yield isn’t as predictable as return. However sometimes investors can anticipate yield, depending on the security and its predictability. It’s easy to see how an investor might confuse yield and return. After all, both refer to the income earned on an investment. But there are several distinctions between the two. Yield refers to income earned on an investment, while its return references what an investor gained or lost on that investment. Yield expresses itself as a percentage, while the return is a dollar amount. An investment’s yield is a more forward-looking assessment. As a result, it represents what an investor stands to gain (or lose) on that investment. Yield takes into account current market value and face value but does not factor in capital gains. Meanwhile, its percentage is typically an annual percentage rate (APR). As with any investment, the higher the risk, the higher the potential yield. Alternately, an investment’s return focuses on the dollar amount of what an investment has earned in the past. Return focuses on paid dividends, or annual payments made to stock owners or investors by the company. It also looks at capital gains, which is the increase in the value of an asset. Capital gains can both be short and long-term. Do not confuse yield with rate of return. Both are percentages that anticipate an investment’s expected return over time. However, rate of return takes into account capital gains and yield does not. Yield can also be a means of expressing a bond’s future earning power. But it requires more than just calculating an investment’s earnings. That’s where a bond’s current yield and coupon yield come into play. A bond’s current yield divides a bond’s total income by its market price. Current yield (CY) is a percentage that fluctuates based on market conditions. The coupon yield of a bond is the amount of interest a bond earns. Institutions issue bonds with a predetermined coupon yield. The market doesn’t affect coupon yield. Meanwhile, a bond’s yield to maturity also determines its earnings. That’s the amount an investor stands to earn on a bond should they hold it to maturity. The Bottom Line Both yield and return refer to what an investor might earn on a fixed investment. People often confuse the terms, but there are a few important distinctions between the two. The more forward-thinking of the two concepts, yield expresses itself in a percentage form. Also, it refers to the income earned on an investment over time. Return, however, focuses on an investment’s past earning and expresses itself in a dollar amount. Be careful not to confuse yield with rate of return. Both are percentages that express what an investor stands to earn on a particular security, but rate of return takes into account capital gains and yield does not. 
- If you’re not sure how to diversify your portfolio, a financial advisor may be able to help. Finding the right financial advisor that fits your needs doesn’t have to be hard. SmartAsset’s free tool matches you with financial advisors in your area in 5 minutes. If you’re ready to meet local advisors who will help you achieve your financial goals, start now. - Do you know how much investment risk you can tolerate? How much will your investment grow over time? Will capital gains taxes take a chunk out of your earnings? How will inflation affect your overall returns? SmartAsset’s investing guide can help you with these initial questions and give you a foundation for working with a professional. - Retirement investment can be tricky. You may not know what you’ll need to retire. You may have no idea what your 401(k) will be worth once you stop working. Meanwhile, you may need to figure out how large a role Social Security will play in your retirement plans. If any of those questions or others are haunting your retirement planning, SmartAsset’s retirement guide may provide some answers. Photo credit: ©iStock.com/domoskanonos, ©iStock.com/gece33, ©iStock.com/Dmytro Varavin
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9434958100318909, "language": "en", "url": "https://www.babypips.com/news/what_is_a_central_bank_and_wha", "token_count": 619, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.054443359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c4c17c8e-8698-4bdb-ab1f-96ac1cd25e5d>" }
Many of us have wondered what a Central Bank is. The term suggests that it is some kind of bank which is central in nature. A Central Bank is in reality an apex organization in an economy, vested with the responsibility of managing the monetary system for a nation or a group of nations. The broad functions of a Central Bank include implementing monetary policy and maintaining currency stability, low inflation and full employment.

Key functions of a Central Bank

Management of the monetary policy:

Issuance of currency: As a part of its responsibility for managing the monetary system of a nation, the Central Bank is vested with the sole authority to issue currency notes. In some nations, the governments issue currency notes of smaller denominations and coins, which act as the ultimate legal tender, while the Central Bank issues the larger currency denominations.

Banker to the state: The Central Bank performs the all-important function of being the banker to the government. It conducts all financial transactions for the government and also raises money for the government via instruments like bonds or T-Bills. This last function is also closely linked to the monetary management of the economy, whereby issuance or redemption of T-Bills impacts the money supply in the economy.

Setting of various rates: A Central Bank is usually vested with the authority to set various rates, which constitute important monetary policy instruments. These include the interest rate and the cash reserve ratio (CRR), amongst other instruments. The interest rate is managed by changing the discount rate at which the Central Bank refinances commercial banks. The CRR is the ratio of all deposits that commercial banks are mandated to keep with the Central Bank. By varying the CRR, the Central Bank can automatically change the money supply in the economy. The Central Bank can also use its lever of interest rates to encourage or discourage investment and affect employment levels in the economy.

Open market operations: This is a key function of a Central Bank, by which it maintains exchange rate stability. The Central Bank steps in to buy or sell foreign exchange such that huge fluctuations in the local currency are avoided. Usually, in mature markets like the US, Europe or Japan, it is rare for a Central Bank to perform this operation as the currency is usually stable. Central Banks can also use open market operations as a monetary policy instrument. They can sell foreign exchange to reduce money supply in the economy, and vice versa.

Management of inflation: A Central Bank uses its authority to tweak interest rates to manage the inflation rate in the economy.

Management of the credit system in the economy:

Banker to the banks: A Central Bank acts as banker to commercial banks. It refinances their debt at the prevailing discount rates. Central Banks also act as a clearing house for the commercial banking system. Another key function of a Central Bank is that it acts as the lender of last resort. This becomes important when commercial banks face a sudden financial crunch or become insolvent. The Central Bank can step in to restore confidence in the system by devising various bailout packages for the commercial bank or banks.
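As a rough illustration of how the CRR lever works, here is a textbook deposit-multiplier sketch. It is a deliberate simplification: actual money creation also depends on banks' lending behaviour, cash holdings and many other factors.

```python
# Simple deposit-multiplier sketch: with a cash reserve ratio (CRR) of r,
# an initial deposit can support at most 1/r in total system-wide deposits.
initial_deposit = 1_000_000.0

for crr in (0.04, 0.10, 0.20):
    max_deposits = initial_deposit / crr   # textbook multiplier formula
    print(f"CRR {crr:.0%}: total deposits of up to {max_deposits:,.0f}")
```

Raising the CRR shrinks the multiplier, tightening money supply; lowering it does the opposite, which is why the CRR is listed above as a monetary policy instrument.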
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9508918523788452, "language": "en", "url": "https://www.pgpf.org/infographic/the-national-debt-is-now-more-than-26-trillion-what-does-that-mean", "token_count": 305, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06201171875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e1915bd2-5e69-4249-aeda-1abb76dd0853>" }
On June 9, the gross federal debt of the United States surpassed $26,000,000,000,000. Although the debt affects each of us, it may be difficult to put such a large number into perspective and fully understand its implications. The infographic below offers different ways of looking at the debt and its relationship to the economy, the budget and American families.

The $26 trillion gross federal debt includes debt held by the public as well as debt held by federal trust funds and other government accounts. In very basic terms, this can be thought of as debt that the government owes to others plus debt that it owes to itself.

America's high and rising debt matters because it threatens our economic future. The interest that we pay on the federal debt is now the fastest growing part of the budget. In fact, we spend over $1 billion per day, just on interest.

It took just one month to add another $1 trillion to the gross federal debt, in large part because of the response to the current coronavirus (COVID-19) pandemic. When the health crisis is under control, it will be important to understand where our nation stands in its fiscal outlook and work together to implement sustainable solutions that promote economic stability in the years to come. Read on to learn more.
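As a rough sanity check on the figures above (the population figure is an assumption of roughly 330 million people, not taken from the source):

```python
# Back-of-the-envelope checks on the debt figures.
gross_debt = 26e12        # $26 trillion gross federal debt
daily_interest = 1e9      # "over $1 billion per day" on interest

annual_interest = daily_interest * 365
print(f"Annual interest at $1B/day: ${annual_interest / 1e9:,.0f} billion")

us_population = 330e6     # assumption: roughly 330 million people
print(f"Debt per person: ${gross_debt / us_population:,.0f}")
```

At $1 billion a day, interest alone runs to roughly $365 billion a year, and the gross debt works out to just under $80,000 per person under that population assumption.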
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9448001384735107, "language": "en", "url": "https://www.statista.com/topics/3154/brexit-and-the-uk-economy/", "token_count": 513, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2080078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:b9431667-cca9-4459-b893-933f391fd98f>" }
One expected major long-term effect of the United Kingdom's withdrawal from the European Union is its broadly predicted impact on the British economy. Indeed, the British public are fully aware of the likely impact of the "Brexit" upon personal finances, households and the national economy, factors that are also mirrored by government forecasts. Yet despite warnings from many financial and economic institutions, a majority of the electorate voted for the UK to exit the EU in the UK referendum on EU membership. A major argument put forward by the Leave Campaign prior to the referendum was that, in the event that the UK elects to exit EU membership, it can invest its membership contributions in the domestic economy. Yet despite this suggestion, forecasts predict that the UK is destined to experience some economic decline in the foreseeable future.

According to HM Treasury, the UK will continue to make contributions to the EU budget while it remains an EU Member State. The UK's budgetary contribution was estimated to be 10 billion British pounds in 2015, around 1 percent of total public expenditure and equivalent to 0.5 percent of GDP. GDP growth is expected to slow somewhat in the years following the vote, before stabilizing at annual growth of at least 2 percent for the remainder of the period. A similar trend can be found in the forecasts for public sector net debt. In the fiscal years following the referendum, public sector debt is expected to increase until the 2018/2019 fiscal year, at which point such figures are expected to stabilize and begin to decrease. In addition, forecasts for the UK consumer price index (CPI) indicate that prices are expected to increase annually by at least 2 percent from 2017 until 2021.

The loss of EU membership is also predicted to have a substantial negative impact on the nation's output gap. The UK's output gap isn't expected to recover until 2021, meaning that the UK is predicted to fall short of its potential annual output for at least four years. Previous figures for net foreign direct investment in the UK showed a downward trend, and in light of Brexit it is difficult to see how this will change in the coming years.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9559885859489441, "language": "en", "url": "http://www.caroljcarter.com/in-school-banks-dispense-financial-sense/", "token_count": 542, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0220947265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:cdcf52e6-4abc-4d28-9ad8-aacaddbeff3b>" }
Our nation's recession has put the spotlight on personal finances, and many schools across the country have opened their own banks to teach students lessons in financial planning. When students don't learn these skills early, the consequences can follow them through college and the rest of their lives. According to one survey, "84% of undergraduates had at least one credit card and the average was 4.6 credit cards per student. The average balance was $3,173." Students' credit card debt continues to rise as more students rely on credit cards than ever before, more than half of today's college freshmen owe over $1,500 in credit card debt, and debt has even been linked to suicide among students. Another study by the Project on Student Debt reports that in 1993, 1.3 percent of graduating seniors owed at least $40,000 (in 2004 dollars), while in 2004, 7.7 percent owed $40,000 or more.

This is why LifeBound's new book, MAKING THE MOST OF HIGH SCHOOL, includes financial literacy exercises in every chapter and one chapter devoted to this topic. We also help students create an 8-year plan starting the freshman year which includes budgeting. To receive a review copy, call our toll-free number or email [email protected]

By Katharine Lackey, USA TODAY

When students at Carter High School in Strawberry Plains, Tenn., forget their lunch money, they don't have to worry about going hungry. Instead, they wander over to one of the five tellers who work at the student-run bank, where they can withdraw money from their savings accounts or fill out short applications for a $5 loan, all without leaving the building, says Lynn Raymond, a banking and finance teacher at the school.

"We're easing them into learning about borrowing money and the responsibilities that go along with that," Raymond says of the experience students receive at the bank, which opened Feb. 16 in partnership with First Century Bank. "It's just so important because so many people get in trouble financially," she says.

Students across the USA are increasingly getting hands-on experience with the financial sector through banks operating in high schools, and sometimes even in elementary schools. The first in-school bank opened in 2000 in Milwaukee and today there are several dozen, says Luke Reynolds, chief of outreach and program development at the Federal Deposit Insurance Corporation.

To view the entire article visit http://www.usatoday.com/news/education/2010-03-31-schoolbanks_N.htm
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9529048800468445, "language": "en", "url": "http://www.iswc07.org/the-interest-rate-on-subsidized-loans-may-increase/", "token_count": 371, "fin_int_score": 5, "fin_score_model": "en_fin_v0.1", "risk_score": 0.10498046875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c7e8e8dd-ef84-4c51-80e4-55e848167347>" }
As a result of a Federation proposal, the maximum interest rate on government-subsidized home loans may be raised to 130 percent of government bond yields, plus 3 percentage points, according to a document seen by the World Economy. The interest rate on subsidized loans is currently under 10%, even in the worst-case scenario (market home loans carry, on average, a 2-3 percentage point higher interest rate than subsidized ones).

If the new regulation goes into effect, banks will be able to charge a transaction interest rate of up to 12 percent on subsidized home loans with a one-year interest period. (Due to interest rate subsidies, under which the Treasury assumes 50-70% of the interest in the first year, depending on the purpose of borrowing and the number of children, and then reduces the subsidy to 35-50% by the fifth year, borrowers could still be charged an interest rate of 6-8 percent even with the raised ceiling; later the installment will increase due to the lower level of support.)

The effect of this on the market may be that it no longer makes sense to take out a subsidized loan instead of a market home loan and meet its many conditions (age, marital status, a clean public-debt record, presentation of invoices on a construction loan, etc.).

From the banks' point of view, either interest rates go up because the banks all decide to raise them, or market equilibrium is maintained if the current interest rate differential of 2-3 percent between subsidized and market loans remains unchanged. So if we are thinking about borrowing but would rather wait to see how rates turn out, the above is what we can count on.
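As a sketch of the installment arithmetic described above, using the standard annuity formula: the interest rates here are illustrative assumptions, and only the loan amount, term and the rough size of the installment change come from the article.

```python
# Monthly installment on an amortizing loan, via the standard annuity formula.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

principal, years = 10_000_000, 20
low = monthly_payment(principal, 0.08, years)    # assumed 8% rate
high = monthly_payment(principal, 0.10, years)   # assumed 10% rate

print(f"At 8%:  {low:,.0f} per month")
print(f"At 10%: {high:,.0f} per month")
print(f"Difference: {high - low:,.0f} per month")  # roughly 13,000, matching the article
```

At rates in this range, a two-point increase on a 10,000,000 loan over 20 years raises the monthly installment by roughly 13,000, consistent with the figure quoted above.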
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9843871593475342, "language": "en", "url": "http://www.professionalcredithelp.com/blog/millennials-shockingly-unaware-credit-works/", "token_count": 396, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1357421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0865bf83-53dd-4072-9add-001922b4060b>" }
Millennials Shockingly Unaware of How Credit Works

"Millennials" is a term which refers to Americans born between 1980 and 1996. This group is known for its adoption of technology (it grew up with the personal computer, after all) and for being more aware of the world around it. Millennials are regarded as more likely to be interested in such concepts as fair-trade products and sustainable living than their parents. But, according to a new study, Millennials have relatively little knowledge relating to their own credit and credit scores.

A recently published study by the Consumer Federation of America and VantageScore Solutions indicates that Americans aged 18 to 34 do not make it a habit to stress over their credit reports and credit scores, and that only about 50% of the group has ever ordered a copy of their credit report. By contrast, the survey indicated that 75% of older Americans had checked their credit reports at least once.

Additionally, Millennials were found to be more likely to believe that credit repair services can help fix one's credit and that age plays a role in credit scoring. Perhaps not surprisingly, Millennials also indicated more willingness to believe that the government is involved in maintaining consumer credit information, which is untrue. Shockingly, only 18% of Millennials were aware that companies such as cell phone carriers and mortgage lenders can use a person's credit in the course of doing business.

Another interesting finding was that Millennials who had gone online and requested a free copy of their credit reports, which is available through annualcreditreport.com, tended to know far more about credit than their peers who had not. Even Millennials who were familiar with credit scoring did not have the same level of overall credit knowledge as those who had requested their credit reports online.

For more, see the original report in Time.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9418177008628845, "language": "en", "url": "https://greencleanguide.com/how-can-clean-energy-contribute-in-the-efficient-use-of-csr-budget/", "token_count": 1483, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1357421875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c551adc8-d003-4b57-ae4b-42740273c904>" }
The recently passed Companies Bill, 2012 mandates large organizations to spend part of their net profit on activities related to Corporate Social Responsibility. Renewable energy presents an opportunity for organizations to make investments in sustainability that will pay back quickly. Purchase of clean energy or investment in clean energy projects addresses sustainability goals in addition to meeting corporate obligations. Investment in Renewable Energy (RE) is therefore an investment in long-term sustainability.

In August 2013, the President of India gave assent to the new Companies Bill passed by the Lok Sabha, thus replacing the old Companies Bill of 1956. The new Bill mandates large companies to spend at least two percent of their average net profits made during the three immediately preceding financial years on Corporate Social Responsibility (CSR). Companies need to earmark this amount in their budgets for the financial year. The Bill also provides a list of activities on which companies may spend their CSR budget. Environmental Sustainability is one of the options for investing the CSR budget mandated by the new Companies Bill. This category aims to address the adverse social impact of wasteful energy use and carbon emissions.

The cost of many clean energy technologies has now decreased enough to make investment in them viable for consumers. Clean energy sources can provide cost-effective power over the long term at predictable rates. Clean energy is also the most effective way to minimize Scope 2 GHG emissions (GHG/carbon dioxide equivalent emissions from fossil fuel based electricity consumption). RE is an ideal venue for investors who seek stable, long-term returns. For consumers, it provides a hedge against continuously increasing power prices. For companies with identified CSR budgets, RE provides a combination of cost saving and demonstrable positive impact on the environment.

Renewable Energy – Economic Returns on Environmental Investments

India is dependent on thermal power for most of its electricity requirements. The availability and cost of thermal power depend on the supply of conventional fuels (coal, gas). Apart from old long-term contracts, the market for these fuels is increasingly being dictated by short-term pressures of supply and demand. Thermal power producers now face a highly variable input scenario, in contrast to consumers' demand for low cost and long-term visibility: a gap that is hard to bridge today.

Cost Savings – Immediate Reduction in Power Bills

Wind and solar energy have no raw material cost, making them independent of the availability of conventional fuels. Companies sourcing wind/solar power stand to save significant costs over the typical 20-25 year lifespan of such power plants.

Visibility – Insulate Utility Budgets from the Market

Buyers and investors in RE can determine long-term costs of power at the start of the sourcing period itself. They can lock in at identified prices, thereby hedging the rapidly rising costs of power supplied by DISCOMs.

Financial Benefits – Shorter Project Payback

Investors can avail benefits like accelerated depreciation and a ten-year tax holiday that significantly reduce project payback time. This is often an important criterion for utility managers to justify investments in such projects.

How "Green" is Renewable Energy?

RE can help environment and sustainability managers address their goals efficiently within limited budgets.
A large portion of carbon emissions from most companies comes from the consumption of grid electricity. Sourcing RE helps managers minimize the organization's Scope 2 emissions related to consumption of electricity from the grid, and in some cases even Scope 1 emissions (where on-site solar power helps reduce the use of diesel generators).

Address Business Sustainability Goals

Sourcing RE addresses important environmental targets set by organizations as part of their Business Sustainability Programs. Sourcing RE also diversifies input cost risk for energy-intensive industries with substantial budgets for power purchase. It also pre-empts the risk of imposition of a carbon tax through direct or indirect routes like the Renewable Purchase Obligation (RPO).

Strengthen Brand Image

The Carbon Disclosure Project (CDP) and Global Reporting Initiative (GRI) provide platforms for organizations to disclose their GHG and sustainability performance. CDP ranks leaders annually on the Carbon Disclosure Leadership Index (CDLI). GRI rates disclosure based on the comprehensiveness of action taken by companies. Initiatives like Business Excellence and platforms for industry-wide recognition like the CII Sustainability Awards reward strong performers for their initiatives in sustainability and environment.

Renewable energy can be sourced from on-site as well as off-site locations.

Sourcing renewable energy through Open Access: The Electricity Act, 2003 allows consumers to source electricity from RE power plants located at a site distant from the consumption point. Many companies are sourcing electricity from such off-site plants that are either owned by them (captive) or by a third party (know more about the Open Access mechanism).

Captive Renewable Energy: Consumers can source electricity through wind or solar power assets owned by them. Apart from tax benefits, they can avail exemption from some of the open access charges such as the Cross Subsidy Surcharge and Additional Surcharge. Typical RE projects pay for themselves within five to six years.

Buying Renewable Energy: Consumers wanting to avoid upfront capital expenditure, or those who do not want to add to their balance sheet, can buy power from third-party developers. Sellers may offer a discount on the tariff offered by the DISCOM. Contracts may be signed for long durations of up to 20 years. This option lets consumers avoid fixed costs and make their cost of compliance variable.

On-Site Solar Power: Solar power plants are modular in nature and can be accommodated in limited spaces. Approximately 12 sq. m is required per kW. A roof area of 1,200 sq. m can accommodate about 100 kW, which can generate approximately 1.5 lakh units of electricity per year. Consumers having unused flat roofs or south-facing roofs with ample shadow-free space can install solar plants. In terms of cost, energy from solar plants is at grid parity in a few States. On-site solar power plants avoid the need for extensive project implementation activities like land acquisition, evacuation infrastructure and security. Companies can invest in solar projects and avail additional benefits such as accelerated depreciation. Those not interested in investing may explore installation on a Build-Own-Operate (BOO) / Build-Own-Operate-Transfer (BOOT) basis. Developers can offer consumers a fixed tariff or a tariff with pre-determined escalation for the period of the PPA. (Know more about rooftop solar.)

Progressive organizations treat corporate social responsibility as an investment in their eco-system.
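As a quick sketch of the on-site solar sizing rule of thumb above (about 12 sq. m per kW, and roughly 1.5 lakh units per year from 100 kW):

```python
# Rough rooftop solar sizing from the rules of thumb in the article.
area_per_kw = 12.0                   # square metres needed per kW
units_per_kw_year = 150_000 / 100    # ~1,500 kWh per kW per year, implied above

roof_area = 1200.0                   # available shadow-free roof area in sq. m
capacity_kw = roof_area / area_per_kw
annual_units = capacity_kw * units_per_kw_year

print(f"Capacity: {capacity_kw:.0f} kW")
print(f"Generation: {annual_units:,.0f} kWh per year")
```

For the 1,200 sq. m roof in the article, this reproduces the quoted figures: about 100 kW of capacity and roughly 1.5 lakh units of electricity per year.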
Renewable Energy is an area where environment-friendly solutions meet key business needs. Money is scarce in this challenging business climate, where business managers try to conserve capital and minimize variable costs. For managers looking at avenues for cost reduction, RE is already cost-effective compared to conventional power prices. For those with a long-term view, investment in RE promises to address electricity requirements without compromising on internal hurdle rates. Renewable Energy is one of the few areas today where local business needs coincide with larger, national interests in limiting climate change and mitigating energy risk.

Disclaimer: This article is sourced from the monthly newsletter of Agneya Carbon Ventures Private Limited.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9383472204208374, "language": "en", "url": "https://repp.energy/resource-centre/glossary-of-terms/", "token_count": 1189, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.007537841796875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:6348b665-7163-4312-be9d-97285f346e9e>" }
Glossary of terms

Avoided greenhouse gas emissions
The amount of emissions, in tonnes of carbon dioxide equivalent (tCO2e), which would have been created to generate the same amount of electricity produced by a REPP-financed renewable energy project if fossil fuels had been used. It is calculated by multiplying the number of MWh generated (or forecast) by the project with the country's grid emissions factor, which is itself calculated as total tCO2e divided by total MWh generated.

Climate finance
Local, national or transnational financing that is drawn from public, private and alternative sources of financing and which seeks to support mitigation and adaptation actions that will address climate change.

Commitments
The total value of funding committed by REPP to contracted projects.

Customer
A single home or workplace that is served with electricity from an off-grid renewable energy project. For standalone systems, such as solar home systems, one installation equals one customer, whereas a mini-grid is connected to several customers. See also: New connections

Distributed energy
Energy that is generated away from the main grid and close to where it is used. Includes small-scale renewables such as solar, biomass, geothermal and wind.

Energy access
Defined by the International Energy Agency as "a household having reliable and affordable access to both clean cooking facilities and to electricity, which is enough to supply a basic bundle of energy services initially, and then an increasing level of electricity over time to reach the regional average".

Environmental and Social Impact Assessment (ESIA)
A process of predicting and assessing a project's potential environmental and social risks and impacts.

Environmental and Social Management System (ESMS)
A set of policies, procedures, tools and internal capacity to identify and manage a financial institution's exposure to the environmental and social risks of its clients/investees.

Finance mobilised
Financial resources committed by third parties to a project being supported by REPP.

Financial close
For grid-connected projects, refers to the stage when all the conditions precedent of the financing agreements enabling the construction of the project have been fulfilled prior to the initial availability of funds. For off-grid projects, it is the stage when all of the conditions precedent related to the construction or operation phase of the project that is receiving REPP support are fulfilled.

First-time energy access
Any person or business being connected to an electricity supply for the first time as a direct result of an off-grid renewable energy project. See also: New connections, Customer

International Climate Finance (ICF)
The UK government's commitment to building resilience and catalysing low carbon transition in developing countries. In September 2019, the UK's ICF was doubled from £5.8bn in the previous five years to at least £11.6bn from 2021-2025.

Independent power producer (IPP)
A private entity that generates electricity for sale to utilities and end users.

Installed capacity
The rated power output, in MW, of a power plant or other electricity generator when operational. Also known as nameplate capacity and rated capacity.

Large mini-grid
A mini-grid with a capacity of over 1MW.

New connections
The number of people connected to an off-grid renewable energy project. It is calculated as the number of customers served by the project multiplied by the average number of people per household, which is deemed to be five persons. See also: Customer

New project
A project is considered new when an investee enters into a support agreement with REPP.
An existing project can be considered new if the scope of the project is extended in terms of installed capacity (≥ 20%), geographical scope (new country) or, in the case of off-grid projects, a change in the principal nature of the business model in comparison to that in the original support agreement. See also: Project

Off-grid
Not connected to a centralised high voltage electricity grid.

Photovoltaic (PV)
A conversion of light into electricity using semiconducting materials, typically contained in solar panels.

Power purchase agreement (PPA)
A contract in which a purchaser agrees to purchase and a supplier agrees to supply electricity generated in the future, normally at a specified price for a defined period.

Project
A project or portfolio of projects owned by an investee (or affiliate) that has executed a REPP Support Agreement. On-grid projects are individual projects, whereas off-grid projects, such as solar home systems or mini-grids, typically combine a portfolio of installations together as one project. To be eligible for REPP support, a project needs to have an installed capacity range of 1-25MW, except for wind projects, which may be up to 50MW. See also: New project

Private finance
Financing from non-public sources, including private banks, private companies, private or company pension funds, insurance companies, private savings, family money, entrepreneurs' own capital and sovereign wealth funds. It includes all types of funding such as equity, debt and guarantees.

Public finance
Financing from official (i.e. government) sources.

REPP partner
Any entity approved by the Board as such. A REPP partner can be a finance provider, risk mitigation provider or technical assistance provider.

Risk mitigation instruments
Instruments, typically in the form of guarantees or insurance, that transfer specific risks from one party to another.

Run-of-river hydro
A system of hydroelectric power generation through which running water is diverted from a river and guided along a channel, or "penstock", to a generating house, before being returned to the river downstream.

Sustainable Development Goals (SDGs)
A collection of 17 global goals adopted by all UN Member States in 2015 with a vision of ending poverty, protecting the planet and ensuring that all people enjoy peace and prosperity. The target year for achieving all SDGs is 2030.

Technical assistance
Various types of non-financial assistance, including instruction, skills training, transmission of working knowledge, and other consulting services.
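As a small worked illustration of two of the definitions above, "Avoided greenhouse gas emissions" and "New connections" (all figures are hypothetical):

```python
# Avoided emissions = MWh generated x grid emissions factor (tCO2e per MWh).
mwh_generated = 12_000.0    # project output in MWh
grid_factor = 0.6           # hypothetical grid factor for the host country
avoided_tco2e = mwh_generated * grid_factor
print(f"Avoided emissions: {avoided_tco2e:,.0f} tCO2e")

# New connections = customers served x deemed household size of 5 persons.
customers = 2_500
people_per_household = 5
new_connections = customers * people_per_household
print(f"New connections: {new_connections:,} people")
```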
{ "dump": "CC-MAIN-2020-29", "language_score": 0.8702349066734314, "language": "en", "url": "https://searchoracle.techtarget.com/definition/autonomous-transaction", "token_count": 170, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.015869140625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:35d07a41-ad56-499e-9fae-a74466b8a310>" }
In Oracle's database products, an autonomous transaction is an independent transaction that is initiated by another transaction. It must contain at least one Structured Query Language (SQL) statement. Autonomous transactions allow a single transaction to be subdivided into multiple commit/rollback transactions, each of which will be tracked for auditing purposes.

When an autonomous transaction is called, the original transaction (calling transaction) is temporarily suspended. The autonomous transaction must commit or roll back before it returns control to the calling transaction. Once changes have been made by an autonomous transaction, those changes are visible to other transactions in the database.

Autonomous transactions can be nested. That is, an autonomous transaction can operate as a calling transaction, initiating other autonomous transactions within itself. In theory, there is no limit to the possible number of nesting levels.
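As a minimal sketch of how this looks in practice, here is a hypothetical PL/SQL audit-logging procedure. The procedure and table names are invented for illustration; the PRAGMA AUTONOMOUS_TRANSACTION directive is Oracle's actual mechanism for declaring an autonomous transaction.

```sql
-- Hypothetical audit logger: the pragma makes this procedure run in its
-- own transaction, so the INSERT commits even if the caller rolls back.
CREATE OR REPLACE PROCEDURE log_audit_event (p_message IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO audit_log (logged_at, message)
  VALUES (SYSTIMESTAMP, p_message);
  COMMIT;  -- required: an autonomous transaction must commit or roll back
           -- before control returns to the calling transaction
END log_audit_event;
/
```

Because the procedure commits independently, the audit row persists even if the calling transaction later rolls back, which is the typical auditing use case mentioned above.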
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9365658760070801, "language": "en", "url": "https://www.dpn.com.au/learn/market-updates/how-to-avoid-emotional-spending", "token_count": 611, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.115234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:bc510022-add2-40c2-8e43-cfed775e04a3>" }
With over 18 years' experience, Elissa is DPN's Enterprise Strategy Leader and a passionate advocate for helping people to build financial independence.

Emotional spending refers to the act of spending money to purchase goods or services in order to combat negative feelings. While this act is a common occurrence amongst many people, it can have detrimental effects on the financial wellbeing of those who practice it. In this article, you will be introduced to four ways that will help you to avoid emotional spending.

Identify your triggers

If you find yourself falling victim to emotional spending, then it is likely that you can identify certain situations, places, or people who elicit the negative emotions that lead you to spend money to make yourself feel better again. In order to get this under control, it is important to take some time to introspect and understand why these stimuli have such control over you. Additionally, once armed with this information, you can try to avoid these triggers or consciously stop yourself from spending when triggered.

Avoid saving credit card information on online vendors

A lot of emotional spending happens over the Internet. This is made even easier if you already have your payment information saved on the websites you frequent. Make it a point to delete this information to help reduce emotional spending.

Remember the 24-hour rule

Another major tool is the 24-hour buffer period. If you feel like a certain purchase will help you feel better, then ensure you hold off for at least 24 hours. If, after that time period has elapsed, you still want the item, then go ahead and purchase it. That way you can be sure you are not acting on a whim, and your finances will not bear the brunt of unnecessary expenses.

Practice mindfulness

Mindfulness refers to the practice of training the mind to stay focused on the present in order to best process and deal with the events in one's life. The methods people use to practice mindfulness include meditation, practicing gratitude, and applying positive thinking when confronted with undesirable situations. These practices have been connected to an increased quality of life, higher self-esteem and better interpersonal relationships. Practicing mindfulness is likely to help you stay in control of your emotions and may help you channel them in positive ways that are better for you in the long run.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9554083347320557, "language": "en", "url": "https://www.investinluxembourg.tw/news/renewable-energies-historic-european-agreement-for-luxembourg/", "token_count": 348, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.035400390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ecdc0732-e196-46a8-a643-175d2a71b092>" }
Luxembourg has just signed an historic agreement with Lithuania: for the first time ever, EU Member States have initialled a bilateral agreement on statistical transfers of energy produced from renewable sources.

Under the terms of the national renewable energy action plan, approved by the government in 2010, Luxembourg has set itself a target of 11% of energy production from renewable and sustainable resources. As it is structurally not equipped to achieve this objective on its own, the country has counted on cooperation mechanisms to achieve it, which offer the possibility for one Member State to transfer amounts of renewable energy to another Member State. Such mechanisms are provided for in a 2009 European directive developed following the Kyoto Protocol.

Since 2011, Luxembourg has been in negotiations with a number of Member States that are likely to exceed their own targets for 2020, and in February 2011 a memorandum of understanding was signed by Prime Minister Jean-Claude Juncker and his Lithuanian counterpart Andrius Kubilius.

The agreement signed by Žygimantas Vaičiūnas, Minister of Energy of the Republic of Lithuania, and Étienne Schneider, Deputy Prime Minister and Minister of the Economy of Luxembourg, is for the period 2018-2020, when Lithuania expects to surpass its target of 23% renewable energy production. Lithuania's contribution to Luxembourg will be primarily in the form of wind, solar and geothermal energy, and then in biomass obtained through sustainable forest management.

"We are the first to show that real cooperation in the field of renewable energies is possible and benefits both the partners to the agreement and the objective pursued at European level," asserted Deputy Prime Minister Étienne Schneider.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9579390287399292, "language": "en", "url": "https://www.libertycapitalgroup.com/small-business-advice/credit-score-misrepresentation-by-credit-monitoring-companies/", "token_count": 4821, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01153564453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a4d7aea8-f92c-453d-9554-9b92ba492ff5>" }
What Is A Credit Score?

For adults who are just entering into bank affairs, let's break this down. If you own a credit card, have borrowed a loan or hold a credit account, you most likely have a credit report. A credit report is a log of how you manage your money. This data is then processed and used to calculate your credit score, which lenders use to judge how risky a client you are. Although this may seem like a simple quantitative process, there is a lot more that goes into it.

Why Is A Credit Score So Important?

One might wonder why one small number is so important, but your credit determines a lot more than you think. It affects multiple aspects of your life.

A good credit score is vital when you move and attempt to settle down; take buying a house, for example. This is one of the biggest steps a person makes in their life, yet it can be a difficult one if you do not have a solid credit score. After recessions and failing markets, banks have been hesitant to lend money, and despite the housing market reviving, they have raised their qualifications and requirements for borrowers. Faced with the banks' high demands, you might resort to renting a house for the time being. But that too requires a credit check: landlords may demand a bigger deposit, or refuse to rent you the apartment or house at all, if you have a bad credit score.

Similarly, purchasing a car is an important and very common investment. The money required to buy a car is much less than for a house, so although it might be easier to purchase one with a bad credit score, it also means you qualify only for higher interest rates and a larger down payment. Because of a bad credit score, you will pay thousands more for the same vehicle. After the purchase, you will also need an auto insurance policy, and most, if not all, insurance companies check your credit score to calculate your premiums, so a bad credit score costs you here as well.

If you are contemplating starting a business and need to borrow money, your credit score will heavily influence whether you qualify for small business financing. The purpose of your borrowing is irrelevant: whether you are expanding an established business or starting from scratch, your credit score is a decisive factor in your eligibility for the loan. Likewise, if you have been searching for a job, it is common for employers to run background credit checks on potential employees, particularly in the government and financial sectors. A bad score or history could keep you from your ideal job.

And if, by chance or good luck, you do make a purchase despite a bad credit score, you will have qualified only at higher interest rates, paying much more for an item than you need to. This is how banks reduce the risk of lending you money.

What Affects My Credit Scores?

Multiple factors affect your credit score, each varying in how much it matters. Most commonly, the credit score is determined by five factors.

Payment History

Payment history accounts for about 35% of your credit score, outweighing the other four factors. The main problem is late payments, which have a big effect on your credit score. Recent payments carry more weight than older ones, so more recent late payments mean a lower credit score.
A single late payment can drop your score by as much as 100 points.

Amounts Owed

Current and outstanding debt on your accounts is also a vital factor, accounting for around 30% of your credit score. It is a simple record of how much of your credit limit you have used. For instance, if your balance is $1,000 and your credit limit is $5,000, your balance-to-limit ratio is 1:5, or 20%. This is measured across all revolving accounts as well as for each individual one. The higher the balance-to-limit ratio, the more negatively it impacts your credit score, so paying down your credit card balances will allow your score to rise.

Types of Credit

This covers around 15% of your credit score. Known as credit diversity, it is a record of the variety of accounts under your credit. If you have a mortgage, student loans and multiple credit cards, that implies you are a better credit risk than someone who has, for example, only one credit card and a student loan. The more diverse your credit is, the better the score.

Credit History / Credit Age

This accounts for roughly 10% of your credit score. Credit history is simply a measure of how long you have had credit, calculated as the average age of all of your credit accounts. The longer you have had credit, the higher your score.

New Credit Checks / Credit Inquiries

This accounts for roughly 10% of your score. It tracks the new credit accounts that have been opened and any requests or inquiries for more credit. Each hard inquiry lowers your score slightly and stays on your credit report for the next two years, although it only affects your score for a year. A way to limit this is to be careful and only apply for credit when it is necessary. Checking your own score through a service such as Nav does not affect it. A rough sketch of how these weighted factors combine is shown below.
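To make the arithmetic concrete, here is a small illustrative sketch. The real scoring formulas are proprietary, so the `toy_score` function below is purely a teaching device: it only shows how a balance-to-limit ratio is computed and how the published 35/30/15/10/10 weights could combine per-factor ratings into one number on the 300-850 scale. The factor ratings themselves are made-up inputs, not anything a bureau publishes.

```python
# Illustrative only: real bureau scoring models are proprietary.

def utilization(balance, limit):
    """Balance-to-limit ratio, e.g. 1000 / 5000 -> 0.20 (20%)."""
    return balance / limit

# Approximate published factor weights (FICO-style 35/30/15/10/10).
WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed":    0.30,
    "credit_mix":      0.15,
    "credit_age":      0.10,
    "new_credit":      0.10,
}

def toy_score(ratings):
    """Combine per-factor ratings (0.0 worst .. 1.0 best) into a
    300-850 number using the weights above. A toy model, not FICO."""
    weighted = sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)
    return round(300 + weighted * (850 - 300))

profile = {
    "payment_history": 0.95,                         # almost no late payments
    "amounts_owed":    1 - utilization(1000, 5000),  # 20% utilization
    "credit_mix":      0.60,
    "credit_age":      0.50,
    "new_credit":      0.80,
}
print(toy_score(profile))  # 736 with these invented inputs
```

Run with different made-up ratings and you can see why a single weak factor, such as high utilization, can pull the whole number down even when payment history is clean.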
What Does Not Affect My Credit Score?

In contrast to the factors above, it is important to know what does not affect your credit score. Remember that your credit score represents how well you handle your finances as an individual, not your personal associations. Your marital status, race and gender have no connection to your credit score; likewise, your salary and your employer are irrelevant to it.

Will I Have the Same Score in All Three Bureaus?

There are three main bureaus that collect and analyze credit reports and scores: TransUnion, Experian and Equifax. These companies compile your data, which is then used by mortgage companies, landlords, credit card companies and others. It is common, and often confusing, to see a high score in one place and still have a loan request denied. That happens because the bank or financial institution is not using the same report you are looking at: institutions do not all use the same credit agency, and the information in each bureau's report frequently differs.

How Are My Scores Calculated?

Scores vary between bureaus and lenders because of how they are calculated. The three major bureaus are credit reporting agencies and credit data repositories, meaning they collect credit information on individuals as well as businesses from creditors across the country. The information these agencies receive is then used to create the credit reports and credit scores in your credit file.

But if all the bureaus gather the same data, why do the scores differ between the three of them? Although the bureaus may receive the same information, such as your cell phone bill, utilities, mortgage payment and car payments, each runs a different proprietary system to compute your score. For example, your Experian score might be lower than your TransUnion score because TransUnion places more weight on your regular mortgage payments than on your late car payment.

Why Are My Scores Different?

Different credit scoring systems are used by different financial and risk-calculating institutions. The most widely known is the FICO score, produced by the Fair Isaac Corporation; it is the most commonly adopted among financial institutions and is considered the gold standard in credit scores. In addition to the FICO score, each credit bureau has its own proprietary score, and the main bureaus jointly created the VantageScore to compete with FICO. Each score also has its own range:

- VantageScore 3.0 and 4.0: 300-850 (781-850 is preferred)
- FICO Score 8 and 9: 300-850 (800-850 is preferred)
- Industry-specific FICO scores: 250-900 (800-900 is preferred)

Scores differ based on the importance each model assigns to different credit events. FICO might give your car payment more weight relative to your overall debt than VantageScore does; hence the two scores will differ. In addition, larger financial institutions have created their own proprietary scores, which are unavailable to the general public. The purpose of every score is the same: to estimate the risk of an individual or business based on their credit report.

Beyond the models themselves, there are two other causes of differing scores.

Real-Time Calculations

Credit scores are calculated in real time: when an event is reported, the bureaus' algorithms produce an updated score. Because this happens continuously, your score can shift frequently depending on what has been reported. If you pay your MasterCard bill on a certain day, your score can differ noticeably between the day before and the day after, with the size of the change depending on the portion of debt paid off.

Varying Information Between Bureaus

Not all credit companies provide data to all three bureaus, so the bureaus do not hold the same information. Since scoring models depend entirely on the information in your credit profile, differences between bureaus produce different scores. For instance, if your MasterCard issuer reports to TransUnion but not to Equifax, your reported balance and credit limit will be higher at TransUnion than at Equifax, and each bureau will produce a different score. The short sketch below shows how two models can score the same profile differently.
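Here is a minimal sketch of the "same data, different weights" effect. Both weighting schemes below are invented for illustration; neither is the real FICO or VantageScore formula, which are proprietary.

```python
# Two made-up weighting schemes applied to one identical profile, to
# show why different models report different numbers for the same
# person. Neither weight set is the actual FICO or VantageScore model.

PROFILE = {
    "payment_history": 0.90,
    "amounts_owed":    0.70,
    "credit_age":      0.40,
    "credit_mix":      0.60,
    "new_credit":      0.80,
}

MODEL_A = {"payment_history": 0.35, "amounts_owed": 0.30,
           "credit_age": 0.15, "credit_mix": 0.10, "new_credit": 0.10}
MODEL_B = {"payment_history": 0.40, "amounts_owed": 0.20,
           "credit_age": 0.21, "credit_mix": 0.11, "new_credit": 0.08}

def score(profile, weights, lo=300, hi=850):
    """Map weighted 0-1 factor ratings onto a lo-hi score range."""
    w = sum(weights[f] * profile[f] for f in weights)
    return round(lo + w * (hi - lo))

print(score(PROFILE, MODEL_A))  # 699
print(score(PROFILE, MODEL_B))  # 693
```

The two printed numbers differ even though the input profile is identical, which is exactly what happens when FICO and VantageScore weigh the same credit file differently.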
Which Score to Use?

The VantageScore came into being in 2006 as a competitor to FICO, which had been the dominant scoring company since 1989. FICO scores remain the most commonly used credit scores for financial decisions on loans, interest rates and other credit accounts: FICO claims that more than 90% of lending decisions by financial institutions, per data audited by third parties, are based on FICO scores. For its part, VantageScore reports that over 2,800 organizations (some 2,500 of them financial institutions) used 10.5 billion of its scores between 2017 and 2018, mostly credit card companies managing applications and existing accounts.

VantageScore and FICO use similar, if not identical, data to arrive at your score. Both include your outstanding debt, payment history and other financial information to calculate your risk, and both report a score ranging from 300 to 850. Although they draw on the same data, however, their algorithms differ. Both FICO and VantageScore also exist in numerous versions, which is another source of the differences you see in your scores. Most importantly, FICO and VantageScore assign different weights to different factors, and this difference in influence produces different scores.

FICO:
- Most weight: payment history on loans and credit cards
- High weight: total amount of debt owed
- Moderate weight: length of credit history
- Least weight: new credit and credit mix

VantageScore:
- Most weight: payment history
- High weight: age of credit, type of credit, credit-to-limit ratio
- Moderate weight: total balances and debt
- Least weight: recent credit behavior and inquiries, available credit

An advantage of VantageScore is that the scores can be accessed for free, whereas FICO charges between $20 and $40 a month depending on the level of monitoring you want, and that money could be a complete waste if the lending company decides to use a different score to measure your risk.

The Problem with Incorrect Scores

On average, one in five individuals has an error in their credit file that makes them look riskier than they actually are. The consequence, of course, is bigger deposits and higher interest rates from lenders. One would assume that with so much money at stake, accuracy would be the priority, but it is not; speed and volume are. For the bureaus, the return on investment from correcting data errors is not greater than the cost of doing so. The three major bureaus hold over 200 million credit files, each containing on average 13 pieces of past and current account information, nearly 2.6 billion pieces of data in all. Every month, billions of new data points require updates, which demands a fast system. The volume of data needing immediate processing from a wide variety of sources makes errors inevitable.

Although it might seem that the credit bureaus are to blame, they are not the only ones at fault; the creditors are as well. It is not the bureaus' job to adjudicate the underlying debt. Their job is to collect information from creditors and build a report from the numbers provided, and often the creditors themselves supply the wrong information: more than half of hospital bills are inaccurate, and debts are frequently misreported (an unrecorded mortgage payment, for example). If you contact a bureau to complain about an error, its legal duty is to consult the creditor and ask whether it stands by its claim. If the creditor says the customer owes the money, the bureau has little choice but to record it. That is where responsibility falls on the consumer: you need to check your credit reports regularly, analyze the data and look out for errors.
After a long, hard-fought battle, Congress granted individuals one free credit report from each bureau per year. The provision quickly gained popularity and is now used by over 40 million consumers; only when they see an error on their reports can customers take action against it. The three main bureaus reportedly receive complaints about inaccurate information at least eight million times a year.

Financial institutions and lenders who have already extended credit trust their proprietary scoring systems. Credit reporting and collection is a large industry, and fixing it would require a big financial investment alongside disruption to current operations. The bureaus therefore have little incentive to pay the cost of change: they trust their algorithms and believe their systems produce accurate reports and scores. With no alternatives available, customers have no choice but to rely on these faulty systems.

Given all this, consumer frustration with the credit reporting system is unsurprising. It is the second-biggest source of complaints handled by the Consumer Financial Protection Bureau, with incorrect information on a credit report accounting for 74% of those complaints, and disputes over the bureaus' investigation of a complaint covering another 11%. Many consumers are now demanding new laws and regulations to address bureau errors. Here are three possible steps policymakers could take.

Annual Access to Credit Reports

Since the system relies on customers to find and flag errors in their own reports, it is crucial that they have access to those reports. Whether by email or post, the bureaus should ensure that reports actually reach their customers. This would help those who cannot access their reports online because of security freezes or inaccurate identifying information.

Create Penalties for Creditors and Customers

Today, nobody but the customer pays a price for inaccurate information; the law only requires the bureau to ask the creditor about its claim. There should be penalties for creditors who repeatedly supply inaccurate information, which would push them to investigate and report accurately in the future. Even if only a few companies were investigated and penalized, the fear of reputational damage and fines would drive the others to improve their systems. Customers who try to wipe out legitimate debts by filing false disputes should be penalized as well. Penalties on both sides would reduce errors throughout the system and produce cleaner, more accurate reporting.

Increase Competition and Lower Barriers to Entry

Widening the variety of information that credit scores use and credit reports contain could also help. Rent and utility bills, or remittances sent to family, could be added to the data already collected and could reveal valuable information about creditworthiness. The big three bureaus benefit from established names and entrenched industry standards, which keeps newer bureaus with newer standards from gaining a foothold. Policymakers should encourage competition through pilot programs or limited guarantees that use these alternative credit reports and data. More competition would spur innovation and efficiency in the credit industry.
This is why it is commonly recommended to monitor your reports at least annually and keep an eye out for errors. Take advantage of the free annual credit report you are entitled to, and consider using a monitoring service as well. TransUnion, for example, offers some of the most advanced services, such as Instant Alerts and Credit Lock. Using these services can protect you from the inaccuracies and fraudulent activity that could ruin your credit score.

The Importance of Monitoring Your Credit Score

Monitoring your score can be easy, quick and inexpensive, and for all the reasons above it is highly recommended. It may well be that two of your scores are perfectly fine while one bad score ruins your chances of loan approval. Monitoring matters not only for keeping your finances in check, but also because identity theft is common, and fraudulent activity and errors on your reports can drag your scores down. There are three main ways to monitor your scores:

- Do it yourself: request copies of your reports and go over them yourself.
- Use a free service: numerous services will monitor your credit reports at no charge.
- Pay for a service: paid services monitor your reports on a schedule and alert you to irregular or potentially erroneous activity.

Do It Yourself

Reviewing reports yourself is slow and labor-intensive, but a few free tools can help.

American Express Credit Guide shows your:
- Current balances
- Credit limit utilization
- Total available credit
- Number of open accounts
- Number of recent hard credit inquiries

Capital One CreditWise shows your:
- Payment history
- Oldest credit line (not average age of accounts)
- Recent inquiries
- Credit used
- New accounts
- Available credit
- Accounts and balances
- Personal details

Chase Credit Journey shows your:
- Account summary
- Personal information
- Public records

Paying for a Credit Monitoring Service

Not everyone trusts their own ability to spot inaccuracies, and many do not have the time it requires. In that case, a paid monitoring service may suit you. Paid services charge a monthly fee, review your reports regularly, and notify you of errors or potential fraud. They include most of the free features along with extras.

FICO Basics 1B ($19.95 per month)
- Includes the Experian report only
- Monthly credit report updates
- Includes FICO Score 8

FICO Ultimate 3B, quarterly reports ($29.95 per month)
- Includes all three credit reports
- Quarterly credit report updates
- 28 FICO scores included
- Identity theft monitoring

FICO Ultimate 3B, monthly reports ($39.95 per month)
- Includes all three credit reports
- Monthly credit report updates
- 28 FICO scores included
- Identity theft monitoring

What to Do in Case of an Error in Your Scores

Since errors are sadly common in credit reports and scores, what should you do if you come across one?

Contact the Credit Bureau

Once you spot an error on your reports, the Consumer Financial Protection Bureau recommends notifying each of the major bureaus (TransUnion, Equifax, Experian) that reported the error.
You can dispute errors on your credit reports either online or by mail. Provide your contact information, explain exactly where you believe the error occurred and why it is wrong, and attach documents supporting your dispute. The Consumer Financial Protection Bureau (CFPB) also advises keeping copies of any letters exchanged and, if you use the mail, sending your dispute by certified mail with a return receipt.

Contact the Furnisher

Furnishers are the companies that supply the bureaus with information, such as banks and credit card issuers. The CFPB recommends contacting the furnisher as well. Send your dispute to the address or contact details the furnisher provides. Depending on the error, it can be faster to approach the furnisher before the credit bureau, saving a step; but if it is an identity-related error made by the bureau itself, contact the bureau first.

Wait Up to 45 Days for the Investigation

The credit bureaus have around a month after receiving your dispute to investigate it, and then must deliver the results to you within about five days. If you reported the dispute to the furnisher, it must likewise investigate and report back within 30 days, although if the furnisher simply stands by its claim, it will not investigate further.

Review the Results

The bureau handling the error will provide the results of the investigation in writing, along with an updated credit report if your dispute led to changes. The bureau will also give you the contact information of the furnisher that supplied the incorrect data.

Check for Updates

Corrections take time to appear on your credit report. The delay depends on the bureau's update cycle and on how long the furnisher takes to send corrected data to the bureau. If your report has not been updated after several months, follow up with the relevant bureau and furnisher.

Conclusion

The complexity of credit scoring and the number of institutions involved make it a hard system to grasp. Technical and daunting as it is, you should treat your credit reports carefully wherever you receive them. Understand the data being reported and use as many monitoring tools as you can, including the free services mentioned above, so that you can spot and report errors as early as possible, before they damage your credit score.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9491137862205505, "language": "en", "url": "https://www.meseuro.com/multiannual-financial-framework-2021-2027/?lang=en", "token_count": 628, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.03564453125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:23644216-0ec4-412f-8534-f92120f3394a>" }
In May 2018 the European Commission presented its proposal for the Multiannual Financial Framework 2021-2027 (MFF), a seven-year budget for the European Union that takes into account the consequences of Brexit and the new needs arising from several shifts in how resources are distributed.

What is the multiannual financial framework

The multiannual financial framework (MFF), also called the financial perspective, is one of the three components of the EU budgetary system, together with the annual EU budget and the European system of own resources. According to the Treaty on the Functioning of the European Union, the multiannual financial framework ensures the orderly evolution of the Union's expenditure, within the limits of its own resources, over a period of at least five years.

The MFF is a budgetary planning tool that defines priorities: how much, and in which areas, the EU should invest over seven years, implementing the Union's policies in key fields such as the environment, security, the single market and cohesion. Its effects on the Member States are significant, yet the full impact of these funds on national budgets is perhaps not sufficiently appreciated. Political debate rarely addresses the economic effects of European programmes, which are essential for European countries, and especially for Italy, one of the countries that pays more into the European coffers than it receives back through funding programmes.

What changes in the new financial perspective

Returning to the MFF: the proposed budget is 1,135 billion euros, almost a third higher than the austerity budget of 2007-2013 (which was between 900 and 1,000 billion euros) but substantially in line with the budget of the previous seven years, 2014-2020, with Member States asked to raise their contribution to 1.11% of gross national income (from 1% today). The picture that emerges is of a considerable increase in resources for research, the environment and youth, and a slight cut to traditional policies: less money for agriculture and cohesion, more investment in research (the successor to the Horizon 2020 programme, for example, grows from 80 billion to almost 100 billion euros), immigration, defence and the environment.

The European Parliament has criticised the small overall budget and the cuts to the Common Agricultural Policy (CAP) and the European Social Fund, while expressing its intention to raise environmental spending in pursuit of the goal of cutting emissions by 30%, in line with the commitments of the Paris Agreement on climate change.

Italian reactions to the MFF

Several Member States, including Italy, have opposed the cuts to the cohesion funds and the CAP. On this point Confindustria, too, hopes for a budget that, while attentive to innovation and research, does not strip cohesion policies of their real contribution to ambitious economic growth in Europe.

The proposals for the new financial framework are under examination; the hope is that Italy will manage to play a leading role in the negotiations, achieving results useful both for economic growth (for example in agricultural policy, transport and technological development) and for the protection of national interests, including the request for more resources to cooperate with countries of origin and transit in facing the migration crisis.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.935989260673523, "language": "en", "url": "https://www.proprofs.com/c/project/need-know-project-risk-assessment/", "token_count": 2006, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1337890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:962df04f-5928-4bca-beff-bea171d2ab4a>" }
If I were to give one project risk assessment example that ultimately led to project failure, it would be the launch of the Samsung Galaxy Note 7. Do you remember that launch?

The Samsung Galaxy Note 7 was launched on Aug. 2, 2016. It was said to be "the best smartphone the world will see" that year. But in just twenty days it went from "the best" to "the most deadly" smartphone of the year! The phone had a battery defect that caused many units to catch fire in use. It was a disaster! The ordeal lasted forty-seven days, after which Samsung completely stopped production of the Note 7. The loss on this project was estimated at $17 billion. Ouch!

This is precisely the kind of damage companies tend to face when they don't pay attention to project risks. Risks may seem petty or irrelevant, but in the long run - they bite!

Before we get into risk analysis in project management, we first must know what it is.

What Is Project Risk?

Project risk can be defined as: any uncertain or unfortunate event that can cripple a project, keeping team members from achieving its objectives. Such risks can be known or unknown, and no project is completely immune to them. It is also essential to know that project risks come in many types, which is one reason their occurrence is so difficult to predict. Let's discuss the types of risks in project management.

Types of Project Risks

There are nine types of risks in project management:

- Cost risks
- Schedule risks
- Performance risks
- Governance risks
- Strategic risks
- Operational risks
- Market risks
- Legal risks
- Other miscellaneous risks

The miscellaneous risks are those associated with external hazards; examples include earthquakes, cyclones, storms, and floods; sabotage, terrorism, and vandalism; labor strikes; and civil unrest.

While known risks can be anticipated and eliminated in advance with careful planning, unknown risks are hard to tackle. That's why organizations use a risk assessment strategy: to learn about possible risks and either avoid them or minimize their impact.

What Is Project Risk Assessment?

Project management risk assessment is one of the most important steps in the risk management process. Risk assessment can be defined as: the determination of the qualitative or quantitative value of the risks that may or may not crop up in a project. Like any process, project risk assessment has several stages, and each stage is crucial to eliminating project risks and their potential fallout. Let's now walk through those stages.

Project Management Risk Assessment: An Overview

Project risk assessment answers the question, "how do you identify project risks?" The process takes five things into account:

- Event: What can happen that affects your project and causes a risk?
- Timeframe: When can the risk occur?
- Probability: What are the chances that the risk will actually occur?
- Impact: How badly will it affect the project?
- Factors: What triggers could cause a risk to occur?

With all these factors in mind, you should follow a project risk assessment process. Let's discuss how to go about project risk analysis and the steps to follow.

Project Management Risk Analysis: 5 Steps to Follow

# 1 Identification

Identifying the risk is the first step, and it requires the active involvement of team members regardless of their roles.
Any person who is a part of the project lifecycle qualifies to join the discussion and voice an opinion. The goal of this step is to identify the probable risks that may trouble any phase of the project later. As multiple factors can lead to project failure, you should comprehensively discuss the key risks: cost, schedule, performance, and the other types listed above. When the entire team participates, you get more insight into possible problems; otherwise, you may end up missing a few. You can discuss and pen down the risks or, better, document them in a simple project management tool to keep track of them.

Initiate a discussion of past mistakes to ensure that they don't affect current and future projects. After that, you can brainstorm solutions to those mistakes so that they have no impact on your project lifecycle. To proceed effectively with this step, you can use a simple project management tool to collect inputs from everyone on a single platform.

# 2 Analysis

Probability Analysis

When you have multiple risks to mitigate, analyzing the likelihood of their occurrence is crucial. You can take a cue from past projects or ask your team members for input to gauge which risks could cripple the project. After that, decide which risk needs to be nailed down first and which can be addressed later. Categorize the risks as high, medium and low probability for better management and to stay on top of the project lifecycle. When judging probabilities, it's crucial to rely on a data-driven approach rather than simply following your intuition.

Impact Analysis

Different events have different repercussions for the project lifecycle; some undesirable events hit the project much harder than others. Until you gauge their impact, it's impossible to come up with a mitigation plan. The ideal approach is to assess the impact of individual risks and focus on minimizing it. Since you have already done the probability analysis, you can segregate risks into four quadrants:

- High probability - high impact
- High probability - low impact
- Low probability - high impact
- Low probability - low impact

Focus your energy on the "high probability, high impact" risks first; then you can deal with the lower-impact risks. You must also learn the type of impact associated with every identified risk. It might threaten your budget, your timeline, or some other factor; decide which you need to work on first. Once you prioritize, setting realistic expectations for your client becomes easier, which also helps prevent burnout. A small sketch of this probability-impact sorting follows below.
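To make the quadrant sorting concrete, here is a minimal sketch. The risk names, probabilities and impact ratings are invented placeholders, and the 0.5 cut-offs are arbitrary assumptions; in practice a team would choose its own scales and thresholds.

```python
# Minimal probability-impact sorting sketch; all inputs are invented.
# Probabilities and impacts are on a 0.0-1.0 scale, split at 0.5.

risks = [
    {"name": "key developer attrition", "prob": 0.7, "impact": 0.8},
    {"name": "vendor delivery slips",   "prob": 0.6, "impact": 0.3},
    {"name": "data-center outage",      "prob": 0.1, "impact": 0.9},
    {"name": "minor scope tweaks",      "prob": 0.2, "impact": 0.2},
]

def quadrant(risk, threshold=0.5):
    """Label a risk with its probability-impact quadrant."""
    p = "high" if risk["prob"] >= threshold else "low"
    i = "high" if risk["impact"] >= threshold else "low"
    return f"{p} probability - {i} impact"

# Work the high-probability, high-impact items first: sort by the
# product of probability and impact, largest exposure on top.
for r in sorted(risks, key=lambda r: r["prob"] * r["impact"], reverse=True):
    print(f'{r["name"]:<25} -> {quadrant(r)}')
```

Sorting by the probability-impact product is only one reasonable prioritization rule; a team worried about rare catastrophic events might instead pin every high-impact risk to the top regardless of likelihood.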
# 3 Avoidance

If you find a risk that can be avoided, consider doing away with it at the beginning of the project lifecycle. Avoiding a risk usually means leveraging proven technical strategies that reliably eliminate it; in effect, you can breathe easy because the probability of that risk drops to almost zero.

# 4 Mitigation

Although some risks can be avoided, that won't always be the case: there are risks you can't avoid no matter how hard you try. If your project has a high rate of employee attrition, for instance, you can set aside buffer resources who can step in when the need arises, ensuring the risk doesn't cripple the bottom line of your project. For such imminent and unavoidable risks, you need a mitigation plan. Having one ensures minimal impact on your project if an unfortunate event occurs during the project lifecycle.

# 5 Monitoring

The real worth of a plan is realized only after its successful implementation. Even if you draft the most comprehensive risk plan, the application is what counts, and for that to happen you need a risk monitoring plan. A monitoring plan helps you keep track of whether teams are following the required steps, and it maintains accountability, since you can now determine whether a person is following the plan or not.

Risk Assessment in Project Management: Deploy the Right Tools

No project manager can forgo risk assessment, as it directly affects the project's bottom line. As the old adage goes, "Prevention is always better than cure." Risk assessment helps every stage of your project lifecycle run smoothly without significant issues. Even if problems arise, you always have a mitigation plan telling you exactly what to do to avert damage.

Drafting an assessment plan alone is not enough, for several reasons. Until team members are on the same page, there is always a chance of miscommunication and conflict. Adequate risk assessment training can close those gaps and ensure your members aren't running from pillar to post when an incident occurs. At every stage you are likely to receive multiple inputs and can easily lose track; to avoid that, consider using project risk assessment tools to stay on top of the process. A good tool will not only help you plan but also promote collaboration across your teams.

Now that you have a basic idea about risk assessment in project management, let's cover a few FAQs on related topics.

Q1. What is risk assessment in project management?
- Project risk assessment can be defined as the process of identifying and eliminating the possible risks in a project.

Q2. What are project risk factors?
- A project risk factor is an event or situation that might give rise to one or more risks.

Q3. What makes a project high risk?
- Risk is about the uncertainty of events that affect a project negatively or positively. Examples include safety risks in construction projects and IT project risks such as security breaches.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.955437421798706, "language": "en", "url": "https://www.wisegeek.com/what-are-the-benefits-of-comparative-advantage.htm", "token_count": 494, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.06884765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f0173164-8f6c-4061-9077-c248d610bdc3>" }
The concept of comparative advantage was first formulated by economist David Ricardo as an explanation of the benefits of international trade for countries. His theory concluded that a country could increase its income by specializing in certain products and services and selling these on the international market. Businesses also may have a comparative advantage over their competitors resulting from certain assets, skills or geographical and historical factors. For example, an industry may be located in an area where the workforce is specialized in certain skills, or an agricultural business may be situated in an area of rich soil and favorable climate. The benefits of comparative advantage may also apply to people, providing a reason why they should specialize in certain skills rather than others.

Ricardo's theory of comparative advantage points out that if a country is relatively efficient at producing certain products, then it should specialize in these, even if it does not have an absolute advantage in their production. In other words, even though other countries might produce these goods more efficiently, a country should still specialize in certain goods if the opportunity cost of producing them is lower in that country. The opportunity cost is the cost of the next best use that could be made of the resources devoted to producing the goods. Opting to specialize in goods that it produces comparatively efficiently could help a country sell more and increase its income.

The benefits of comparative advantage are that if the country specializes in those goods in which it is relatively most efficient, then total national output and, therefore, national income may be increased. The country can produce more of those goods than it needs and export them, using the export proceeds to purchase imported goods and services that it does not produce. In economists' terms, trade lets the country consume beyond its own production possibility frontier, thereby increasing the goods and services available to it. The benefits of comparative advantage may, therefore, result in greater national income. (A small numerical sketch of this opportunity-cost logic appears below.)

In the case of a trading company, the benefits of comparative advantage may explain how a company can increase its profits by concentrating on producing those goods and services for which it has a comparative advantage over its competitors. This may mean concentrating on core products and core competencies. The company may be more efficient than its competitors at producing certain items owing to the possession of advanced tangible assets or valuable intangible assets. For example, the company may possess patents or know-how enabling it to make its processes or products more efficient.
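Here is a small illustration of the opportunity-cost arithmetic described above. The two countries, the goods and the labor requirements are invented numbers chosen only to make the comparison visible; they are not data from the text.

```python
# Stylized Ricardian example with invented numbers.
# labor[country][good] = hours of labor needed to make one unit.
labor = {
    "Country A": {"cloth": 2, "wine": 4},
    "Country B": {"cloth": 6, "wine": 8},
}

def opportunity_cost(country, good, other):
    """Units of `other` forgone to produce one unit of `good`."""
    return labor[country][good] / labor[country][other]

for country in labor:
    oc_cloth = opportunity_cost(country, "cloth", "wine")
    oc_wine = opportunity_cost(country, "wine", "cloth")
    print(f"{country}: 1 cloth costs {oc_cloth:.2f} wine, "
          f"1 wine costs {oc_wine:.2f} cloth")

# Country A needs less labor for BOTH goods, so it holds an absolute
# advantage in both. Yet A's opportunity cost is lower only for cloth
# (0.50 vs 0.75 wine), while B's is lower for wine (1.33 vs 2.00
# cloth). So A should specialize in cloth and B in wine: exactly the
# comparative-advantage result described in the article.
```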
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9384591579437256, "language": "en", "url": "https://dollarsandsense.sg/beginners-guide-alternative-investments-singapore/", "token_count": 603, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.041259765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:ee826897-0cd1-40f8-9f98-023b081d59fb>" }
When you think about investing, the most common instruments that come to mind are stocks, ETFs, REITs and bonds. However, alternative investments exist beyond these conventional financial assets.

What Are Alternative Investments?

Alternative investments are investments that perform differently from stocks and bonds in the public equity market. They are a means for investors to diversify their portfolios, and because they perform differently from the public market, they give you a chance to reduce your exposure to market volatility. Below are some of the main types.

Robo-Advisors

Robo-advisors are an increasingly popular way of investing in ETFs. A robo-advisor is a digital financial adviser that constructs and rebalances a personalised portfolio for you based on strategic algorithms or mathematical rules. The logic behind robo-advisors is to avoid the behavioural bias that sways retail investors into buying or selling on impulse or in reaction to market swings. Automating portfolio construction and rebalancing increases the probability of higher long-term returns.

Overseas Properties

Overseas property remains an attractive alternative investment for investors looking to earn recurring passive income. For property investors already heavily exposed to the Singapore market, investing in property overseas is a good way to diversify or increase returns.

Peer-to-Peer (P2P) Lending

Peer-to-Peer (P2P) lending is the lending of money to individuals or businesses through online platforms. Lenders use their money to finance others' loans in return for interest, allowing an individual or company to obtain a loan from other individuals rather than borrowing traditionally from a bank. For lenders, this is an opportunity to earn higher returns and expand their portfolios. However, P2P lending carries the risk of borrowers defaulting and the promised yield never materialising.

Tangible Assets

Another form of alternative investment is the purchase of tangible assets such as wine, watches, rare coins, stamps or precious metals.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9275239109992981, "language": "en", "url": "https://erg.berkeley.edu/taking-account-of-green-technology/", "token_count": 176, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.49609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c3d37dd7-f16b-4afb-b303-1abd7e851871>" }
What if it's not that "green"? Professor Dan Kammen comments on Bloom's carbon promises.

The California Public Utilities Commission gives rebates to companies that use "Bloom boxes." Bloom Energy claims that its boxes reduce carbon emissions and increase energy efficiency. In reality, however, the carbon reductions have not lived up to Bloom's projections. While some call this "greenwashing," Dr. Kammen emphasizes that companies have an obligation to explain why new technologies do not meet expectations: "If you are receiving public money or private investor money you need to own up to your bottom line. You do have a responsibility to explain that difference. It's not being irresponsible—it's being an ambassador for an emerging technology."

See the full story on NBC.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9561790227890015, "language": "en", "url": "https://forbes.ge/news/3175/Three-Countries-With-the-Largest-Number-of-Bitcoin-Miners", "token_count": 614, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.4765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4151676e-c3a1-49c9-b133-8d9a40d46ff7>" }
Three Countries With the Largest Number of Bitcoin Miners

Bitcoin mining is a process that both adds transactions to the cryptocurrency's blockchain ledger and unlocks new bitcoins into the system. The process involves using computer power to solve complex mathematical puzzles. Mining is essential to Bitcoin's security: the complexity and effort involved make the cryptocurrency system less vulnerable to attack.

What is a Mining Pool?

Mining is now predominantly carried out in large, specialised warehouses packed with mining hardware, and the hashing power in these venues is typically directed towards mining pools. Mining pools are groups of miners who work together to mine under an agreement to share block rewards when they are unlocked. The rewards are allocated in proportion to the mining hash power each member contributes to the pool.

Which countries have the most Bitcoin miners?

The process of mining bitcoin has become so complex that the amount of computing power, and thus electricity, required is sizeable. The largest numbers of bitcoin miners are therefore found in countries where new technology is highly prevalent and access to cheaper energy is available; mining tends to gravitate towards countries with cheap electricity. Furthermore, in each country the top handful of mining companies have concentrated a large share of network hash power, creating a more centralised mining structure. Only a few countries can boast a concentrated mining effort, and as such, only a few countries are able to export bitcoins at scale.

Georgia

Based in Georgia, BitFury is known as one of the largest players in the Bitcoin mining business, developing and selling efficient mining hardware to Bitcoin users and businesses. BitFury is one of the leading full-service blockchain technology companies and one of the largest private infrastructure providers in the blockchain ecosystem. As of 2016, BitFury was mining about 15% of all bitcoins.

China

China mines, and therefore exports, the most bitcoins of any nation. This has been driven in part by cheap electricity, which has allowed Chinese miners to capture a large percentage of Bitcoin's hash power. Most mining pools, including some of the world's largest, are based in China.

The United States

As home to 21 Inc., one of the largest Bitcoin mining companies in the world, the United States is among the top bitcoin mining countries. Based in California, 21 Inc. runs a huge mining operation while selling low-powered mining solutions as part of its 21 Bitcoin Computer product. The majority of 21 Inc.'s hash power is pointed at its own mining pool, and as of 2016, 21 Inc. mined about 3% of all bitcoins.

Smaller mining pools operate the world over, cumulatively making up a far smaller part of the Bitcoin mining endeavour. The remaining hash power is spread across the globe, with only approximately 20% of all bitcoins mined outside the countries listed above. The sketch below shows how a pool's proportional reward split works.
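The proportional payout rule mentioned above is simple arithmetic; here is a minimal sketch. The miner names and hash rates are invented, and real pools layer fees and more elaborate schemes (such as PPS or PPLNS) on top of this basic idea.

```python
# Proportional pool payout: each miner's share of the block reward
# equals their share of the pool's total hash power. Names and hash
# rates below are invented; real pools also deduct fees and may use
# schemes such as PPS or PPLNS instead of a plain proportional split.

BLOCK_REWARD_BTC = 12.5  # the block subsidy in the 2016-2020 era

hash_power = {          # terahashes per second contributed
    "miner_a": 80.0,
    "miner_b": 15.0,
    "miner_c": 5.0,
}

total = sum(hash_power.values())
for miner, th in hash_power.items():
    payout = BLOCK_REWARD_BTC * th / total
    print(f"{miner}: {th / total:.1%} of pool -> {payout:.4f} BTC")
```

With these numbers, miner_a contributes 80% of the pool's hash power and so receives 10.0 of the 12.5 BTC reward, which is the whole appeal of pooling: steady proportional income instead of a tiny chance of winning an entire block alone.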
{ "dump": "CC-MAIN-2020-29", "language_score": 0.961762011051178, "language": "en", "url": "https://mystiquechalet.com/tag/paneles-solares-guatemala/", "token_count": 591, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0830078125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2002d6f6-eb0e-4463-a2b1-1246926b47c8>" }
The shift to solar energy is a phenomenon that is catching on very quickly. Installing a solar energy generation system allows the energy of the sun to be converted into usable electrical energy that can then be stored. Given all of this, it may be argued that the most important part of the system is the solar panel: it is through the panel that the energy conversion happens and, ultimately, that the level of independence available to consumers of solar energy is achieved. Hence, it is very important to select the best quality solar panels for your needs.

What is the cost of a solar panel?

To determine the price, various factors have to be taken into consideration. The price can also be heavily influenced by the subsidies that governments put in place to promote renewable energy sources. Panels are usually priced according to how much power they generate (measured in watts), the physical size of each panel, the durability or warranty period offered, the quality of the materials used, and the certifications the panels carry. The price may also vary depending on how many panels are purchased as part of the package; as a general rule, the price per panel drops as the number of panels in the package increases. Keep in mind, however, that price should never be the main factor when buying a solar panel: the panel has to match the purpose perfectly in order to deliver its maximum performance.

Factors to Look Out for When Purchasing Solar Panels

It is always a good idea to search for the best place to buy solar panels before making a purchase. This usually ensures that all the factors that add up to an efficient, high-performance product are taken care of. One essential thing to check is that the temperature coefficient has a low percentage loss per degree Celsius. Given that conversion efficiency measures how much solar energy a panel converts into electricity, you should look for a panel with high conversion efficiency. A good solar panel will also have little or no potential-induced degradation (PID). In addition, it is wise to research the warranty period granted by the manufacturer, since it reflects the confidence the company has in its panels. Keep in mind that the light-induced degradation (LID) of a good solar panel should also be little to none, because higher LID means the panel produces less power. While it is mostly considered an environmental cost, looking at the embodied cost of the panel is also generally worthwhile, since it measures how quickly the investment in the panels can be paid back through the energy they produce. A rough sketch of how efficiency and the temperature coefficient interact is given below.
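To show how the efficiency and temperature-coefficient figures from a datasheet translate into output, here is a rough back-of-the-envelope sketch. The panel size, efficiency, temperature coefficient, irradiance and cell temperature are all made-up illustrative values, and the linear derating formula is a common first-order approximation rather than a full panel model.

```python
# Back-of-the-envelope panel output estimate; all numbers are invented.
# Standard Test Conditions (STC) assume 1000 W/m^2 at 25 C cell temp.

panel_area_m2 = 1.7     # typical residential panel size (assumed)
efficiency    = 0.20    # 20% conversion efficiency (assumed)
temp_coeff    = -0.004  # -0.4% output per degree C above 25 C (assumed)

irradiance_wm2 = 800    # actual sunlight hitting the panel (assumed)
cell_temp_c    = 45     # cell temperature on a warm day (assumed)

# First-order model: area * irradiance * efficiency, linearly derated
# for cell temperature above the 25 C reference point.
power_ideal = panel_area_m2 * irradiance_wm2 * efficiency
derate      = 1 + temp_coeff * (cell_temp_c - 25)
power_w     = power_ideal * derate

print(f"Estimated output: {power_w:.0f} W (derated by {1 - derate:.0%})")
```

With these assumed numbers the panel would produce about 250 W, an 8% loss to heat alone, which is why a low temperature coefficient matters so much in hot climates.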
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9173446893692017, "language": "en", "url": "http://www.rroij.com/open-access/are-generic-drugs-as-safe-as-branded-drugs-.php?aid=77617", "token_count": 2534, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.337890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4df7c3a8-3c36-47d8-80ad-3418a1fbb561>" }
Lalitha school of pharmacy, Hyderabad, India Received date: 04-06-2016; Accepted date: 22-06-2016; Published date: 28-06-2016 Visit for more related articles at Research & Reviews in Pharmacy and Pharmaceutical Sciences Generic Drugs, Branded Drugs, Pharmaceuticals, Medications, Abbreviated New Drug Application (ANDA), FDA. Generic medicine contains equivalent active ingredients, within the exact same strength, as brand-name medicine. Once a drug is initial developed, the drug company that discovers and markets it receives a patent on its new drug. The patent sometimes lasts for twenty years, to allow the originating company an opportunity to recoup its analysis investment. Once the patent expires, a generic version of the drug might become on the market. Generics ar marketed underneath the drug's chemical, or generic, name and meet an equivalent FDA quality and effectiveness standards because the original [1-5]. Is generic medication as effective as brand-name drugs? A drug is that the same as a drug in dose, safety, strength, quality, the approach it works, and the approach it's taken and therefore the approach it ought to be used. FDA needs generic medication has constant top quality, strength, purity and stability as brand-name medication. Not each drug contains a drug [6-10]. Once new medication area unit 1st created they need drug patents. Most drug patents area unit protected for twenty years. The patent, that protects the corporate that created the drug 1st, does not permit anyone else to form and sell the drug. Once the patent expires, alternative drug firms will begin commerce a generic version of the drug. But, first, they need to take a look at the drug and therefore the agency must approve it. Creating a drug prices numerous cash. Since drug manufacturers don't develop a drug from scratch, the prices to bring the drug to plug area unit less; so, generic medication area unit sometimes more cost-effective than brand-name medication. But, drug manufacturers should show that their product performs within the same approach because the drug . How generic medication approved? Drug corporations should submit abbreviated new drug application (ANDA) , for approval to promote a generic product. Drug corporations gained bigger access to the marketplace for pharmaceuticals, and originator corporations gained restoration of patent lifetime of their product lost throughout FDA's approval method [11-16]. New drugs, like alternative new product, square measure developed beneath patent protection. The patent protects the investment within the drug's development by giving the corporate the only right to sell the drug whereas the patent is in result. Once patents or alternative periods of exclusivity expire, makers will apply to the authority to sell generic versions. The ANDA , method doesn't need the drug sponsor to repeat expensive animal and clinical analysis on ingredients or indefinite quantity forms already approved for safety and effectiveness . Standards for a generic drug Health professionals and customers may be assured that agency approved generic medicine have met an equivalent rigid standards because the pioneer drug. 
It should contain an equivalent active ingredients because the pioneer drug(inactive ingredients could vary) be identical in strength, dose kind, and route of administration, have equivalent use indications, be bioequivalent, meet an equivalent batch necessities for identity, strength, purity, and quality, be factory-made underneath an equivalent strict standards of FDA's smart producing observe laws needed for pioneer product [16-20]. When a drug product is approved, it's met rigorous standards established by the agency with relation to identity, strength, quality, purity, and efficiency. However, some variability will and will occur throughout producing, for each brand and generic medicine. Once a drug, generic or brand, is factory-made, terribly tiny variations in purity, size, strength, and different parameters area unit allowable. Agency limits what proportion variability is suitable. Generic medicine area unit needed to possess an equivalent active ingredient, strength, dose kind, and route of administration because the brand product [21-25]. Generic medicine ought not to contain an equivalent inactive ingredients because the brand product. The drug manufacturer should prove its drug is that the same as (bioequivalent) the brand drug. For instance, once the patient takes the drug, the number of drug within the blood is measured. If the amount of the drug within the blood area unit an equivalent because the levels found once the brand product is employed, the drug can work an equivalent. Through review of bioequivalence knowledge, agency ensures that the generic product performs an equivalent as its various brand product. This customary applies to all or any generic medicine, whether or not immediate or controlled unharness. All generic producing, packaging, and testing sites should pass an equivalent quality standards as those of name medicine, and also the generic product should meet an equivalent exacting specifications as any brand product. In fact, several generic medicine area units created within the same producing plants as brand drug product . Generics drugs works same as brand drugs A study evaluated the results of 42 published clinical trials that compared diabetic generic drugs to their brand name counterparts. There was no evidence that brand name heart drugs worked any better than generic drugs . The Food and Drug Administration (FDA) requires generic drugs to have the same quality, strength, purity and stability as their brand-name versions. Generic drugs are thoroughly tested to make sure their performance and ingredients meet the FDA’s standards for equivalency [26-30]. Generic drugs work in your body in the same way and in the same amount of time as brand-name drugs. Both brand-name and drug facilities should meet identical standards; the bureau won’t allow medicine to be created in substandard facilities. The bureau conducts regarding three, 500 inspections a year to make sure standards are met. In fact, brand-name companies are connected to associate degree calculable five hundredth of drug production. They often build generic copies of their own or alternative brand-name medicine, and then sell them with a generic name. The science of bioequivalence evaluations for generics has been in place in most countries for more than 20 years with an established track record of therapeutic equivalence. These evaluation methods have been so successful in establishing generic drug standards that they are largely consistent between all of the major drug regulators worldwide. 
Consumers and health professionals alike can be reassured that generic drugs approved under these regulatory frameworks are indeed bioequivalent and, therefore, interchangeable with brand-name products.

Why are generic drugs cheaper?

Cheaper does not mean lower quality; that is a misconception. A generic drug is cheap compared with the original because the generic manufacturer does not need to repeat the expensive clinical trials, or pay for expensive advertising, marketing, and promotion. Generic drug companies also do not bear the expense of researching and developing a new chemical entity. There is usually competition among generic drug manufacturers [30-35]. Most generic manufacturers turn to the same products, and this creates competition in the marketplace, typically leading to lower costs (Table 1).

| Used as | Generic name | Price | Branded drug | Price |
| Painkiller | Paracetamol | Rs 2.45 | Crocin | Rs 11 |
| Painkiller | Paracetamol syrup | Rs 9.00 | Crocin (syrup) | Rs 15 |
| Painkiller | Diclofenac sodium + Paracetamol | Rs 4.4 | Diclogesic | Rs 19.40 |
| Antibiotic | Amoxicillin | Rs 13.2 | LMX | Rs 40 |
| Antibiotic | Azithromycin | Rs 41.8 | Azee | Rs 107 |
| Anti-TB | Ethambutol | Rs 14.8 | Myambutol | Rs 15.3 |
| Vitamins | Folic acid | Rs 2.8 | Folivite | Rs 11.8 |
| Vitamins | B-complex | Rs 1.8 | Becosul | Rs 11.0 |
| Cardiovascular (blood pressure) | Atenolol | Rs 7.0 | Aten | Rs 23.8 |

Table 1: Price list of generic and branded drugs.

When a company develops a new drug and submits it for Food and Drug Administration approval, a 20-year patent is issued, preventing other companies from selling the drug during the life of the patent. As a drug patent nears expiration, any drug manufacturer can apply to the FDA to sell its generic version. Because these manufacturers did not bear the same development costs (such as years of expensive research), they can sell the drug at a discount. Once generics are allowed, competition keeps the price down. On average, the cost of a generic drug is 80 to 90 percent lower than the brand product. In 2011 alone, the use of FDA-approved generics saved $158 billion, an average of $3 billion every week [36-40].

The FDA monitors adverse reactions to generic drugs and regularly reviews generic products. The FDA has stepped in to take control of generic drugs and is applying stringent manufacturing rules regarding product quality, and it is encouraging the generic industry to investigate whether, and under what circumstances, quality problems occur. The monitoring of adverse events for all drug products, including generic drugs, is one aspect of the overall FDA effort to evaluate the safety of drugs after approval. Many times, reports of adverse events describe a known reaction to the active drug ingredient. Reports are monitored and investigated, when appropriate. The investigations may lead to changes in how a product (brand name and generic counterparts) is used or manufactured.

Safety of generic drugs

There is a misconception that a generic drug will not work as well as the branded product. Generic and brand-name drugs have identical active ingredients, and generic drugs must meet standards for bioequivalence, so people can take generic drugs without fear [41-45]. The FDA does not permit a 45 percent difference in the effectiveness of generic drug products. When it comes to price, on the other hand, there is a genuinely big difference between generic and brand-name drugs.
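As a rough illustration of that price gap, the sketch below computes the percentage saving for a few of the rows in Table 1 above; the prices are exactly the rupee figures listed there, and nothing else is assumed.

```python
# (generic name, generic price in Rs, branded name, branded price in Rs),
# taken directly from Table 1.
rows = [
    ("Paracetamol", 2.45, "Crocin", 11.00),
    ("Amoxicillin", 13.20, "LMX", 40.00),
    ("Azithromycin", 41.80, "Azee", 107.00),
    ("Ethambutol", 14.80, "Myambutol", 15.30),
    ("Atenolol", 7.00, "Aten", 23.80),
]

for generic, g_price, brand, b_price in rows:
    # Saving as a percentage of the branded price.
    saving = 100 * (b_price - g_price) / b_price
    print(f"{generic:<13} vs {brand:<10}: {saving:5.1f}% cheaper")
```

Most of these rows show savings of roughly 60 to 80 per cent, although the Ethambutol/Myambutol pair (about 3 per cent) shows the gap is not universal. Note that the 80 to 90 percent average cited earlier refers to US market data, while Table 1 lists Indian retail prices, so the two figures need not match exactly.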
Appearance is another matter. U.S. trademark laws do not allow a generic drug to look exactly the same as another drug already on the market. For that reason, the color and shape of a generic pill may be different from the brand-name version. Sometimes it will have a different coating or flavor. Differences in taste or appearance do not affect the drug's safety or effectiveness.

You can use generics with confidence. Although they may look different from their brand-name versions, generics are safe and effective. As always, any medication changes should be discussed with your physician and pharmacist.

The savings associated with policies that encourage the use of inexpensive generic prescription drugs make them an obvious choice in the struggle to contain health care costs. However, policymakers and researchers should address the questions surrounding the therapeutic equivalence of generic drugs, develop ways of encouraging generic utilization among all consumers, and create a statutory pathway for the approval of generic biological drugs. Additionally, given the inherent complexity of the health care system, it is likely that a number of generic-utilization policies will have to be combined into a more comprehensive approach before generic drug utilization can be maximized [46-50].

It is the government's task to supply health professionals with accurate information about generic medicines. Different initiatives, such as visits from government representatives, audit and feedback on prescribing data, and pharmacotherapeutic discussion groups (with physicians only, or with physicians and pharmacists), have therefore been implemented in some European countries. These activities are meant to inform physicians about the benefits of generic medicines and point them to the cost consequences of their prescribing behavior. Health care professionals' knowledge and perception of generic medicines were positively influenced by these activities, which demonstrates the need for continuing medical education for both physicians and pharmacists.

People should be aware of both the brand-name and generic versions of a medicine, in order to help them understand that the drugs share the same active ingredients. Because generic drugs often differ in appearance or packaging from their brand-name equivalents, providers should also clear up any confusion patients may have by reminding them that these visual details have no impact on a drug's safety or effectiveness [18,21].
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9323798418045044, "language": "en", "url": "https://anwaar.squ.edu.om/en/2018/05/07/project-examines-population-structures-of-spiny-lobster-along-omans-coastline/", "token_count": 920, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.1220703125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3608320b-cbae-4ad9-ba93-9c884463397d>" }
The government of the Sultanate of Oman, in its ninth Five-Year National Development Plan (2016-2020), has identified fisheries as a key sector for contributing to the country's gross domestic product (GDP). In support of future fishery and aquaculture development in Oman, the Ministry of Agriculture and Fisheries Wealth (MoAFW) has conducted extensive surveys of potential sites, developed a comprehensive guide to better management practices for Oman fisheries and aquaculture, and developed a national strategic plan.

With about 3,100 km of coastline providing a home to more than 900 species of fish and crustaceans, the Sultanate has long been the largest seafood supplier in the Gulf. Landings were around 280,000 tons in 2016, valued at more than 240 million Omani rials (R.O.). Ensuring a sustainable and viable long-term fishery of marine species will help diversify the economy, in addition to generating wealth and foreign currency earnings via exports and increases in gross domestic product and employment. All these objectives are in line with the national ambitions of the government of Oman and will be of long-term benefit to its citizens.

In Oman, the major commercial species, the scalloped spiny lobster, Panulirus homarus (P. homarus), inhabits the Arabian Sea coast between Ras Al-Hadd and Dalkut. It is a reef-dwelling species, most abundant on coral and coastal fringing rocky reefs and the areas surrounding them.

Spiny lobster fisheries have a long tradition in Oman and are currently experiencing a long-term decline in catch. The annual harvest of Omani lobsters has declined dramatically, from 2,000 tons in the 1980s to only about 485 tons in 2016. By contrast, the gross unit revenue from the lobster fishery has increased from around 3,000 R.O. per ton in the 1980s to more than 5,000 R.O. per ton in 2016. This has accelerated the high demand for lobster and contributes to the overexploitation the lobster fishery is currently experiencing.

To counteract this breakdown, Oman's fishery managers have announced targeted regulations and recommendations that are primarily based on data on growth, mortality and catch. However, to achieve comprehensive fishery management guidelines for the species, a wide range of biological aspects should be considered, including demographic interactions of individuals and the genetic structuring of the whole population. This will help in understanding variations within and between lobster stocks, and will enable ministry officials to introduce regional management and to manage possibly discrete stocks.

In a project funded by The Research Council of Oman (TRC), scientists at the Center of Excellence in Marine Biotechnology (CEMB) at Sultan Qaboos University (SQU) have used state-of-the-art genetic tools to examine population structures of spiny lobster along the entire coastline of Oman. The project is led by Dr. Madjid Delghandi, Senior Scientist at the Center of Excellence in Marine Biotechnology. This work is unique and describes a collaborative effort between domestic scientists from the CEMB, the MoAFW, and the College of Agriculture and Marine Sciences at SQU, along with international collaborators from Australia and South Africa. The generated knowledge will strongly contribute to sustainable fishery management and subsequently help protect the spiny lobster stock in Oman. It also supports future activities related to commercial aquaculture development of this species in Oman and in other tropical regions.
The results from this study indicate the presence of two major stocks of scalloped spiny lobster in Oman: one consisting of stock from the Al-Sharqiyah and Al-Wusta governorates, and a second comprising spiny lobsters from Dhofar. The findings support the idea of regional fishery management measures for these genetically distinct stocks.

During this research project, one PhD student and three Omani MSc students were educated, and training was given to three other graduate students. The results of the project received awards at an international conference in South Africa, and findings from the project have been published in seven scientific manuscripts in international journals with high impact factors.

The other members of the team involved in this project are Rufaida Al Breiki, PhD student; Abdul-Aziz Said Al-Marzouqi, General Director of Fisheries Development, Ministry of Agriculture & Fisheries; Dr. Hussein Samh Al-Masroori, Assistant Professor, Department of Marine Science & Fisheries; and Mohammed Al Abri, Assistant Professor, Department of Animal and Veterinary Sciences, College of Agriculture and Marine Sciences, SQU.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9429404735565186, "language": "en", "url": "https://instituteoftrade.org/african-continental-free-trade-area/", "token_count": 777, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.07666015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e54c52f7-321d-4664-b28c-2a745a344aee>" }
African Continental Free Trade Area

The AfCFTA agreement was signed in March 2018 by 52 member countries out of 55. Three countries did not sign the agreement: Eritrea, Nigeria and Tanzania. The agreement will come into force once 22 countries ratify it. By the end of February 2019, 15 countries (Ghana, Kenya, Rwanda, Niger, Chad, Congo Republic, Djibouti, Guinea, eSwatini (former Swaziland), Mali, Mauritania, Namibia, South Africa, Uganda and Ivory Coast (Côte d'Ivoire)) had deposited their instruments of ratification with the African Union, and 4 countries (Sierra Leone, Senegal, Togo and Egypt) had parliamentary approval, making a total of 19. This progress is expected to put the AfCFTA into force before July 2019.

The African Continental Free Trade Area (AfCFTA) will cover a market of 1.2 billion people and a gross domestic product (GDP) of $2.5 trillion, across all 55 member states of the African Union. In terms of the number of participating countries, AfCFTA will be the world's largest free trade area since the formation of the World Trade Organization. It is also a highly dynamic market. The population of Africa is projected to reach 2.5 billion by 2050, at which point it will comprise 26 per cent of what is projected to be the world's working-age population, with an economy that is estimated to grow twice as rapidly as that of the developed world.

With average tariffs of 6.1 per cent, businesses currently face higher tariffs when they export within Africa than when they export outside it. AfCFTA will progressively eliminate tariffs on intra-African trade, making it easier for African businesses to trade within the continent and to cater to and benefit from the growing African market. Consolidating this continent into one trade area provides great opportunities for trading enterprises, businesses and consumers across Africa, and the chance to support sustainable development in the world's least developed region. ECA estimates that AfCFTA has the potential to boost intra-African trade by 52.3 per cent by eliminating import duties, and to double this trade if non-tariff barriers are also reduced.

Chinese inspectors are set to visit Nairobi this month for certification checks on agricultural produce, putting Kenya on the path to fresh produce exports to the expansive Asian market. Nairobi and Beijing last November inked a Sanitary and Phytosanitary (SPS) deal which will see Kenyan exporters sell their farm produce to populous China upon meeting set health standards and requirements. The agreement, which followed week-long intense negotiations in Shanghai during the inaugural China International Import Expo, covered more than a dozen types of fresh produce for which Nairobi has traditionally relied on Europe for a market.

"We have a team coming in from China on March 27 for the final certification of our produce and then we are good to go," Jaswinder Bedi, chairman of the state-run Export Promotion Council (EPC), said by telephone. "We will start seeing a difference (growth in exports) because market expansion is now happening. It takes time to negotiate with some of these countries because they use technical barriers of trade to stop your exports."

They include cut flowers, vegetables, avocados, French beans, legumes (such as peas, beans and green grams), herbs, mangoes, peanuts and macadamia.
Other produce in line for sale in Chinese markets includes meat, hides and skins, bixa, gum arabic and myrrh, as well as Asian vegetables such as chilli and karela, according to a November statement by State House's press office.
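Returning to the AfCFTA figures above: to make the 6.1 per cent average intra-African tariff concrete, here is a minimal sketch of the duty payable on a single shipment before and after tariff elimination. The shipment value is an invented figure for illustration only.

```python
shipment_value = 100_000  # USD, hypothetical consignment of goods
avg_tariff = 0.061        # 6.1% average intra-African tariff cited above

duty_now = shipment_value * avg_tariff
duty_afcfta = 0.0  # once tariffs are fully phased out under AfCFTA

print(f"Duty at 6.1%: ${duty_now:,.0f}")
print(f"Duty with tariffs eliminated: ${duty_afcfta:,.0f}")
print(f"Saving per shipment: ${duty_now - duty_afcfta:,.0f}")
```

Multiplied across millions of consignments, savings of this kind are part of what underlies the ECA's estimated 52.3 per cent boost to intra-African trade.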
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9836260676383972, "language": "en", "url": "https://mebelnaya-fabrika.net/heating-oil-prices-to-go-up-this-heating-season-despite-lower-crude-oil-prices/", "token_count": 593, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.4765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4737683b-0f45-4e2d-a348-3ce9ac6278bd>" }
Many people in the United States heat their homes with heating oil. This is a liquid petroleum product that is burned in either boilers or furnaces and is stored on-site in large metal containers. It produces 138,500 BTUs per US gallon, so it is quite efficient. The Wikipedia entry on heating oil notes that it is very similar to diesel fuel. They are often the exact same thing and are sold out of the same truck, which delivers diesel fuel to gas stations and heating oil to people's homes. The only real difference between the two is legal: diesel fuel for vehicles and equipment must have a much lower sulfur content. They are also taxed differently, as governments want to tax vehicle fuel at a higher rate and the fuel people use to heat their homes at a lower rate.

People and companies that heat their buildings with heating oil need to order it for delivery. It comes in a tank truck that hooks a hose up to their storage tank. These storage tanks are usually located in a home's basement or garage; businesses often have their storage tanks buried in the ground. There are many regulations in place surrounding heating oil so that it doesn't leak and pollute the environment, including the local water supply. It needs to be properly transported, stored, and burned in order to minimize negative impacts on the air, water, and ground, and it is treated as a hazardous material by the federal government.

In the United States, heating oil is the second most common way to heat a home. People have access to the regular blend as well as blends with biodiesel, which burns cleaner than straight heating oil and has less impact on the environment. The Department of Energy keeps track of the amount of money that homeowners spend on heating oil and makes this information available to the public. Because of this, people can easily compare providers of heating oil and see what their neighbors are paying.

As crude oil prices have dropped since September 2018, some homeowners thought they would be able to save money on heating oil this winter season. This article explains that this won't be the case. While there is a lot of crude oil floating around, there isn't nearly as much refined product on the market. That means that homeowners who buy from a heating oil company won't see any reduction in the cost of heating their homes. The federal government had been projecting that heating oil would actually cost 20% more than it did during last year's heating season. This compares to natural gas going up by 5% and the cost of electricity nationally going up by 3%. They have since revised these numbers and believe that the cost of heating oil will still go up, but by 14% instead of 20%. One industry expert called it a supply and demand issue: gas prices are going down, but there's too much demand for heating oil for it to do the same.
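To see what those projections mean for a household budget, the sketch below applies both the original 20% projection and the revised 14% figure to a hypothetical prior-season heating bill; the $2,000 starting amount is an assumption for illustration, not a real average.

```python
last_season_bill = 2000.00  # USD, hypothetical prior-season heating oil spend

for label, increase in [("Original projection (20%)", 0.20),
                        ("Revised projection (14%)", 0.14)]:
    projected = last_season_bill * (1 + increase)
    print(f"{label}: ${projected:,.2f} (+${projected - last_season_bill:,.2f})")
```

Even at the revised figure, that hypothetical household would pay $280 more than last season, which is why the drop in crude prices offers little comfort to heating oil customers.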
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9646857976913452, "language": "en", "url": "https://www.biljohnson.com/the-blast--blog/echoes-in-the-hallway", "token_count": 1679, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.5, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:72f1bc1b-4793-40cb-983b-e4277071a4a2>" }
History Echoing in Our Hallway

While the whirlwind of the Impeachment Inquiry swirls around us, it might be interesting (and instructive?) to time-travel back almost a century, to look at the Harding/Coolidge Administrations (1921-1929) and take note of the parallels to our current Administration. To set the context, Warren Gamaliel Harding, a Republican, was nominated on the 10th ballot during the 1920 party convention and defeated the Democratic nominee, James M. Cox. The Democrats had held the Presidency from 1913 through 1921, with Woodrow Wilson at the helm. Harding had promised "a return to normalcy" (a word his campaign had created, I believe) and campaigned from his front porch in Marion, Ohio. He was the first seated U.S. Senator to ascend to the Presidency (there's a Jeopardy fact for you!). Harding appointed some noteworthy Cabinet members (Herbert Hoover, Andrew Mellon, and Charles Evans Hughes) but also several who ultimately ran afoul of the Law (Secretary of the Interior Albert Fall, Attorney General Harry Daugherty). In fact, Harding's administration is remembered most for scandal, historically (including an extramarital affair!), and for its extreme laissez-faire approach to foreign and domestic policy.

The parallels we might find interesting (and instructive?) to what we are witnessing today are related to tariffs, immigration, "socialism," race, economics, and foreign policy. In other words, almost every aspect of Harding's (and later Coolidge's) administration(s) provides what historian Barbara Tuchman once called "a distant mirror." As a Republican in the 1920s, Harding (and Coolidge) had some clear goals we can recognize.

"The undisputed goal of the Harding administration was to use governmental powers to assist American business and industry to prosper — a trend that had begun during World War I and accelerated during the New Era of the 1920s. In September 1922, Harding enthusiastically signed the Fordney–McCumber Tariff Act. The protectionist legislation . . . increased the tariff rates contained in the previous Underwood-Simmons Tariff Act of 1913, to the highest level in the nation's history. Harding became concerned when the agriculture business suffered economic hardship from the high tariffs . . . The high tariffs established under Harding, Coolidge, and Hoover have historically been viewed as a contributing factor to the Wall Street Crash of 1929." (wiki)

Indeed, "As part of Harding's belief in limiting the government's role in the economy, he sought to undercut the power of the regulatory agencies that had been created or strengthened during the Progressive Era. Among the agencies in existence when Harding came to office were the Federal Reserve (charged with regulating banks), the Interstate Commerce Commission (charged with regulating railroads) and the Federal Trade Commission (charged with regulating other business activities, especially trusts). Harding staffed the agencies with individuals sympathetic to business concerns and hostile to regulation." (wiki - italics, mine)

Certainly we are seeing the same basic actions occurring today, not only to benefit business but at the expense of our environment!

We are currently hearing a great deal about the Democratic Party being driven by "socialists" (despite most people not really understanding what a "socialist" is). The terms "socialist" and "communist" have been used as political scare words since the Harding and Coolidge Administrations.
This began during Wilson's administration, when the Attorney General, A. Mitchell Palmer, began "raids" to root out (and deport!) "socialists" and "communists." This concept, the "Red Scare" --- often used for political leverage to disparage opponents --- was resurrected by Joe McCarthy in the early 1950s and later exercised by Nixon and other Republicans, including the current occupant of the White House, who is demonizing Democratic candidates as "socialists" on an almost daily basis. Echoes in the hallway.

Regarding immigration, we can see that our current situation is very similar to what happened in the United States in the 1920s. Here's what you could read in Wikipedia:

The Per Centum Act of 1921, signed by Harding on May 19, 1921, reduced the numbers of immigrants to 3 percent of a country's represented population based on the 1910 Census. The act, which had been vetoed by President Wilson in the previous Congress, also allowed unauthorized immigrants to be deported.

The Immigration Act of 1924, or Johnson–Reed Act, including the Asian Exclusion Act and National Origins Act, was a United States federal law that prevented immigration from Asia, set quotas on the number of immigrants from the Eastern Hemisphere, and provided funding and an enforcement mechanism to carry out the longstanding ban on other immigrants. The Act set a total immigration quota of 165,000 for countries outside the Western Hemisphere, an 80% reduction from the pre-World War I average. Quotas for specific countries were based on 2% of the U.S. population from that country as recorded in 1890. As a result, populations poorly represented in 1890 were prevented from immigrating in proportionate numbers—especially affecting Italians, Jews, Greeks, Poles and other Slavs.

(N.B. The Palmer Raids preceded the Immigration Act of 1924, which also targeted Southern and Eastern European immigrants on not just political grounds but also, mostly, ethnic and racial grounds.)

The current Administration's demonizing of Muslim-Americans, and attempted ban on all Muslim immigrants, as well as the hideous Southern Border Wall policies, are not very far from those 1920s strictures. Hallway echoes.

The final parallel we'll listen to echoing down that historical hallway has to do with RACISM. Above and beyond the underlying racism of the current administration's immigration and border policies (not to mention invoking disdain for "shithole" countries), we have heard that there were "good people on both sides" in Charlottesville --- when one of those "sides" was composed of self-professed White Nationalist Nazis! Back in the 1920s, "Harding also disappointed black supporters by not abolishing segregation in federal offices, and through his failure to comment publicly on the Ku Klux Klan." (wiki) In fact, during the 1920s the KKK flourished under the Harding and Coolidge administrations. According to Wikipedia:

Beginning in 1921, it (the KKK) adopted a modern business system of using full-time paid recruiters and appealed to new members as a fraternal organization, of which many examples were flourishing at the time. At its peak in the mid-1920s, the organization claimed to include about 15% of the nation's eligible population, approximately 4–5 million men. The second KKK preached "One Hundred Percent Americanism" and demanded the purification of politics, calling for strict morality and better enforcement of Prohibition.
Its official rhetoric focused on the threat of the Catholic Church, using anti-Catholicism and nativism. Its appeal was directed exclusively at white Protestants; it opposed Jews, blacks, Catholics, and newly arriving Southern and Eastern European immigrants such as Italians, Russians, and Lithuanians, many of whom were Jewish or Catholic themselves.

Certainly we have seen the current President endorse his brand of "100% Americanism" ("Make America Great Again;" "Keep America Great"). The recent revelations of Stephen Miller's White Nationalist emails clearly illustrate the tenor of the advice Trump is receiving on a daily basis.

All things being equal, if you listen carefully, stretching your ability to go back a century, you can hear the Harding/Coolidge legacy echoing in the hallway and, while our economy is rolling along (defying years of Boom/Bust cycles), we can only hope that this Administration does not end on the same note as theirs --- a cascading crash into economic chaos.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9376726746559143, "language": "en", "url": "https://www.nsenergybusiness.com/pressreleases/companies/transparency-market-research/presscogeneration-equipment-market-is-estimated-to-reach-us-33543-mn-by-2025-stringent-mandates-to-curb-carbon-emissions-make-europe-a-key-market-transparency-market-research/", "token_count": 522, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0025634765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:f8ea009f-5a21-4f27-ae00-d86ba110eeae>" }
Cogeneration is also called combined heat and power (CHP). Cogeneration is the concept of generating two different forms of energy from a single source of fuel: one form is heat, or thermal energy, and the other is electrical or mechanical energy. Based on the technology used, cogeneration plants are classified into gas turbine, combined cycle gas turbine, steam turbine, and reciprocating engine plants.

Cogeneration is nowadays one of the attractive options for developed and developing countries seeking to meet their base and peak demand. Increasing global environmental concerns and stringent emission norms set by governing bodies have led to the development of new technologies in cogeneration equipment. The governments of different countries are promoting cogeneration technology through long-term policies and incentives for decentralized generation, primarily because of advantages such as operational efficiency, energy independence, reduced energy wastage, and improved sustainability. The cogeneration equipment market faces the hurdle of high initial investment, although the long-term revenue realization is high, beneficial, and economical compared with conventional systems.

By application, the cogeneration equipment market has been analyzed for the residential, commercial, and industrial segments. In terms of revenue, the industrial segment constituted more than 58% of the market share in 2016. Based on fuel used, the cogeneration equipment market has been segregated into biomass, coal, natural gas, and others such as wood and oil. In terms of revenue, the natural gas segment constituted more than 52% of the market share in 2016. In terms of region, the cogeneration equipment market has been classified into North America, Europe, Asia Pacific, Middle East and Africa, and Latin America. In 2016, Asia Pacific dominated the cogeneration equipment market with more than a 42% share and is expected to continue to dominate during the forecast period. Based on technology, the cogeneration equipment market has been segregated into steam turbine, gas turbine, combined cycle gas turbine, reciprocating engine, and others. In terms of revenue, the gas turbine segment constituted more than 22.5% of the market share in 2016.

Prominent players in the cogeneration equipment market are BDR Thermea, Siemens AG, Mitsubishi Heavy Industries Ltd., Clarke Energy, Innovate Steam Technologies, Foster Wheeler AG, ANDRITZ Energy & Environment GmbH, 2G Energy, ABB, Aegis Energy Services, Inc., Rolls-Royce Plc, Kawasaki Heavy Industries Ltd. and others.
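As a rough illustration of why cogeneration is attractive, the sketch below compares the overall efficiency of a hypothetical CHP plant with an electricity-only plant burning the same fuel; all figures are invented for demonstration and do not describe any specific equipment.

```python
fuel_in = 100.0      # units of fuel energy, hypothetical
electricity = 35.0   # electrical output of the CHP plant
useful_heat = 45.0   # recovered thermal output

# Overall CHP efficiency counts both useful outputs.
chp_efficiency = (electricity + useful_heat) / fuel_in
print(f"CHP overall efficiency: {chp_efficiency:.0%}")  # 80%

# An electricity-only plant with the same electrical output would
# reject the rest of the fuel energy as waste heat.
electricity_only = electricity / fuel_in
print(f"Electricity-only efficiency: {electricity_only:.0%}")  # 35%
```

Capturing heat that would otherwise be wasted is what drives the operational efficiency and reduced energy wastage mentioned above.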
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9450367093086243, "language": "en", "url": "https://www.scopulus.co.uk/business-jargon/definition.php?title=Market%20share", "token_count": 260, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0419921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:7790a692-9ffd-4aa0-af20-c10d87072338>" }
Business Terms and Jargon Explained

What is Market share

Market share is a business's share of the market it sells in, expressed as a proportion of that market. For example, suppose Vodafone owns 60% of the mobile phone market. This means that all its competitors together make up only 40% of the market: of all the people who have a mobile phone, 60% are customers of Vodafone. One of the factors involved is the number of businesses in that industry, but when markets are large there will always be big players that dominate the market.
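A minimal sketch of the arithmetic behind the definition; the customer figures below are invented purely to reproduce the 60% example.

```python
# Hypothetical annual mobile customers by company.
customers = {"Vodafone": 6_000_000, "Competitor A": 2_500_000, "Competitor B": 1_500_000}

total_market = sum(customers.values())
for company, count in customers.items():
    share = 100 * count / total_market
    print(f"{company}: {share:.0f}% market share")
# Vodafone: 60%, matching the example in the definition above.
```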
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9563256502151489, "language": "en", "url": "https://bdma.org.uk/climate-reporting-inform-insurance-industry/", "token_count": 561, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.11474609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e80d868e-1e92-483a-8011-4eb1a7738e29>" }
How does climate reporting inform the insurance industry?

The current climate is changing constantly and dramatically, and we are seeing its widespread effects. From wildfires around the world to a surge in stormy weather, many businesses are at risk, including insurance companies facing large payouts to policyholders for the damage caused by rapid, unexpected changes in the weather. Since the introduction of the 2008 Climate Change Act, the need for companies to incorporate environmental risks and opportunities into their strategies and provide climate reports has also quickly increased. Companies can now even face fines for failing to disclose climate-related risks in their business reports. Here, we take a look at what climate reports offer the insurance industry and how they contribute to fair customer treatment too.

When a company investigates and establishes how climate change could affect its business operations, performance and risks, it provides insights for all stakeholders involved. Climate reporting allows everyone involved and affected to be aware of, and prepare for, the potential risks of climate change. It helps businesses measure their risks compared with competitors, allows investors and corporate stakeholders to make better-informed decisions, and enables underwriters to establish policy terms and conditions that reflect the risks and financial impacts. Meanwhile, by gaining an overview of business risks from climate reports, insurers can see predicted trends and make the changes necessary to protect their customers, their company and the wider industry. Reinsurance companies can also assess trends to evaluate how and where insurers need to collaborate financially.

Whilst climate reporting can seemingly bring out the negative attributes of climate change, it also presents businesses with opportunities to bring positive change to the industry. By measuring its risks and the impact of its operations on the environment, a company can identify its opportunities to improve its initiatives. Introducing more sustainable actions for the future demonstrates an active desire to reduce the effects of climate change, which will be positive for stakeholders to see. The business can then measure continuous improvements and report on them.

Treating Customers Fairly

The fair treatment of policyholders is a priority for insurers and the wider industry. By identifying the assessed risks, insurers can offer the correct knowledge and support to the business customer in question, as well as to other policyholders, and help them prepare for environmental changes and risks. Whilst in partnership with insurers, underwriters should analyse company valuations fairly. Policyholders need the right solutions for them, accurate premiums for their risks and, in the event of a claim, the application of fair payouts. These factors are all components of the Treating Customers Fairly (TCF) policy, and they can arguably help to prevent unexpected surprises for policyholders and undeservedly outrageous premiums. Knowledge, data collection and analysis, aligned with improved education and communication, are the key to success across the wider insurance industry.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9727138876914978, "language": "en", "url": "https://blog.turbotax.intuit.com/tax-deductions-and-credits-2/i-claimed-exempt-can-i-still-get-a-tax-refund-9999/", "token_count": 689, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.08154296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:3546e86f-40de-4972-8b8a-10905b341898>" }
The point of a tax refund is for the government to return some of the money that you have overpaid. When you receive a tax refund, it means that you have paid more money than you actually owe. One of the most common reasons that a taxpayer receives a tax refund is because extra money has been withheld from their paycheck. Additionally, tax deductions and credits can also lower your tax liability and result in a situation in which you are entitled to a refund. However, what happens if you don't have to have money withheld from your paycheck? Can you still get a tax refund if you are considered exempt?

What Qualifies You as Exempt?

When you fill out your W-4 for your employer, you add your withholding allowances. Normally, there is a standard deduction (in 2018, it's $12,000 for single filers and $24,000 for married filing jointly). If your income is less than your standard deduction, then you are exempt – you don't have to pay taxes. However, if you had any tax liability at all in the previous year, or you expect to owe for the current year, you can't be considered exempt. Those who are exempt won't have taxes taken from their paychecks. And, normally, since you didn't pay taxes, you aren't eligible for a tax refund. But there are conditions that can result in being able to receive a tax refund, even if you are exempt from paying taxes.

Refundable Tax Credits

Even if you are exempt, you can still receive a tax refund if you qualify for a refundable tax credit. Some tax credits are only applied up to the point that you zero out the taxes owed. Refundable tax credits, on the other hand, can result in cash back. These are tax credits that can create negative tax liability, resulting in a tax refund even if you haven't paid taxes. One of the most common refundable tax credits is the Earned Income Tax Credit. This is a tax credit you receive for working and earning low to moderate income. If you have earned any income at all, even if you are exempt, you can claim this credit if you qualify. And it can result in receiving a tax refund, even if you didn't have taxes withheld from your paycheck. Another popular credit is the American Opportunity Credit. This education credit is available to help offset certain costs of higher education and is 40 percent refundable.

Before you conclude that you won't get anything back from the government, double-check your eligibility for refundable tax credits. There are instances where even being exempt from paying taxes can still result in a tax refund, but you won't get your money unless you file a tax return.

Don't worry about knowing these tax rules. TurboTax will ask simple questions about you and give you the tax deductions and credits you're eligible for based on your answers. If you have questions, you can connect live via one-way video to a TurboTax Live CPA or Enrolled Agent with an average of 15 years' experience to get your tax questions answered from the comfort of your home. TurboTax Live CPAs and Enrolled Agents are available in English and Spanish and can also review, sign, and file your tax return.
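To tie the sections above together with numbers, here is a toy model of how a refundable credit produces a refund for an exempt filer while a nonrefundable one does not. It uses the 2018 single standard deduction of $12,000 mentioned above; the flat 10% rate and the $1,000 credit are simplified assumptions for illustration, not real tax tables.

```python
def refund(income, withheld, credit, refundable):
    """Toy model: taxable income after the standard deduction,
    a flat illustrative 10% rate, then a single credit applied."""
    standard_deduction = 12_000  # 2018 figure for single filers
    liability = max(income - standard_deduction, 0) * 0.10

    if refundable:
        net = liability - credit          # can go below zero
    else:
        net = max(liability - credit, 0)  # floors at zero

    return withheld - net  # a positive result is a refund

# Exempt filer: income below the deduction, nothing withheld.
print(refund(11_000, withheld=0, credit=1_000, refundable=True))   # 1000.0
print(refund(11_000, withheld=0, credit=1_000, refundable=False))  # 0.0
```

With zero liability and zero withholding, only the refundable credit produces a check, which is exactly why an exempt filer can still see a refund.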
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9546462893486023, "language": "en", "url": "https://dubaipolicyreview.ae/why-smart-cities-fail-how-understanding-context-can-save-your-citys-future/", "token_count": 3650, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0322265625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:8889efd0-299e-4f65-bc6f-b04f711ac16a>" }
Cities throughout the world are changing. To respond to these changes, Mayors and other urban policy makers are investing heavily in innovations aimed at generating value for those who live and work in their cities; in a word, they are working to make their cities "smarter". Increasing pressures to make a city smarter as a way to respond to these changes are causing many city leaders to look to the world's "smartest cities" for best-practice solutions. Many of these solutions are based on "smart city technologies" and are driving significant investments. According to the International Data Corporation, a consultancy, smart city technology spend in 2016 was $80 billion, with $135 billion projected by 2021. The less-told story is that many of these investments go to waste. So why do many of the smart city investments and solutions that prove widely successful in some cities fail in others?

Unfortunately, many city leaders are finding that the direct adoption of an innovation from one city to another does not guarantee the creation of public value. They are finding that the high-pressure conditions to make their cities smarter quickly make it difficult for them to fully take into account the fact that no two cities are alike, and that what works in one city may or may not be feasible in another. In other words, it is all about context.

How does context matter in smart cities development?

According to Merriam-Webster, context is "the interrelated conditions in which something exists or occurs". Understanding the context of a city means unpacking and studying the challenges, capabilities, characteristics and successes of that particular city. It means investing in processes that examine the nature of the problems facing a city, the resources that a city has to draw on to solve its problems, and what stakeholders are looking for as solutions. It also means finding out why a smart city investment has or has not worked in other cities, and then using insight about how context has interacted with similar innovation efforts as input for decision-making.

Investment choices are being made, in many cases, based on little understanding of why something worked in another city and what must happen in their own city for that innovation to create value. Cities where decisions are informed by a nuanced understanding of their own context are creating value. In other cities, the lack of attention to context is limiting the potential of smart city investments. The questions before us are: why are such important decisions about the future of the world's cities still being made without critical consideration of context, and what can the urban leaders of tomorrow do differently to ensure that future investments create value for their cities?

According to the US-based nonprofit Livable City, there are five fundamental aspects of great, livable cities: "robust and complete neighborhoods, accessibility and sustainable mobility, a diverse and resilient local economy, vibrant public spaces, and affordability". City leaders around the world are working to identify and close the livability gaps in their cities. Closing those gaps requires a variety of highly interdependent policy, management and technology innovations. How successfully city leaders respond to changing pressures on the livability of their cities relies to a great extent on their ability, or in many cases their willingness, to invest in building an understanding of the specific context of their city.
Unfortunately, building a nuanced understanding of context can only happen by unpacking the complicated ways that the context of a city interacts with the problems the city is facing, the innovations expected to solve those problems, and the capability required to be successful. Taking the time to build that understanding is often at odds with the pressure on city leaders for quick decision-making. How can policymakers change this reality?

Growing Cities and Shrinking Cities – How Context Matters

Urbanization and de-urbanization alike are impacting the livability of the world's cities. Yet most writing about changes in the size of the world's cities, and the impact of those changes on livability, is about growth. Almost any article on the global trend of urbanization presents one or another well-supported prediction about how, by 2050, three quarters of the world's rapidly growing population will live in cities, up twenty-five percent from 2010 and completely reversing the population distribution between urban and rural environments. This rapid growth, researchers and practitioners agree, is increasing pressure on the physical and social infrastructure of cities and straining the basic services that make a city livable. Social problems in such cities are recognized as increasingly complex and intertwined, and their solutions require the collaboration of multiple city agencies, levels of government, nonprofit organizations, business, and society at large. Such growth-induced livability gaps have become the focus of Mayors and urban policy makers the world over and have been at the heart of smart city and urbanization research. In large part, the smart city solutions put forward by industry are aimed at relieving the strain of growth on the world's cities.

But there is another trend worth paying attention to. De-urbanization is also changing the context of many of the world's cities and reducing livability in very different ways. While people generally move to cities to improve their quality of life, other factors are impacting cities and causing them to shrink. Three main trends are leading to de-urbanization: declining fertility rates, as experienced in Japan; declining manufacturing and mining, as experienced in the US; and resource depletion and technological change, as experienced in China. While growing cities are struggling to modernize and expand their straining infrastructures, shrinking cities are struggling to downsize their infrastructures and to find sustainable "financial models for operation and maintenance". Cities such as Sheffield, Iowa, in the US and Ostrava in the Czech Republic are increasingly focused on how to "shrink smart". Policy makers must be clear about which strategies are right for shrinking cities and which for growing cities.

Can Cities "Shrink Smart" As They Deal with Urban Decay?

In the US, the city of Schenectady, New York, was known throughout the 19th and 20th centuries as the city that "lights the world". Today Schenectady, the home of General Electric and an award-winning smart city, struggles with the problems of a shrinking city, in particular with urban blight. On average, a single blighted property can cost a municipality tens of thousands of dollars per year in direct and indirect costs. Direct costs include code violation enforcement and engineering and property maintenance. Indirect costs include uncollected taxes, devaluation of adjacent properties, and the impact on city services such as police and fire calls.
Increasingly, studies of urban blight are looking at the impact of blight on other social and economic issues, such as public health and economic opportunity. Schenectady's Mayor, Gary McCarthy, a former president of the New York Conference of Mayors, has a vision for fighting urban blight in his award-winning but shrinking city. His vision is well informed and has proven to be value-generating in other cities. Unfortunately, his efforts are challenged by a shrinking budget, legacy systems, the lack of a strategic IT leader, and the absence of a city-wide data management strategy or workforce. His commitment to closing the livability gap in Schenectady is driving a range of innovative public-private partnerships and grant-funded research designed to generate the nuanced understanding he needs to make context-specific investment decisions. He recognizes that it is necessary to fully appreciate Schenectady's context and capability, and to invest not only in technical solutions to make the city smarter but also in policy and management capabilities that leverage the strengths of Schenectady while reflecting the changing context of the city.

Cities throughout the world, whether growing or shrinking, large or small, are looking to technologies such as sensors and IoT networks as a way to capture data about city programs and services. The idea is that new data, captured by these sensors, shared across networks and used to drive analytics, will help inform policy decisions in that city. Sensors collecting data on water consumption in cities, for example, are being used to help inform routine water management operations as well as broader policy decisions. In many cases, this is possible because those cities have the capability to collect and manage large volumes of data and have sophisticated data management capabilities. Further, they have a history of evidence-based decision-making or, at the very least, they already have systematized decision-making about water resources management and can now integrate the use of new analytics into those decision-making processes.
They have developed models and theories, conceptualizations and frameworks, ranking systems, strategies, solutions and checklists. Early characterizations of a smart city focused on technical aspects, such as smart buildings, energy and connectivity. Smarter cities were those that made use of these technologies to save money and deliver higher quality infrastructure. These efforts, from both a research and practice perspective, generally relied on what Albert Meijer, a leading smart city researcher from the Netherlands calls, "universal patterns". One city after another, successfully adopted highly technical strategies focused on solving very technical problems. Over time however, as smart city strategies moved beyond strictly technical innovations to become more socio-technical, and encompassing such things as the delivery of social services, the more it became clear that context matters. Unfortunately, while leading researchers and practitioners are calling for more emphasis on context, they acknowledge that we don’t know enough about how context impacts outcomes. Our knowledge of the relationship between context and approaches to making cities smarter is "underdeveloped". It lacks the sophistication required to provide guidance to policy makers based on nuanced understanding about how the context of their particular city is interrelated with the challenges of change. So, today, we find ourselves in the situation where we know that context matters in smart city investment decision-making, but we also know that we need to know more about how and why it matters. We need to know how to get that insight and use it to guide the very expensive decisions that cities are making. Over time, researchers and practitioners have evolved their understanding of what it means for a city to be smart and what it takes for a city to be smart. Today’s smart city frameworks are multi-dimensional and integrative. They are reflective of the rapid technical, social and organizational innovations that have occurred during these years. They have benefitted from continued testing and reconsideration within a wider range of contexts. A leading framework proposed in 2015 by Gil-Garcia et al, for example, offers a comprehensive view of smart city components and elements. Over time and through continued consideration of this framework in different contexts, and by multiple research teams, the framework has been expanded to reflect the changing view of what makes a city smart. One of the original dimensions of that framework, "environment", for example, was originally envisioned as the city government’s ability to manage and monitor environmentally related systems and actions, but today, as a consequence of changing views and new research, this framework now incorporates the extent to which a city is considered environmentally friendly. This framework, and others, now includes human capital, creativity and the knowledge economy, among others, as components of smart cities. More recently, smart governance and sustainability are receiving attention as well. These efforts show that what we understand to be a smart city reflects a range of social, institutional and organizational as well as technical innovations. Building Smart Cities in Context-specific Ways – The Guiding Principles Mayors and urban policy makers must build an understanding of the context of their cities and of the cities they look to for best practices. 
If they find a strategy in another city that they believe will be help make their city smarter, they must ask hard questions of those cities; they must understand the interrelated conditions in which that success occurred. They must look closely at the policy, management and technology capabilities that made that strategy a success and then they must look closely at their own city’s capabilities. They must answer difficult questions about whether they have the conditions necessary to make their city more livable. They must know if they have the resources to close the livability gaps in their cities, if they will adapt the innovation to their context, or if they will have to continue to look for new ideas. To transform the world’s cities into smart cities that are ready to meet changing realities caused by population increases and decreases, mayors and urban policy makers must follow these three steps: 1) Recognize the importance of context, and acknowledge that what worked elsewhere may not work here, 2) Create capability to understand context, by assessing the available knowledge and skills and building capacity where needed, 3) Use that understanding to make investments that are relevant to, and create value within, that specific context. In other words, they must focus on the unique needs and capabilities of their cities. The process of becoming a smarter city might need to start with asking questions about the current government workforce. Is the workforce shrinking or does it need to grow? Does the workforce have the knowledge and skills necessary to adapt innovative ideas that have worked elsewhere? The point isn’t to avoid innovation, but rather to increase readiness to innovate successfully by building understanding of the particular livability gaps facing a city, the source of those gaps, and how the context of a city will interact with strategies designed to close those gaps. Policy makers must look internally to see what problems matter more and make decisions that are locally relevant and context specific. To make a city smarter, to create value for those who live and work there, urban policy makers must understand the relationships among the context of the city, the characteristics of the problems in that city, and the characteristics of the innovations that are being considered as solutions to those problems. They must use that understanding to inform smart city investment decisions. To put nuanced understanding of context at the center of smart city investment decisions, urban policy makers must: - Look globally for inspiration; but look locally to see what matters most to the people in your city. - Look globally to see what is doable; but look locally to see what’s reasonable in your city. - Look locally to understand the source of and nature of pressures on your city. What is the context of your city and how is it impacting and interacting with the pressures on your city? - Look globally to see how the world’s leading cities are responding to new pressures from urbanization and de-urbanization, but look closely at what makes those smart city innovations possible in that place and at that time. Ask yourself, will that work here? If not, what must you change to make it work in your city? Finally, you must decide, is it worth it? - Build partnerships with smart cities researchers to help build an understanding of the impact of context how to understand your local context better. 
As the world around us changes and those changes create new and complex problems, policy makers must recognize the need for a deeper understanding of the specific context of cities and resist the temptation to fast track by defaulting to generic, universal patterns of innovation. Urban policy makers seeking to increase public value through innovation must work in partnership with researchers and practitioners to shed new light on how context matters for their cities and citizens. These leaders must learn how a lack of attention to context is making it difficult for their cities and their partners to respond quickly and effectively to the increasing challenges caused by massive shifts in the distribution of the world’s population from rural to urban areas and from small urban to large urban areas. The cost of not learning these lessons is that investments in innovations of all kinds are being made even when the context necessary for value to be created through those investments is missing. Theresa Pardo is Director of the Center for Technology in Government and a Full Research Professor of Public Administration, University at Albany, SUNY. Read full bio here. "In an urbanizing world, shrinking cities are a forgotten problem", Biswas, Tortajada and Stavenhagen: https://www.weforum.org/agenda/2018/03/managing-shrinking-cities-in-an-expanding-world As Rural Towns Lose Population, They Can Learn To ‘Shrink Smart’: https://www.npr.org/2018/06/19/618848050/as-rural-towns-lose-population-they-can-learn-to-shrink-smart "What makes a city smart?" Gil-Garcia, Pardo, and Nam, 2015: https://content.iospress.com/articles/information-polity/ip354
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9473888874053955, "language": "en", "url": "https://wastemanagementreview.com.au/nsw-targets-zero-organics-in-landfill-by-2030/", "token_count": 539, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.00201416015625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4498c810-355a-4d8f-b18c-65f10b24cc75>" }
The NSW Government’s Net Zero Plan Stage One: 2020-2030 seeks to achieve net zero emissions from organic waste in landfill by 2030, with targeted actions to support councils improve services and product quality. “Organic waste, such as food scraps and garden trimmings, makes up about 40 per cent of red-lidded kerbside bins. When sent to landfill, the decomposing material releases methane that may not be captured,” the plan reads. “However, when this waste is managed effectively, through proper composting and recycling processes, methane emissions can be substantially reduced, soils can be regenerated to store carbon and biogas can be created to generate electricity.” The plan outlines specific actions including supporting best-practice food and garden waste management infrastructure, and ensuring compost or other organic soils are of the highest quality for land application. Furthermore, the state government will facilitate the development of waste-to-energy facilities in locations with strong community support, and update regulatory settings to ensure residual emissions from the organic waste industry are offset. The NSW economy will see over $11.6 billion in private investment and 2400 new jobs as a result of the plan, according to Environment Minister Matt Kean. “Where there are technologies that can reduce both our emissions and costs for households and businesses, we want to roll them out across the state. Where these technologies are not yet commercial, we want to invest in their development so they will be available in the decades to come,” Mr Kean said. The plan outlines four key priorities: drive uptake of proven emissions reduction technologies, empower consumers and businesses to make sustainable choices, invest in the next wave of emissions reduction innovation and ensure the NSW Government leads by example. Mr Kean said roughly two-thirds of the plan’s private investment will be directed at regional and rural NSW, “diversifying local economies that are doing it tough after the drought and devastating bushfire season.” “Global markets are rapidly changing in response to climate change, with many of the world’s biggest economies and companies committed to reach net zero emissions by 2050. NSW already leads the nation with its economic and investment plans and from today, NSW will lead the nation with its Net Zero Plan,” Mr Kean said. “Our actions are firmly grounded in science and economics, not ideology, to give our workers and businesses the best opportunity to thrive in a low-carbon world.” The plan is financially supported by a $2 billion bilateral agreement between the Federal and NSW Government, announced in January 2020.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9684577584266663, "language": "en", "url": "https://whowhatwhy.org/2018/11/13/who-pays-the-price-of-oklahomas-man-made-earthquakes/", "token_count": 1874, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.384765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5bb6107e-23f8-4949-b663-eaf5c97d5325>" }
In 2014, 15 percent of Oklahoma homeowners had earthquake insurance. That seems remarkable for a state that, from 1973 to 2008, had a mere 21 magnitude-3 earthquakes. But it makes sense considering that, since then, the state has become a hotbed of induced seismicity — which is really just a fancy term for man-made earthquakes. In 2016, Oklahoma experienced 500 magnitude-3 earthquakes. So the state's residents, forced to adapt to these new, unnatural disasters, began to purchase earthquake insurance. "There is a higher take-up rate in earthquake insurance given the last few years, [because of] the increased seismicity our state has seen," John Doak, Oklahoma's insurance commissioner, told WhoWhatWhy. These earthquakes are caused by the waste product of hydraulic fracturing (fracking) for oil: toxic wastewater. The wastewater gets pumped to the surface and injected back into the earth at various depths. "Wastewater injection can induce small earthquakes to occur, leading to larger ones, in a cascading effect," said Jacob Walter, the Oklahoma Geological Survey's lead seismologist. He noted that the area in which the wastewater is injected has no clear correlation to where the earthquake will occur, because of the countless earthquakes triggering each other in unpredictable patterns. These unpredictable earthquakes can have devastating effects on homes, buildings, and infrastructure. Among the most quake-prone buildings: brick structures and those made of unreinforced masonry. "The larger earthquakes — magnitude 4.0 and larger — have the potential to damage structures, in particular unreinforced masonry walls," Scott Harvey, assistant professor at the Oklahoma University School of Civil Engineering, told WhoWhatWhy. That means the tremors often affect buildings that provide indispensable services after earthquakes, such as police stations, fire stations, and city halls; these tend to be built of unreinforced masonry. For instance, the Cushing Police Station is made of brick, as is the Pawnee City Hall. Both places experienced powerful earthquakes in 2016. To make matters worse for the citizens of Oklahoma, insurance companies know how vulnerable unreinforced masonry is. The Oklahoma Department of Insurance says on its website that houses built with brick or rock are not usually covered under standard earthquake insurance, or sometimes not at all. Two Oklahoma insurance companies, Lynnae Insurance Group and ECI Insurance, openly state that unreinforced masonry is an expensive add-on to existing earthquake insurance. The cheapest insurance costs between $30 and $50 annually. However, when it covers unreinforced masonry, it can range from $300 to $400 a year. This spike in price can accumulate over the years into a punishingly large amount of money, especially when earthquake insurance is such a necessity. In addition, earthquake insurance is typically subject to large deductibles. And, even though they are not the ones who induced the tremors, regular people end up holding the bag because, as of now, the oil companies causing the quakes refuse to pay for any of the costs, insurance or otherwise. That doesn't sit well with Oklahomans who are experiencing this new strain on their wallets. "I think the oil and gas industry should pay for the damage they caused," Sharon Wilson, a senior organizer for Earthworks and a former oil and gas worker, told WhoWhatWhy. 
“Homes built in Texas and Oklahoma were built to withstand wind, not earthquakes.” Earthworks, a non-profit organization formed in 2005, strives to protect the earth and its inhabitants from the destructive effects of oil and mineral extraction. Fellow environmentalist and consumer advocate Erin Brockovich, who became known worldwide through the eponymous movie about her, told KOCO 5 News in Oklahoma City in 2017 that, "The communities definitely [are] feeling frustrated and voiceless and helpless and are not sure where to turn." The citizens of Oklahoma are currently struggling with two giants: the raw power and damage of earthquakes, and the large corporations that refuse to provide aid or take responsibility for their actions. Meanwhile, Brockovich is assisting one community that refuses to be silent — the Pawnee Nation of Oklahoma. The money and power that the oil and natural gas corporations wield — and spend on influencing politicians and hiring the best lawyers — gives rise to a perception that they can continue their operations without being called to account for Oklahoma's induced earthquakes. Now the citizens of the Pawnee Nation are using tribal law to push back against having to pay for earthquakes caused by fracking. On September 3, 2016, a 5.8-magnitude earthquake struck the Pawnee Nation. It damaged every historical building at the Pawnee headquarters — all made of unreinforced masonry and many over one hundred years old. That includes buildings in the National Register of Historic Places. "The stone cracked, and plaster walls cracked and ceilings collapsed and those kinds of things," Andrew Knife Chief, executive director of the Pawnee Nation, told WhoWhatWhy. "It shut us down for a little bit, we were closed as a nation for four days, which makes it very difficult for our citizens because we provide services to our tribal members and then we weren't able to do that and then construction was happening, and it took about a year." The Pawnee Nation had the foresight to obtain earthquake insurance in 2009, when the rise in induced earthquakes began. So, one might think the nation would be spared the worst consequences of the man-made disaster. This was not the case. "The sticker price for just fixing the buildings here from the damage was half-a-million dollars," said Andrew Knife Chief. "But the manpower and the effort that it took to go through the buildings and to hire the structural engineers we needed to come out and make sure it was safe, [and] getting together of the emergency personnel that we have here — it cost us quite a bit of money." Knife Chief added, "Insurance didn't cover everything and so it had to come out of pocket, so it's the Pawnee Nation that is paying for it." Not only did the historic buildings suffer damage, so too did the houses of many Pawnee citizens. After the 2016 earthquake hit, more than 30 tribal members reported damage to their homes. To make matters worse, not many tribal members can afford earthquake insurance, so the affected homeowners had to bear all costs out of pocket. But the damage done was not just material. "I think the thing that hurt the Pawnee Nation most is just the damage to our citizens' psyche," Knife Chief said. Unlike the city of Pawnee — which also suffered damage — the Pawnee Nation has the ability to control who uses its land and for what. Within the nation's sphere of influence, it has considerable power. So the nation passed the Energy Resource Protection Act in 2017. 
This act established "the requirements of notification, reporting, and monitoring for exploration, extraction, and marketing of the energy resources within the Pawnee Nation." "We have a responsibility under our constitution to preserve the natural resources and protect our citizens, so that is what we are doing," Knife Chief said. With Brockovich assisting, the Pawnee Nation of Oklahoma sued the 27 oil companies that caused the induced quakes in the Pawnee Nation area (which covers 595 square miles). Knife Chief states that while the nation is not anti-oil and -gas, it is "just trying to push for responsible energy production here in our jurisdiction, and we feel that there is some activity that is being engaged in that causes an undue risk to the Pawnee Nation, and its tribal members. So that is really what we are fighting against, irresponsible actions and really ultra-hazardous activity in the underground injection control wells." The oil companies attempted to argue that the case should be brought in a state court. However, Judge Dianne Barker Harrold ruled in favor of the Pawnee Nation, stating that this issue was to be resolved under the Pawnee Nation court of law. The case moved on to the discovery phase on October 27, 2017. Eagle Road Oil LLC and Cummings Oil Company, the two companies named in the lawsuit (the others were unnamed), did not respond when WhoWhatWhy reached out to them for comment.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9280359148979187, "language": "en", "url": "https://www.greencarcongress.com/2019/06/20190617-liu.html", "token_count": 597, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.05517578125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:a7244320-1524-471a-a0e1-d899d1965ee1>" }
Tsinghua study: overall impact of autonomous vehicle deployment on GHGs not significant in the near- to mid-term
A study by researchers at Tsinghua University in China has concluded that the overall impact of autonomous vehicle deployment on greenhouse gas emissions is not significant in the near- to mid-term. In the study, published in the journal Energy Policy, the researchers found that autonomous vehicles potentially affect total greenhouse gas emissions in multiple ways, including reducing vehicle ownership, increasing vehicle use intensity, and changing the vehicle fuel consumption rate. These impacts are mostly internally offset, resulting in an insignificant impact in the near- to mid-term. In their study, the researchers used China's passenger vehicle fleet as an example for evaluating the effects of autonomous vehicle deployment on greenhouse gas emissions in different scenarios of autonomous vehicle penetration rates and fuel consumption changes. They conducted a comprehensive literature review to support the study.
Figure: Sensitivity analysis of the changes in the fully AV VMTs, AV fuel consumption rates and GHG emissions (Liu et al.)
As more companies join the AV technology competition, AVs are considered a new way to change people's lives and mobility choices. Meanwhile, the energy consumption levels and GHG emissions of passenger vehicles have drawn much attention. AVs will bring changes in vehicle ownership, travel distance, fuel economy, etc. Though AVs will provide a convenient mobility choice for people, as one fully AV may replace several cars, it will still take a long time to eliminate car ownership, especially in China. Therefore, a conservative prediction is assumed in this article. The assumptions of the changes in vehicle sales and travel distances are not aggressive. The travel distance per vehicle and fuel economy will play important roles in the final results. … Based on the existing research, passenger vehicle fleet GHG emissions are calculated in China with AV deployment. By aligning the changes caused by AVs and passenger vehicle fleet GHG emissions, the results indicate that the introduction of AVs does not lead to GHG emission reductions before 2050. With higher fuel consumption rates, AVs may even lead to more emissions in most cases. Although the increase is not large in absolute terms, in terms of the relative magnitude, the increase can even reach 14.1%. With a better fuel economy, after 2045, a larger share of AVs in the passenger vehicle fleet can begin to show the advantages of AVs in GHG emission reduction. A long-term plan for AVs may have a better result. Passenger vehicle fleets with different deployments of partially AVs will not show significantly different results.—Liu et al.
Feiqi Liu, Fuquan Zhao, Zongwei Liu, Han Hao (2019) "Can autonomous vehicle reduce greenhouse gas emissions? A country-level evaluation," Energy Policy, Volume 132, Pages 462-473. doi: 10.1016/j.enpol.2019.06.013
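The offsetting effects the authors describe can be made concrete with a toy calculation. The sketch below is not the paper's model; every number in it (fleet size, mileage, fuel rate, emission factor) is an invented assumption, chosen only to show how fewer vehicles, higher use intensity and a changed fuel consumption rate can interact.

```python
# Toy illustration (not the paper's model): fleet GHG emissions as the product
# of vehicle count, annual distance, fuel consumption rate and an emission
# factor. All figures below are invented for illustration only.

def fleet_ghg(vehicles, km_per_vehicle, litres_per_100km, kg_co2_per_litre=2.3):
    """Annual fleet emissions in tonnes of CO2."""
    litres = vehicles * km_per_vehicle * litres_per_100km / 100
    return litres * kg_co2_per_litre / 1000

baseline = fleet_ghg(vehicles=100_000, km_per_vehicle=12_000, litres_per_100km=7.0)

# AV scenario: fewer vehicles (shared ownership), but each driven further and,
# in this pessimistic case, burning slightly more fuel per km.
av_case = fleet_ghg(vehicles=80_000, km_per_vehicle=16_000, litres_per_100km=7.4)

print(f"baseline: {baseline:,.0f} t CO2, AV case: {av_case:,.0f} t CO2")
# The ownership saving is largely cancelled by higher use intensity, echoing
# the study's finding that the net effect can even be an increase.
```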
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9489768743515015, "language": "en", "url": "https://www.investopedia.com/terms/f/fundamentalanalysis.asp", "token_count": 2791, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": -0.01068115234375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:9b91b0dc-c109-400e-b666-57a9067b83a9>" }
What Is Fundamental Analysis?
Fundamental analysis (FA) is a method of measuring a security's intrinsic value by examining related economic and financial factors. Fundamental analysts study anything that can affect the security's value, from macroeconomic factors such as the state of the economy and industry conditions to microeconomic factors like the effectiveness of the company's management. The end goal is to arrive at a number that an investor can compare with a security's current price in order to see whether the security is undervalued or overvalued. This method of stock analysis is considered to be in contrast to technical analysis, which forecasts the direction of prices through an analysis of historical market data such as price and volume.
- Fundamental analysis is a method of determining a stock's real or "fair market" value.
- Fundamental analysts search for stocks that are currently trading at prices that are higher or lower than their real value.
- If the fair market value is higher than the market price, the stock is deemed to be undervalued and a buy recommendation is given.
- In contrast, technical analysts ignore the fundamentals in favor of studying the historical price trends of the stock.
Understanding Fundamental Analysis
All stock analysis tries to determine whether a security is correctly valued within the broader market. Fundamental analysis is usually done from a macro to micro perspective in order to identify securities that are not correctly priced by the market. Analysts typically study, in order, the overall state of the economy and then the strength of the specific industry before concentrating on individual company performance to arrive at a fair market value for the stock.
Fundamental analysis uses public data to evaluate the value of a stock or any other type of security. For example, an investor can perform fundamental analysis on a bond's value by looking at economic factors such as interest rates and the overall state of the economy, then studying information about the bond issuer, such as potential changes in its credit rating. For stocks, fundamental analysis uses revenues, earnings, future growth, return on equity, profit margins, and other data to determine a company's underlying value and potential for future growth. All of this data is available in a company's financial statements (more on that below).
Fundamental analysis is used most often for stocks, but it is useful for evaluating any security, from a bond to a derivative. If you consider the fundamentals, from the broader economy to the company details, you are doing fundamental analysis.
Investing and Fundamental Analysis
An analyst works to create a model for determining the estimated value of a company's share price based on publicly available data. This value is only an estimate, the analyst's educated opinion, of what the company's share price should be worth compared to the currently trading market price. Some analysts may refer to their estimated price as the company's intrinsic value. If an analyst calculates that the stock's value should be significantly higher than the stock's current market price, they may publish a buy or overweight rating for the stock. This acts as a recommendation to investors who follow that analyst. If the analyst calculates a lower intrinsic value than the current market price, the stock is considered overvalued and a sell or underweight recommendation is issued.
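As a rough sketch, the rating logic just described can be expressed as a simple comparison. The 10% tolerance band below is an arbitrary assumption for illustration, not an industry convention.

```python
# Hypothetical sketch of the analyst's decision rule described above.

def rating(intrinsic_value: float, market_price: float, band: float = 0.10) -> str:
    """Compare an intrinsic-value estimate with the current market price."""
    if intrinsic_value > market_price * (1 + band):
        return "buy/overweight"      # stock looks undervalued
    if intrinsic_value < market_price * (1 - band):
        return "sell/underweight"    # stock looks overvalued
    return "hold"                    # estimate is close to the market price

print(rating(intrinsic_value=24.0, market_price=20.0))  # buy/overweight
```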
Investors who follow these recommendations will expect that they can buy stocks with favorable recommendations because such stocks should have a higher probability of rising over time. Likewise, stocks with unfavorable ratings are expected to have a higher probability of falling in price. Such stocks are candidates for being removed from existing portfolios or added as "short positions". This method of stock analysis is considered to be the opposite of technical analysis, which forecasts the direction of prices through an analysis of historical market data such as price and volume.
Quantitative and Qualitative Fundamental Analysis
The problem with defining the word fundamentals is that it can cover anything related to the economic well-being of a company. They obviously include numbers like revenue and profit, but they can also include anything from a company's market share to the quality of its management. The various fundamental factors can be grouped into two categories: quantitative and qualitative. The financial meaning of these terms isn't much different from their standard definitions. Here is how a dictionary defines the terms:
- Quantitative – capable of being measured or expressed in numerical terms.
- Qualitative – related to or based on the quality or character of something, often as opposed to its size or quantity.
In this context, quantitative fundamentals are hard numbers. They are the measurable characteristics of a business. That's why the biggest source of quantitative data is financial statements. Revenue, profit, assets, and more can be measured with great precision. Neither qualitative nor quantitative analysis is inherently better. Many analysts consider them together.
Qualitative Fundamentals to Consider
There are four key fundamentals that analysts always consider when regarding a company. All are qualitative rather than quantitative. They include:
- The business model: What exactly does the company do? This isn't as straightforward as it seems. If a company's business model is based on selling fast-food chicken, is it making its money that way? Or is it just coasting on royalty and franchise fees?
- Competitive advantage: A company's long-term success is driven largely by its ability to maintain a competitive advantage—and keep it. Powerful competitive advantages, such as Coca Cola's brand name and Microsoft's domination of the personal computer operating system, create a moat around a business allowing it to keep competitors at bay and enjoy growth and profits. When a company can achieve a competitive advantage, its shareholders can be well rewarded for decades.
- Management: Some believe that management is the most important criterion for investing in a company. It makes sense: Even the best business model is doomed if the leaders of the company fail to properly execute the plan. While it's hard for retail investors to meet and truly evaluate managers, you can look at the corporate website and check the resumes of the top brass and the board members. How well did they perform in prior jobs? Have they been unloading a lot of their stock shares lately?
- Corporate Governance: Corporate governance describes the policies in place within an organization denoting the relationships and responsibilities between management, directors and stakeholders. These policies are defined and determined in the company charter and its bylaws, along with corporate laws and regulations. You want to do business with a company that is run ethically, fairly, transparently, and efficiently. 
Particularly note whether management respects shareholder rights and shareholder interests. Make sure their communications to shareholders are transparent, clear and understandable. If you don't get it, it's probably because they don't want you to. It's also important to consider a company's industry: customer base, market share among firms, industry-wide growth, competition, regulation, and business cycles. Learning about how the industry works will give an investor a deeper understanding of a company's financial health.
Financial Statements: Quantitative Fundamentals to Consider
Financial statements are the medium by which a company discloses information concerning its financial performance. Followers of fundamental analysis use quantitative information gleaned from financial statements to make investment decisions. The three most important financial statements are income statements, balance sheets, and cash flow statements.
The Balance Sheet
The balance sheet represents a record of a company's assets, liabilities and equity at a particular point in time. The balance sheet is named for the fact that a business's financial structure balances in the following manner:
Assets = Liabilities + Shareholders' Equity
Assets represent the resources that the business owns or controls at a given point in time. This includes items such as cash, inventory, machinery and buildings. The other side of the equation represents the total value of the financing the company has used to acquire those assets. Financing comes as a result of liabilities or equity. Liabilities represent debt (which of course must be paid back), while equity represents the total value of money that the owners have contributed to the business - including retained earnings, which is the profit made in previous years.
The Income Statement
While the balance sheet takes a snapshot approach in examining a business, the income statement measures a company's performance over a specific time frame. Technically, you could have a balance sheet for a month or even a day, but you'll only see public companies report quarterly and annually. The income statement presents information about revenues, expenses and profit that was generated as a result of the business's operations for that period.
Statement of Cash Flows
The statement of cash flows represents a record of a business's cash inflows and outflows over a period of time. Typically, a statement of cash flows focuses on the following cash-related activities:
- Cash from investing (CFI): Cash used for investing in assets, as well as the proceeds from the sale of other businesses, equipment or long-term assets
- Cash from financing (CFF): Cash paid or received from the issuing and borrowing of funds
- Operating Cash Flow (OCF): Cash generated from day-to-day business operations
The cash flow statement is important because it's very difficult for a business to manipulate its cash situation. There is plenty that aggressive accountants can do to manipulate earnings, but it's tough to fake cash in the bank. For this reason, some investors use the cash flow statement as a more conservative measure of a company's performance.
Fundamental analysis relies on the use of financial ratios drawn from data on corporate financial statements to make inferences about a company's value and prospects.
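A minimal sketch of the two identities just described, using invented figures: the balance sheet must balance, and the three cash flow sections sum to the period's net change in cash.

```python
# Illustrative figures only; no real company is represented.

assets = 1_500_000
liabilities = 900_000
equity = 600_000

# Balance sheet identity: Assets = Liabilities + Shareholders' Equity
assert assets == liabilities + equity, "balance sheet does not balance"

# Statement of cash flows: the three sections sum to the net change in cash
ocf, cfi, cff = 250_000, -120_000, -40_000   # operating, investing, financing
net_change_in_cash = ocf + cfi + cff
print(f"net change in cash: {net_change_in_cash:,}")  # 90,000
```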
The Concept of Intrinsic Value
One of the primary assumptions of fundamental analysis is that the current price from the stock market often does not fully reflect the value of the company supported by the publicly available data. A second assumption is that the value reflected in the company's fundamental data is more likely to be closer to the true value of the stock. Analysts often refer to this hypothetical true value as the intrinsic value. However, it should be noted that this usage of the phrase intrinsic value means something different in stock valuation than what it means in other contexts, such as options trading. Option pricing uses a standard calculation for intrinsic value; however, analysts use various complex models to arrive at their intrinsic value for a stock. There is not a single, generally accepted formula for arriving at the intrinsic value of a stock. For example, say that a company's stock was trading at $20, and after extensive research on the company, an analyst determines that it ought to be worth $24. Another analyst does equal research but determines that it ought to be worth $26. Many investors will consider the average of such estimates and assume that the intrinsic value of the stock may be near $25. Often investors consider these estimates highly relevant information because they want to buy stocks that are trading at prices significantly below these intrinsic values. This leads to a third major assumption of fundamental analysis: In the long run, the stock market will reflect the fundamentals. The problem is, nobody knows how long "the long run" really is. It could be days or years. This is what fundamental analysis is all about. By focusing on a particular business, an investor can estimate the intrinsic value of a firm and find opportunities to buy at a discount. The investment will pay off when the market catches up to the fundamentals. One of the most famous and successful fundamental analysts is the so-called "Oracle of Omaha," Warren Buffett, who champions the technique in picking stocks.
Criticisms of Fundamental Analysis
Technical analysis is the other primary form of security analysis. Put simply, technical analysts base their investments (or, more precisely, their trades) solely on the price and volume movements of stocks. Using charts and other tools, they trade on momentum and ignore the fundamentals. One of the basic tenets of technical analysis is that the market discounts everything. All news about a company is already priced into the stock. Therefore, the stock's price movements give more insight than the underlying fundamentals of the business itself.
The Efficient Market Hypothesis
Followers of the efficient market hypothesis (EMH), however, are usually in disagreement with both fundamental and technical analysts. The efficient market hypothesis contends that it is essentially impossible to beat the market through either fundamental or technical analysis. Since the market efficiently prices all stocks on an ongoing basis, any opportunities for excess returns are almost immediately whittled away by the market's many participants, making it impossible for anyone to meaningfully outperform the market over the long term.
Examples of Fundamental Analysis
Take the Coca-Cola Company, for example. When examining its stock, an analyst must look at the stock's annual dividend payout, earnings per share, P/E ratio, and many other quantitative factors. However, no analysis of Coca-Cola is complete without taking into account its brand recognition.
Anybody can start a company that sells sugar and water, but few companies are known to billions of people. It's tough to put a finger on exactly what the Coke brand is worth, but you can be sure that it's an essential ingredient contributing to the company's ongoing success. Even the market as a whole can be evaluated using fundamental analysis. For example, analysts looked at fundamental indicators of the S&P 500 from July 4 to July 8, 2016. During this time, the S&P rose to 2129.90 after the release of a positive jobs report in the United States. In fact, the market just missed a new record high, coming in just under the May 2015 high of 2132.80. The economic surprise of an additional 287,000 jobs for the month of June specifically increased the value of the stock market on July 8, 2016.
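Tying the pieces together, the $20/$24/$26 example from the intrinsic-value section works out as a short calculation. The "discount to consensus" measure below is an illustrative choice, not a standard metric.

```python
# Reproducing the article's worked example in code.

market_price = 20.0
analyst_estimates = [24.0, 26.0]

consensus = sum(analyst_estimates) / len(analyst_estimates)   # 25.0
discount = (consensus - market_price) / consensus             # 0.20

print(f"consensus intrinsic value: ${consensus:.2f}")
print(f"trading at a {discount:.0%} discount to consensus")
```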
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9461230635643005, "language": "en", "url": "https://www.weforum.org/agenda/2018/05/this-forgotten-element-could-be-the-key-to-our-green-energy-future-heres-why/", "token_count": 1281, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.0224609375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:beee86a2-2721-4e16-ae3d-7751f14d301d>" }
From the way we power and heat our homes to the fuel we use in our vehicles, the energy sources on which we depend release harmful carbon dioxide into the atmosphere. Given the scale of the decarbonisation challenge, we need to use many technological solutions in tandem. But one element has so far been forgotten: hydrogen. Our demand for energy keeps growing. Analysts forecast our energy demand in 2050 will be 30-40% higher than today, even assuming we become much more energy-efficient. Increases on this scale are not unprecedented. Over the past 30 years, worldwide energy demand has more than doubled. What is unprecedented is the transformation needed in how we generate that energy. Renewables are getting cheaper, and have received more than $2 trillion of investment globally in the past decade. Yet the share of our energy obtained from fossil fuels has hardly budged. Since 1980, renewables have increased from less than 1% of the primary energy mix to just over 1% today. In contrast, fossil fuels have remained at a stubborn 81% of the primary energy mix. We need to scale up existing low-carbon technologies at a much faster rate – otherwise population growth will continue to outpace investment in renewables, and fossil fuels will continue to dominate. We cannot, however, keep asking for more from technologies that have proved successful to-date. The International Energy Agency (IEA) highlights that only three of twenty-six low carbon innovation areas - solar PV and onshore wind, energy storage and electric vehicles (EV) - are mature, commercially competitive and on track to deliver their share of the climate objectives set out at the 2015 Paris Climate Conference. It is unlikely we can squeeze more out of these three technology areas than is currently projected. Solar PV and onshore wind are intermittent, so need to be used in conjunction with energy storage or other forms of power generation. The high-energy-density batteries that are used for both storage and EVs are causing concern around whether the supply of raw materials needed to manufacture them will be able to keep pace with their rapid uptake. According to BNEF, graphite demand is predicted to skyrocket from just 13,000 tons a year in 2015 to 852,000 tons in 2030, and the production of lithium, cobalt and manganese will increase more than 100-fold. This is already creating pressure on supply chains and prices - and on the people working in these mines, often in incredibly poor conditions. So what other options are available to us? The World Economic Forum’s latest white paper proposes some bold ideas to significantly accelerate sustainable energy innovation and support the uptake of future energy sources. One energy vector mentioned there that is often forgotten is hydrogen. Hydrogen has the potential to decarbonise electricity generation, transport and heat. That’s because when produced by electrolysis - using electricity to split water (H2O) into hydrogen and oxygen - hydrogen does not produce any pollutants. Perhaps the best-known use for hydrogen currently is in transportation. With electric vehicles, drivers are often concerned about their range and the time it takes to recharge. Fuel cell electric vehicles, which run on hydrogen, avoid these concerns, as they have a longer range, a much faster refuelling time and require few behavioural changes. Hydrogen can also be used to heat our homes. It can be blended with natural gas or burned on its own. 
The existing gas infrastructure could be used to transport it, which would avoid the grid costs associated with greater electrification of heat. Once produced, hydrogen could also act as both a short and long‐term energy store. Proponents suggest that surplus renewable power – produced, for example, when the wind blows at night – can be harnessed and the hydrogen produced using this electricity can be stored in salt caverns or high-pressure tanks. Earlier this month a report by the Institution of Mechanical Engineers called for more demonstration sites and a forum in which to discuss hydrogen's long-term storage potential. Hydrogen clearly has several potential uses, but more research, particularly in production and safety, is needed before we can use it at scale. Currently, almost all of global hydrogen (96%) is produced by reforming methane (CH4), a process which ultimately produces carbon dioxide. To be sustainable, this production method would need to be deployed with carbon capture and storage, which is itself in need of further development. Electrolysis produces no carbon emissions. Yet the amount of hydrogen that can be produced using this method depends on the cost and availability of electricity from renewable sources. A report by the Royal Society suggests that electrolysis may be better suited for vehicle refuelling and off-grid deployment rather than for large-scale, centralised hydrogen production. Concerns about the safety of using hydrogen also need to be addressed. A report by the UK's National Physical Laboratory noted two priority safety issues when transporting hydrogen in the grid and combusting it for heat. When hydrogen is combusted, you can't see the flame, so there needs to be a way of detecting whether it is lit. Hydrogen would be transported and stored at high pressures, so we need to find an odorant that works with hydrogen so that people can detect leaks.
On the horizon
The appetite to explore hydrogen as an energy vector is growing at pace, but reports need to be followed up with action. The research challenges that hydrogen poses are not unique to one country or company, so collaboration in developing and trialling technologies will be critical. Both businesses and governments seem to recognise this. Last year the Hydrogen Council, a group of multinational companies with ‘a united vision and ambition for hydrogen to foster the energy transition’, was launched at the World Economic Forum in Davos. And earlier this year governments have also agreed to collaborate on the topic, launching a new theme under the Mission Innovation partnership focussed on bringing hydrogen technologies closer to market. Hydrogen is not the panacea - but then neither is solar PV, offshore wind or battery storage. We need several and varied technologies if we are to decarbonise successfully. Hydrogen looks very likely to be one of them. Our new report, Accelerating Sustainable Energy Innovation, is available here.
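For a sense of scale on the electrolysis route, a back-of-envelope calculation helps. The figures below are rounded public estimates (hydrogen's higher heating value of roughly 39.4 kWh per kg, and an assumed 70% electrolyser efficiency); actual performance varies by electrolyser type and operating conditions.

```python
# Rough, back-of-envelope electrolysis numbers; rounded figures, assumed efficiency.

HHV_KWH_PER_KG = 39.4           # higher heating value of hydrogen, ~39.4 kWh/kg
electrolyser_efficiency = 0.70  # assumed; real systems vary

kwh_per_kg_h2 = HHV_KWH_PER_KG / electrolyser_efficiency   # ~56 kWh per kg

surplus_wind_kwh = 10_000       # hypothetical overnight wind surplus
kg_h2 = surplus_wind_kwh / kwh_per_kg_h2
print(f"~{kg_h2:.0f} kg of hydrogen from {surplus_wind_kwh:,} kWh of surplus power")
```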
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9496424794197083, "language": "en", "url": "http://www.ipsnews.net/2019/03/saving-rainy-day-takes-new-meaning-caribbean/", "token_count": 1199, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.2158203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:64dca63d-41d8-48ab-bb1d-afc800fb0b06>" }
KINGSTOWN, Mar 4 2019 (IPS) - In the tiny eastern Caribbean nation of St. Vincent and the Grenadines, proverbs relating to the weather are very common. Everyone knows that "Who has cocoa outside must look out for rain" has nothing to do with the drying of the bean from which chocolate is made or the sudden downpours common in this tropical nation. So when the government of St. Vincent and the Grenadines announced in 2018 that there was a need to put aside some money for "a rainy day" because of climate change, citizens knew that the expression was both figurative and literal. In this country, highly dependent on tourism, visitors staying in hotels and other rented accommodation have to contribute 3 dollars per night to the climate change fund. They join residents, who have been contributing to the Climate Resilience Levy for over one year through a one percent consumption charge. The funds go into the Contingency Fund. As with many other small island developing states, St. Vincent and the Grenadines has had to struggle to finance mitigation and adaptation for climate change. In the year since the Climate Resilience Levy was established, 4.7 million dollars has been saved for the next "rainy day". The savings represent a minuscule portion of the scores of millions of dollars in damage and loss wrought by climate change in this archipelagic nation over the last few years. In just under six hours in 2013, a trough system left damage and loss amounting to 20 percent of GDP, and extreme rainfall has left millions of dollars in damage and loss almost annually since then. The 4.7 million dollars in the climate fund is a mere 18 percent of the 25 million dollars that lawmakers have budgeted for "environmental protection" in 2019, including climate change adaptation and mitigation. However, it is a start and shows what poorer nations can do, locally, amidst the struggle to get developed nations to stand by their commitments to help finance climate change adaptation and mitigation. "Never before in the history of independent St. Vincent and the Grenadines have we managed to explicitly set aside such resources for a rainy day," Minister of Finance Camillo Gonsalves told lawmakers this month as he reported on the performance of the fund in its first year. He said that in 2019, the contingency fund is expected to receive an additional 4.7 million dollars. "While this number remains small in the face of the multi-billion potential of a major natural disaster, it is nonetheless significant. If we are blessed with continued good fortune, in the near term, the Contingency Fund will be a reliable, home-grown cushion against natural disasters," Gonsalves told legislators. He said the fund will also stand as an important signal to the international community that St. Vincent and the Grenadines is committed to playing a leading role in its own disaster preparation and recovery. Dr. Reynold Murray, a Vincentian environmentalist, welcomes the initiative, but has some reservations. "I am worried about levies because very often, the monies generally get collected and go into sources that don't reach where it is supposed to go," he told IPS. "That's why I am more for the idea of the funding being in the project itself, whatever the initiative is, that that initiative addresses the climate issues." 
“For example, if you are building a road, there should be the climate adaption monies in that project so that people build proper drains, that they look at the slope stabilisation, that they look at run off and all that; not just pave the road surface. That's a waste of time, because the water is going to come the next storm and wash it away." Murray told IPS he believes climate change adaptation and mitigation would be best addressed if the international community stands by its expressed commitments to the developing world. "My honest opinion is that a lot of that financing has to come from the developed countries that are the real contributors to the greenhouse problem," he told IPS. "That is not to say that the countries themselves have no obligation. We have to protect ourselves. So there must be a programme at the national level, where funds are somehow channelled into addressing adaptation and mitigation. The mitigation is more with the large, industrialised countries, but small countries like us, especially the Windward Islands, mitigation is our big issues…" St. Vincent and the Grenadines is making small strides at a time when, as the finance minister said, the 437 million dollar budget that lawmakers approved for 2019 and the nation's long-term developmental plans must squarely confront the reality of climate change. "This involves recovery and rehabilitation of damaged infrastructure, investing in resilience and adaptation, setting aside resources to prepare for natural disasters, adopting renewable energy and clean energy technologies, and strengthening our laws and practices related to environmental protection," the finance minister said.
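The article's headline figures are easy to sanity-check. The sketch below simply reproduces that arithmetic; the visitor-nights figure is a made-up illustration, since the article does not report one.

```python
# Back-of-envelope check of the reported figures.

levy_collected = 4_700_000          # dollars saved in the fund's first year
environment_budget = 25_000_000     # 2019 "environmental protection" budget

print(f"fund vs budget: {levy_collected / environment_budget:.1%}")
# 18.8% -- roughly the 18 percent cited above, which rounds down

# Visitor contribution: 3 dollars per night of paid accommodation
nights = 100_000                    # hypothetical visitor-nights per year
print(f"visitor levy at that volume: ${3 * nights:,}")
```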
{ "dump": "CC-MAIN-2020-29", "language_score": 0.936444878578186, "language": "en", "url": "https://businessecon.org/bookkeeping-debits-and-credits-in-expense-accounts-lesson-8/", "token_count": 1045, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.12109375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:c6b98a77-143c-4e6f-a08c-d4de2463cd3a>" }
Expense types of accounts are the easiest to understand with bookkeeping. In general, only debits are entered in expense types of accounts. Before delving into the debits and credits for expense accounts, there is some accounting terminology to understand. Terminology related to expense types of accounts. There are several different terms used to describe this section of the income statement (profit and loss statement). The following is a short list of the different names used to describe expense accounts:
- General and Administrative
- Operating Expenses
In more than 90% of all business operations, the most expensive overhead item is the cost of the management team. This refers to their compensation, which includes salaries, benefits, payroll taxes etc. In addition, the front office administration costs are also included, even the wages paid to the bookkeeper. The second most expensive line item among expenses is facility costs. Facility costs comprise rent, maintenance, real estate taxes and others. Other forms of expenses include:
- Sales and Marketing
- Insurance – general liability, auto, property, umbrella etc.
- Transportation – often this expense is a function of cost of sales
- Communications – phone, cell phones, internet, radio, GPS systems
- Office – supplies, office technology, software
- Utilities – water, sewer, electricity, gas (sometimes these expenses are included with facilities)
- Professional Fees – legal, outside accounting, consulting
- Taxes – property, revenue, licenses
- Depreciation – sometimes depreciation is included in cost of sales types of accounts depending on the nature of the business
- Other – banking, meals and entertainment, travel, training & miscellaneous
The goal for the reader is to understand that these expenses can be grouped under the various terms described above. Now let's get back to debits and credits.
Debits and Credits
As I explained in Lesson 2, the dual entry system used in bookkeeping uses debits and credits to ensure balance in the books. Expense accounts receive their debits mostly from two respective journals. If you are unsure of what this is referring to here, then please read Lesson 3 explaining ledgers and journals. From above, the primary expense in the overhead section (expense types of accounts) is management payroll. Therefore, the payroll journal is one of the primary sources of the debits that are posted to the expense ledgers. The secondary journal is of course the purchases journal. For those of you following this series of lessons, you should immediately realize that journals can feed information to different types of accounts. For example, the purchases journal feeds information to Cost of Sales and to Expenses. JOURNALS ARE USED TO RECORD ECONOMIC TRANSACTIONS IN CHRONOLOGICAL DATE ORDER. THE JOURNALS RECORD BOTH THE DEBIT AND THE CREDIT. BOTH SIDES OF THE ENTRY ARE THEN TRANSFERRED TO THE RESPECTIVE LEDGER (ACCOUNT) FOR FINAL POSTING. JOURNALS ACT AS SOURCES OF INFORMATION AND FEED DEBITS AND CREDITS TO MORE THAN ONE TYPE OF AN ACCOUNT. AS AN EXAMPLE, THE PAYROLL JOURNAL FEEDS ENTRIES TO COST OF SALES, EXPENSES, LIABILITIES, AND TO THE ASSET TYPE OF ACCOUNTS. With regard to expense accounts, they will always end in debit balances. It is rare, very rare, for even a credit entry to be posted to an expense account. Credits do happen and are most often a function of some type of purchase return to a supplier, or a vendor providing a credit related to services rendered.
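Before turning to those rare credit entries, here is a minimal sketch of the normal journal-to-ledger flow described above. The account names and amounts are invented for illustration, and the sign convention (debits positive, credits negative) is just one common way to model it.

```python
from collections import defaultdict

ledger = defaultdict(float)          # positive balance = net debit

def post(entry):
    """Post one journal entry: debits positive, credits negative, netting to zero."""
    assert abs(sum(entry.values())) < 1e-9, "entry out of balance"
    for account, amount in entry.items():
        ledger[account] += amount

# Payroll journal: management salaries hit the expense ledger as a debit
post({"Salaries Expense": 5_000.00, "Cash": -5_000.00})

# Purchases journal: the same journal can feed different account types
post({"Office Supplies Expense": 300.00, "Accounts Payable": -300.00})

print(dict(ledger))   # both expense accounts end with debit (positive) balances
```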
The following are some examples of credits posted to expense accounts:
1) Often banks will subtract or take back a fee charged to their client for relationship purposes. In this case the cash account increases via a debit and the expense account – banking fees – is issued a credit, reducing the overall total bank fees.
2) Another common credit posted to expense accounts are refunded over-payments for different types of expenses. A good example of this are tax over-payments. The government returns the over-payment to the business and, just as in the banking example above, cash is debited for the value and the respective tax expense is decreased via a credit to that account.
3) A third, and also common, credit for expense accounts is a simple error made in recording the original transaction. Most bookkeepers use a credit entry to fix the problem.
In summation, if there is any concern or possibility of having an ending credit balance for an expense type of account, the answer is 'YES' it can happen. This is an advanced bookkeeping function which is covered in future lessons. For now, you want to think that expense accounts should only have debit ending balances. This is still in the early stages of learning about bookkeeping, so for you it is still straightforward – Expense Accounts Should Have Debit Balances and Entries; Credit Entries Can Exist, But Are Rare.
ACT ON KNOWLEDGE.
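As a footnote, example 1 above can be sketched in the same style as the earlier ledger snippet. The $45 fee is an invented amount; the point is that the reversal credits the expense account, which still ends no lower than zero.

```python
from collections import defaultdict

ledger = defaultdict(float)            # positive = net debit, negative = net credit

# The bank-fee example: the fee is first charged, then reversed by the bank.
ledger["Bank Fees Expense"] += 45.00   # debit: fee charged
ledger["Cash"] -= 45.00                # credit: cash paid out

ledger["Cash"] += 45.00                # debit: bank returns the fee
ledger["Bank Fees Expense"] -= 45.00   # credit: the rare credit to an expense

print(ledger["Bank Fees Expense"])     # 0.0 -- the credit offsets the earlier debit
```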
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9315679669380188, "language": "en", "url": "https://rd.springer.com/book/10.1007%2F978-3-319-69799-4", "token_count": 169, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.15625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:4546097e-822d-4ed1-b8a6-d4856d8c2870>" }
About this book
This book explores the relationships between financial inclusion, poverty and inclusive development from Islamic perspectives. Financial inclusion has become an important global agenda and priority for policymakers and regulators in many Muslim countries for sustainable long-term economic growth. It has also become an integral part of many development institutions and multilateral development banks in efforts to promote inclusive growth. Many studies in economic development and poverty reduction suggest that financial inclusion matters. Financial inclusion, within the broader context of inclusive development, is viewed as an important means to tackle poverty and inequality and to address the sustainable development goals (SDGs). This book contributes to the literature on these topics and will be of interest to researchers and academics interested in Islamic finance and financial inclusion.
Keywords: Financial inclusion, Inclusive development, OIC, Islamic microfinance, Zakah, Takaful system, Zakat institution, Islamic finance, Waqf
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9526669979095459, "language": "en", "url": "https://www.allotsego.com/tag/electric-cars/", "token_count": 469, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.12158203125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e0805e1f-f938-48ee-baba-9d52c9ed1bb3>" }
To the Editor:
In his Jan. 24-25 column, former DEC Commissioner Mike Zagata makes an argument that is theoretically interesting but falls apart when you look at the actual numbers behind it. Zagata compares electric cars to conventional gas-powered vehicles and points out that, while electric cars are responsible for lower carbon emissions during the driving part of their life cycle, it's more energy intensive to manufacture them. This sets up a kind of decision that's familiar to business people or households: Should I go for Option A that's more expensive to buy but cheaper to operate, or Option B that's cheaper off the shelf but costlier to use? It's a good question to ask, and most people would then want to know: how much cheaper is Option B to buy, and how much more expensive is it to operate? Mr. Zagata doesn't ask that, but instead jumps right to his preferred conclusion: Electric cars are a bad idea. It turns out people have run the numbers, and Mr. Zagata's claim is wrong. The higher carbon emissions during manufacture are easily made up for, and more, by the lower carbon emissions while driving. And that's true even if you don't recycle the battery, so recycling makes the case for the electric car even stronger. And it's true even if your electricity is from coal. An electric car is 80 percent to 90 percent efficient in terms of turning the electricity in the battery into the car's motion. A gasoline-powered car ranges from 0 percent (when it's idling) to 30 percent. By the time you figure in additional considerations, like the energy lost in generating the electricity (assuming it's from a coal- or gas-fired plant), or the energy spent pumping, shipping, and refining the oil that powers a conventional car, the "well-to-wheels" efficiency of the electric car is about 28 percent, while a gasoline car comes in around 14 percent. That difference is what allows electric cars to make up for the slightly larger impact they have during manufacturing. And if the electricity comes from cleaner sources than coal or gas, so much the better.
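The letter's "run the numbers" claim can be illustrated with a payback calculation. All inputs below are round assumptions picked for a clean example, not measured values, but they show why a per-mile emissions advantage repays a one-time manufacturing penalty.

```python
# Illustrative payback arithmetic; every input is an assumption for the example.

extra_manufacturing_t_co2 = 4.0      # assumed extra emissions to build the EV (tonnes)
gas_car_kg_per_mile = 0.40           # assumed, consistent with ~14% well-to-wheels
ev_kg_per_mile = 0.20                # assumed, consistent with ~28% on fossil power

saving_per_mile = gas_car_kg_per_mile - ev_kg_per_mile      # kg CO2 saved per mile
breakeven_miles = extra_manufacturing_t_co2 * 1000 / saving_per_mile

print(f"break-even after ~{breakeven_miles:,.0f} miles")    # ~20,000 miles
# Well within a typical vehicle lifetime, so the driving savings dominate.
```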
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9668791890144348, "language": "en", "url": "https://www.coin-report.net/en/deflation/", "token_count": 413, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.462890625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:dab0850d-829b-45d5-807d-916924c17d6f>" }
What is deflation? Deflation refers to the phenomenon whereby more and more goods can be purchased with one unit of a currency. In other words, the value of money within a given territory continues to increase over time. It has been around for many centuries, and it occurs repeatedly in different forms. It has a negative impact on the export economy of a country, as it becomes increasingly difficult to sell products abroad. On the other hand, the inhabitants of the respective country can travel more cheaply during this phenomenon, or buy foreign products. Japan is an example of a country that has been struggling with deflation for many years. The issue of new money by the central bank may curb deflation. Deflation and cryptocurrencies Even though many people are unaware of the fact, even cryptocurrencies can be deflated. A single unit of the money can be used to purchase more goods than before. Deflation is always given in a percentage, which is projected for the calendar year. If a deflation of 5 percent is indicated, 5 percent more goods can be purchased for the same amount of money as in the previous year. Of course, even during high levels, individual goods may become more expensive, for example, because of a shortage. Therewell it is, therefore, always calculated using an average basket. Since crypto money is a global currency, the effects of it are slightly different than in the case of central bank money. However, owners of the respective coins stand to profit significantly from it. Not only can you buy more goods for the same amount, you can also exchange your crypto money for more euros or dollars. Alexander Weipprecht is the managing partner of Provimedia GmbH. As a trained IT specialist for application development, he has been advising leading companies on the following topics for more than 10 years: online marketing, SEO and software. Cryptocurrency is becoming increasingly important to businesses and investors. Through Coin Report and Krypto Magazin Germany, Alexander wants to give all people easy access to the subject matter.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9621406197547913, "language": "en", "url": "https://www.eliomotors.com/a-nation-of-innovation-the-importance-of-american-entrepreneurs-trending-topics/", "token_count": 590, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.04443359375, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:5c7a49af-93ee-411d-8387-3cff89dfb290>" }
Entrepreneurs are not ordinary people. They have both a vision that can change the world and the drive and tenacity to make that vision come to fruition. At its core, the United States is a nation created and driven by entrepreneurs and that innovative spirit is ingrained in our national DNA. INC.com believes that the American entrepreneurial spirit is disappearing. For the United States to remain a country that drives innovation, we need to reverse this trend. The history of the United States is full of examples that showcase the American entrepreneurial spirit. The founding fathers, in their way, were entrepreneurs. They possessed an uncompromising vision of what a nation should be and they had the resolve to found a country that would forever transform the world. Entrepreneurs such as Thomas Edison, Henry Ford, Bill Gates, and Steve Jobs have all exhibited this innovative spirit and our country would look much different without their drive to make a difference. Entrepreneurs often have humble beginnings. For example, Steve Jobs famously started Apple Computers in a garage. While the road to success for a startup can be perilous, their importance cannot be overstated. Startups create more jobs at a quicker pace than established companies. According to Forbes, “65% of the net new jobs are created by small businesses, while 1 million jobs are cut each year by large corporations.” Additionally, a study conducted by the Kauffman Foundation found that “net job growth occurs in the U.S. economy only through startup firms.” Not only are American entrepreneurs crucial to the United States, the Financial Times believes they are essential to the global economy. According to the Financial Times, “Entrepreneurs drive innovation — often much more quickly than established competitors. Successful entrepreneurs, by definition, have figured out a way to do things better. They have challenged the status quo, asked tough questions and competed with established businesses. When an entirely new industry is created, the odds are that an entrepreneur is responsible.” The need for American entrepreneurs has never before been so crucial. Forbes has found that the United States’ role in the global economy is shrinking at an alarming pace. The question becomes, what can we do to fix it. Forbes believes “The first step is to improve our economy. We have to create an environment that promotes the entrepreneurial spirit.” One way to cultivate this spirit is through the revitalization of American manufacturing. By increasing American manufacturing output, not only do we improve the trade deficit, but we re-establish the United States as the thought leaders in the global economy. The product of the American entrepreneurial spirit is ubiquitous in everyday life. The car you drive, the device you are using to access this article, and the freedoms you enjoy are all the end result of this spirit. The American entrepreneurial spirit has changed the world, and for that to continue, we need to actively support innovators and entrepreneurs.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.948720395565033, "language": "en", "url": "https://www.gstindia.com/a-dummys-guide-to-gst/", "token_count": 733, "fin_int_score": 4, "fin_score_model": "en_fin_v0.1", "risk_score": 0.4296875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:46bbb5ae-2651-4236-98ba-6243d3cdfdca>" }
(The writer is a popular blogger and current affairs commentator.)

The introduction of Goods and Services Tax, or GST, is arguably the most ambitious tax reform attempted by India since Independence. It aims at transforming the country into a common market, dissolving artificial economic barriers that create differential tax regimes for similar products and services across States. It is a destination-based tax system that levies tax at the consumer end rather than at source and at various stages of value addition, which adds layers of taxation and ultimately inflates the cumulative tax impact (a small numerical sketch of this cascading effect follows at the end of this article). Recognising the importance of indirect taxation in a country like India, where the direct taxation base is small, GST casts the net at the point of consumption or delivery of service while rationalising the total levy, merging multiple taxes such as Excise Duty, VAT, Service Tax, etc. into one composite rate.

GST is designed to give a boost to manufacturing, not only by reducing the incidence of tax (from a current compounded level of 25-26% to 18%) but also by increasing the physical and bureaucratic ease of inter-State movement. At the same time there are concerns about States that are high on manufacturing losing out to States that are essentially consumption centres. This is addressed by guaranteeing States compensation for revenue loss for up to five years.

GST will also radically change the way distribution and transportation of goods happen, which should benefit both consumers and manufacturers, ultimately also expanding markets and reach. Costs of maintaining warehouses in each State and non-value-adding transhipment will practically disappear, adding to the profitability of manufacturers while making it worthwhile for organised logistics companies to invest in more efficient vehicles and systems of transportation.

However, in the process of harmonisation, taxes on services may marginally increase from current levels, which would pinch the middle classes more palpably, as eating out, travel and mobile bills become more expensive. Similarly, some sectors, like textiles and branded jewellery, which enjoy a lower tax rate today, will get more expensive. Therefore, whether it will be perceived as "Acche Din" by the common man is doubtful, as people are more prone to notice the taxes on service bills (at restaurants, for example) while taxes are easily hidden in the MRP (cost of goods). What the Government is banking upon, however, is the overall boost to the economy. If indeed the expected 1-2% spurt in GDP does materialise, all will be forgiven.

While GST would be a major political victory for the Narendra Modi Government and take away its image of not being able to push through economic reforms, its implementation is not going to be a painless process. Therefore, putting in place a grievance redress mechanism will be as important as careful chaperoning of the process of implementation. Any setback will be a bigger embarrassment than not being able to pass the Bill.

GST will be one achievement that would vindicate Narendra Modi's unshakeable trust in Arun Jaitley, whose singular contribution it would be for getting the main Opposition to the negotiating table. But it will also need all of Jaitley's legal acumen to overcome the technical minefields strewn in the path of the GST's roll-out. Interestingly, States have insisted on the exemption of alcohol for "human consumption" for understandable reasons. So there is no relief for the wicked.
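To illustrate the cascading ("tax on tax") effect described above, here is a hedged Python sketch. The stage margins and tax rates are hypothetical figures of our own; the model is deliberately stylized and does not attempt to reproduce the actual 25-26% pre-GST incidence cited in the article.

```python
# Cascading taxation versus a single GST with input tax credit
# (all rates and margins hypothetical).

def cascading_price(base, margins, rate):
    """Tax charged at every stage on the full tax-inclusive price,
    with no credit for tax already paid upstream."""
    price = base
    for margin in margins:
        price *= (1 + margin)  # value addition at this stage
        price *= (1 + rate)    # tax levied on the whole price again
    return price

def gst_price(base, margins, rate):
    """With input tax credit, tax effectively falls once, on the
    final value, however many stages the goods pass through."""
    value = base
    for margin in margins:
        value *= (1 + margin)
    return value * (1 + rate)

stages = [0.20, 0.15]  # two stages of value addition
print(round(cascading_price(100, stages, 0.13), 2))  # 176.21: tax compounds
print(round(gst_price(100, stages, 0.18), 2))        # 162.84: single levy
```

Even with a lower nominal per-stage rate, the cascading regime ends up costlier than a single, higher GST rate applied once to final value, which is the rationalisation the article describes.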
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9535587430000305, "language": "en", "url": "https://www.trainingsadda.in/blockchain-the-strengths-weaknesses/", "token_count": 1656, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.018310546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:17f6e23a-9e5f-4592-abd4-9cc22ef90db5>" }
Blockchain in Scientific and Medical Research: The creation of the Sophia robot series is a good example of the usefulness of the blockchain. The start-up SingularityNET has been able to develop these articulated artificial intelligences with a human face thanks to a blockchain system. No information was lost along the way, and the researchers were able to work on the project from several corners of the globe. This lack of fragmentation of work on the blockchain has also interested the European Union, which sees applications in the field of health, the management of personal data and the processing of logistical issues.

What are the strengths and weaknesses of the blockchain?

When Satoshi Nakamoto presented Bitcoin in October 2008, the most emblematic element of the project was the blockchain. Even if at the time we did not imagine all the implications it could have, it was a revolution. After about ten years of existence (to be exact, we should write "of being put into practice", because blockchain technology existed before the appearance of Bitcoin), the blockchain is gradually beginning to show its full potential. Unfortunately, there are also still a number of problems related to the youth of this technology. As with everything, there are positives and negatives, which we will address in turn. Keep in mind that this article is based on the state of the technology in November 2018. Since the world of blockchain evolves quickly, some disadvantages can rapidly become ancient history; in the same way, advantages unsuspected so far can appear.

Advantages of the Blockchain:

Advantage Number 1: Tamper-proof Data, Traceability and Ownership

Once information is written on a decentralized blockchain, it becomes tamper-proof in practice. Indeed, the principle of the blockchain is to make any data accepted by miners impossible to modify afterwards (a minimal sketch of this hash-chaining mechanism appears at the end of this section). However, this only holds when the blockchain in question is truly decentralized, that is to say, controlled not by a single entity but by a large number of people independent of each other. This will have a huge impact in a lot of areas where the irrefutability of information is paramount. Here are some examples:
- In the event of a dispute over any matter of which an element has been recorded on the blockchain, a judge could easily consult the blockchain and use it as evidence in reaching a decision.
- It is possible to register the owner of a given asset on the blockchain, for example who owns which building, who owns a particular work of art, and so on. Ownership would then be guaranteed.

Advantage Number 2: Removing Intermediaries

Thanks to the appearance of Bitcoin, the first concrete use case of blockchain technology, it has become possible to carry out "dematerialized" peer-to-peer transactions. It is now possible to exchange value on a network (on the internet) without a trusted third party, and this is revolutionary! To take a shortcut, that means we no longer need the banks. The fact that a computer protocol organizes exchanges reduces transaction costs:
- no verification or control fees
- automation of tasks
- no mistakes

Advantage Number 3: Protocol Security and Speed

If we take the example of Bitcoin, it has never been hacked. All the scandals involving theft of bitcoin took place either at the level of trading sites or via scams.
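Here is a minimal, self-contained Python sketch of the hash-chaining idea behind Advantage Number 1. It is our own illustration, not code from any real blockchain: each block commits to the hash of the previous block, so altering any past record invalidates every later link. (On a live network an attacker would additionally have to out-mine the honest majority, the 51% scenario discussed just below, to get a rewritten history accepted.)

```python
# Toy hash chain (illustrative only): each block's hash covers the
# previous block's hash, so tampering anywhere breaks the chain.
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis placeholder
for record in ["Alice pays Bob 1", "Bob pays Carol 2", "Carol pays Dan 3"]:
    prev = block_hash(prev, record)
    chain.append((record, prev))

# Altering the first record yields a hash that no longer matches what
# the later blocks committed to, so the forgery is immediately visible.
forged = block_hash("0" * 64, "Alice pays Bob 9")
print(forged == chain[0][1])  # False: the stored chain does not verify
```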
Indeed, to compromise with certainty the integrity of a protocol such as Bitcoin, it would be necessary to control 51% of the computing power dedicated to mining this crypto-asset. This is called a 51% attack.

In addition to this level of security, transaction times are extremely low. When comparing a Bitcoin transaction to a SEPA transfer, the former is much faster: a Bitcoin transaction is of the order of a few minutes, while a SEPA transfer takes several hours or even days. Some exceptions are worth noting. When the Bitcoin network is congested, transactions can take several days and fees can be significant. This was the case at the end of 2017, with the explosion in the price of Bitcoin and other cryptocurrencies.

Advantage Number 4: Creating a New Decentralized Digital Economy

Even if the blockchain does not allow a total elimination of intermediaries, it can foster decentralization and allow the emergence of a more decentralized economy. Take the internet: only a few actors control almost the entire web (Google, Facebook, Amazon, Apple). The web is now centralized. If you want to create an application, it has to be built for iOS (Apple) and/or Android (Google). Your business is entirely dependent on the goodwill of Google: if it does not index your website or list your application, you may as well say that you have not the slightest chance. The advantage of decentralization is that creators no longer have sole control over the companies and organizations they create.

Disadvantages of the blockchain:

Disadvantage #1: Few people trained in this technology

The main flaw of this technology is that few people are yet able to master it professionally. There are still almost no institutions where one can study cryptocurrencies. Some more specific factors explain this lack of professionals:
- Programming smart contracts requires, in some cases, the use of a specific programming language. To program an Ethereum contract, you have to master the Solidity language. Some new blockchain projects attempt to use more traditional programming languages such as C++, Java or Python, but they are usually still at the project stage.
- The blockchain is not yet commonplace in the corporate world (despite many trials, mainly in large companies). Very few companies use it and have employees who can manage this component. This is especially true in small and medium-sized businesses.
- In the long term, this technology will undoubtedly replace certain professions or considerably reduce the number of people working in them. There will be resistance from those who risk losing their jobs, and this will significantly slow down the training of people who can use this tool.

Disadvantage #2: Recent and imperfect technology

Blockchain technology is still recent. As a result, not everything is perfect, and there is still a lot to do to make it easier to use. Here are some problems that currently exist with classic blockchains:
- It is not possible to go back. Once a transaction is completed, it is irrevocable. This seems normal when transferring money to someone, but it is a problem when one makes a mistake while sending a crypto-asset: those tokens are lost, and no one will be able to use them. The only remedy is a hard fork, like the one that produced Ethereum Classic.
- Some cryptocurrency transfer systems are difficult to understand. For example, a beginner will always have trouble understanding the "gas" system of the Ethereum network.
It is difficult to know what gas is, or what amounts to enter (a small numerical sketch of how gas fees are computed follows at the end of this article).

As we have seen, the existing disadvantages are mainly due to the novelty of this technology, which must continue to develop in order to gain in efficiency. The advantages offered by the blockchain, on the other hand, are absolutely phenomenal: it can significantly improve some aspects of our lives and revolutionize the way we approach many things. While there is still a long way to go before the technology is adopted in everyday life, it is undeniably something that can significantly improve our lives in the long run.

At present, many world-famous companies are starting to take an interest in blockchain, analysing how it can improve their own operations or the goods and services they produce. The same is true of many governments. Some have put specific legislation in place to regulate this sector properly, while others have created their own crypto-assets (although these are centralized).

Sundar works at mippin.com as a manager. He has been a writer for more than a year and also works as a freelance SEO analyst. He helps his clients grow their businesses by advising them on how to advertise and market.
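Returning to the Ethereum gas system mentioned at the start of this section, here is a hedged numerical sketch in Python. The fee relation (gas used multiplied by gas price) is the standard one; the 21,000 gas consumed by a plain value transfer is Ethereum's usual figure for a simple transfer, and the 20 gwei price is an illustrative assumption of ours, not a value from the article.

```python
# Illustrative gas-fee arithmetic (hypothetical gas price).
GWEI = 10**9    # 1 gwei = 1e9 wei
ETHER = 10**18  # 1 ether = 1e18 wei

def tx_fee_eth(gas_used: int, gas_price_gwei: float) -> float:
    """Fee in ETH: gas consumed by the transaction times the price
    the sender offers per unit of gas."""
    return gas_used * gas_price_gwei * GWEI / ETHER

# A plain value transfer consumes 21,000 gas; at an assumed 20 gwei:
print(tx_fee_eth(21_000, 20))  # 0.00042 ETH
```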
{ "dump": "CC-MAIN-2020-29", "language_score": 0.937754213809967, "language": "en", "url": "https://www.uni-ulm.de/en/mawi/rwwp/accounting-auditing/school-programme/", "token_count": 599, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0751953125, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:271c6e6c-535f-43db-b664-52612be9d4b3>" }
The department "Accounting and Auditing" The accounting system is an important part of the information system of a company. In external financial accounting (bookkeeping and accounting) all transactions are displayed, while internal accounting (cost accounting) records operation-related issues. Both systems are in a close relation. The external audit of financial accounting is the responsibility of the auditors, who have the task to perform the statutory annual audits in this context. The auditor checks the legality of the financial statements and also takes note of a number of additional audit-related services. The importance of accounting, particularly external accounting, will be greater in regards of the increasing internationalization and the ever-increasing focus of the corporate management on maximizing the corporate value (shareholder value) in the future. Therefore, it represents an important sector of the economy. The training course provides the students the opportunity to gain insight into the world of accounting and auditing. Moreover the students get the possibility to research and work on a topic independently. Possible topics are: • External and internal accounting in a company: The accounting system is used for systematically identifying, preparation, analysis and presentation of future and current facts and processes in the operation. While external accounting especially deals with the annual financial statement and the profit and loss statement and thus mainly provides information for external users, the primary function of internal accounting is the planning within the company. It provides information which is only useful to internal users. The tasks and functions of the respective parts of the accounting system should be developed and presented in this session. • The profession of auditor: The job description of an auditor has considerably developed in recent years. The tasks of auditors mainly include the audit of annual and the consolidated financial statements of companies. They have a central and important role in the economic world, their work is connected to diverse and varied activities, great responsibility and high standards. This task of this topic is to develop a job description of the auditor and to explain his role in the economic world. • The legal forms of companies and their different taxation: This topic deals with the choice of the legal form, which determines the legal framework of a company. Decision criteria are e.g. capital procurement, participation, accounting, liability and tax issues. Here, in particular, the differences in taxation of individual companies, partnerships and corporations are developed and possible consequences are discussed. • Management and supervisory boards in the stock company: The management board is the management body of a stock company, they represent the company both externally and internally and are moreover entrusted with the management of the business. They are controlled by the supervisory board, who has to ensure an appropriate and correct behaviour of the management board. Moreover, the supervisory board has an advisory function. This topic deals with the duties and requirements of the institutions and the interdependencies in the exercise of their respective activities.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9671363234519958, "language": "en", "url": "https://borgenproject.org/tag/hunger-in-san-marino/", "token_count": 437, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.275390625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:e5494669-77bb-469f-876f-3569d2d6d6a3>" }
San Marino, a microstate surrounded by Italy, is known as the world's oldest republic, dating back to the year 301. Its economy runs on tourism, banking and the manufacturing and export of ceramics, clothing, fabrics, furniture, paints, spirits, tiles and wine, which together account for more than half of San Marino's GDP. With a population of fewer than 33,000, San Marino does not see hunger as a problem.

The recession of 2009 had a major impact on tourism as an economic stimulus. However, by the end of 2016, the unemployment rate had dropped from 9.3 percent to 8.5 percent. In comparison, the United States had a rate of 4.7 percent at the end of 2016.

San Marino, like many European countries, uses the euro as its currency. As of August 2017, the exchange rate is one euro to 1.19 USD. This means that the costs of everyday items are almost equivalent to prices in the United States. Hunger in San Marino could, in principle, be affected by the cost of living, but that is not the case. The average monthly salary after taxes is about 2,445 euros, while the cost of living is relatively inexpensive: apartment rent averages about 600 euros per month, and grocery costs remain low. The cost of living can be compared to that of small cities in the U.S.

In 2010, it was reported that 67.4 percent of females in San Marino were overweight, compared with 60.5 percent of males. These data show that hunger in San Marino is not a problem; rather, overeating and unhealthy diets are more of a concern for the country.

On Monday, October 16th, the Republic of San Marino will celebrate World Food Day, which is organized yearly by the Food and Agriculture Organization of the United Nations (FAO). The day marks the anniversary of the FAO's founding and raises awareness of the world hunger problem. Despite not having hunger problems of its own, San Marino makes sure to advocate for other countries that do deal with severe hunger.

– Stefanie Podosek
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9502511024475098, "language": "en", "url": "https://consumergoods.indiabizclub.com/info/consumer_protection_act/weights_and_measures", "token_count": 646, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0054931640625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:84ccb53a-5631-4691-aa23-48b9fd46335f>" }
Weights and Measures

Uniform standards of weights and measures, based on the metric system, were established in the country under the Standards of Weights and Measures Act, 1956. In order to establish the international system of units, align Indian laws with international practices and remove certain deficiencies, comprehensive legislation, namely the Standards of Weights and Measures Act, 1976, was enacted, replacing the 1956 Act. The 1976 Act contains, among other things, provisions for the regulation of prepacked commodities sold to consumers, so as to establish fair trading practices. Provisions of the Act relating to packaged commodities and the relevant rules, namely the Standards of Weights and Measures (Packaged Commodities) Rules, 1977, have been in force since September 1977. According to these provisions, every package intended for retail sale is required to carry information on the name of the commodity, the name and address of the manufacturer or packer, the net quantity, the month and year of manufacture or packing, and the retail price. The mandatory declaration of the retail sale price must be inclusive of all taxes. The Rules contain similar provisions for the regulation of packaged commodities imported into India.

Under the provisions of the 1976 Act, the models of all weighing and measuring instruments must be approved before production commences. Under the relevant rules, namely the Standards of Weights and Measures (Approval of Models) Rules, 1987, recognised laboratories examine the models for their conformity to the standards.

The forty-second Amendment of the Constitution moved the subject of 'Enforcement of Weights and Measures' from the 'State List' to the 'Concurrent List'. To ensure uniformity of enforcement across the country, a Central Act, namely the Standards of Weights and Measures (Enforcement) Act, 1985, was brought into force. It contains provisions for effective legal control of weights, measures and weighing/measuring instruments used in commercial transactions, industrial production and activities involving public health and safety.

India is a member of the International Organisation of Legal Metrology. This Organisation was set up to achieve worldwide uniformity in laws relating to legal metrology (weights and measures) and to make international trade smooth and practical.

Legal standards of weights and measures of the States and Union Territories are calibrated in the four Regional Reference Standard Laboratories (RRSL) located at Ahmedabad, Bhubaneswar, Bangalore and Faridabad. These laboratories also provide calibration services to industries in their respective regions and are among the laboratories recognized for conducting model-approval tests on weighing and measuring instruments. The scheme for establishing permanent premises for RRSL Guwahati, to cater to the needs of the North-Eastern States, commenced in the Ninth Plan and is underway.

The Indian Institute of Legal Metrology, Ranchi, under the administrative control of the Ministry of Consumer Affairs, Food and Public Distribution, imparts training in legal metrology and allied subjects. Apart from the enforcement officials of the States, nominees from African, Asian and Latin American countries also attend the programme run by the Institute. The Institute has also recently started imparting training to the non-judicial members of the Consumer Disputes Redressal Agencies of the States.
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9223029613494873, "language": "en", "url": "https://hillnotes.ca/2018/03/08/international-womens-day-understanding-gender-responsive-budgeting/", "token_count": 1569, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.419921875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:2bcb66b8-6ef2-4e73-bbb4-3367e995f62a>" }
You may also wish to consult other HillNotes in honour of International Women’s Day (Disponible en français : Journée internationale des femmes : Comprendre la budgétisation sensible à la sexospécificité) International Women’s Day – observed every year on March 8th – provides Canadians with an opportunity to reflect on women’s achievements and to promote the advancement of gender equality. This year, Canada’s theme is #MyFeminism, “inspired by the role feminism continues to play in shaping Canada and countries around the world.” At the international level, the UN Women’s theme is “Time is Now: Rural and urban activists transforming women’s lives.” Beyond International Women’s Day, many governments and parliamentarians have committed to advancing gender equality by implementing gender-sensitive policies and initiatives. Gender-responsive budgeting (GRB) is one such initiative. What is Gender-Responsive Budgeting? According to the Council of Europe, GRB can be understood as a “gender based assessment of budgets incorporating a gender perspective at all levels of the budgetary process and restructuring revenues and expenditures in order to promote gender equality.” The Organisation for Economic Co-operation and Development (OECD) notes that there is no standard model of GRB. The scope and quality of GRB can vary significantly. As well, GRB can be enacted by a country through a number of means, such as the establishment of government policies or the passing of legislation. National Budgets are Often Gender-Blind A national budget is a government’s most significant economic policy statement, signalling social and economic priorities by outlining revenues and spending for a financial year. A number of GRB stakeholders – for example, the Commonwealth Secretariat and the International Monetary Fund (IMF) – explain that national budgets are often assumed to be gender-neutral, but they are actually “gender-blind,” because they do not recognize the differing effects of government spending on women and men, and often inadvertently reinforce existing gender inequalities. GRB initiatives can expand their analysis to include additional identity factors such as socio-economic class, gender identity, age, geography and ethnicity. The Canadian Experience with Gender-Responsive Budgeting (i) Initiatives at the Federal Government Level In 1995, Canada signed the Beijing Declaration and Platform for Action, which included a call for “the integration of a gender perspective in budgetary decisions on policies and programmes.” As a response to the Platform for Action, the Government of Canada developed The Federal Plan for Gender Equality in 1995, which committed the federal government to implementing Gender-based Analysis (GBA) throughout federal departments and agencies. The Federal Plan also stated that “federal departments and agencies will monitor the impact of fiscal restraint and budget cuts over the next three years to ensure that they do not disproportionately or adversely affect women and members of other designated groups.” While the Government of Canada officially adopted GBA – which evolved into Gender-based Analysis Plus (GBA+) – no GRB initiatives were initially established at the federal level. In 2005, the federal government’s Expert Panel on Accountability Mechanisms for Gender Equality, in its report Equality for Women: Beyond the Illusion, examined GRB and issued the following recommendation: Let the Minister of Finance set the example. 
We believe that drawing from international lessons, the Minister of Finance could apply gender-based analysis rigorously to one key area of the 2006 Budget.

In March 2017, Budget 2017 included a Gender Statement, which:

Sets out how decisions made in the current budget were informed by gender considerations, with the ultimate goal of delivering the best possible outcomes for Canadians in all their diversity. This first attempt will inform and improve the process used for future statements.

On 27 February 2018, the federal government released Budget 2018, which stated that "no budget decision was taken without being informed by Gender-based Analysis Plus." Furthermore, in Budget 2018 the federal government announced that it will "introduce new GBA+ legislation to make gender budgeting a permanent part of the federal budget-making process" with the goal of "extending the reach of GBA+ to examine tax expenditures, federal transfers and the existing spending base, including the Estimates."

(ii) Parliamentary Initiatives

In 2008, the House of Commons Standing Committee on the Status of Women's report, entitled Towards Gender-Responsive Budgeting: Rising to the Challenge of Achieving Gender Equality, included a list of 27 recommendations on implementing GRB at the federal level. For instance, it recommended that:

- Status of Women Canada take the lead on the initial design and delivery of the GRB process; and
- The federal government table legislation specific to GRB, with the creation of an Office of the Commissioner for Gender Equality which would gain responsibility for the implementation and accountability of the GRB process.

International Examples of Gender-Responsive Budgeting

For over a decade, international organizations such as the United Nations, the Commonwealth Secretariat, and the Organisation for Economic Co-operation and Development have encouraged governments to implement GRB. Goal 5 of the Sustainable Development Goals – outlined in the UN's 2030 Agenda for Sustainable Development – is to achieve gender equality, and it includes, as one of its indicators of success, the "proportion of countries with systems to track and make public allocations for gender equality and women's empowerment," including budget allocations.

According to the International Monetary Fund's 2016 report, more than 80 countries have implemented some form of GRB. While many GRB initiatives are launched and run for a few years, they often do not continue on a permanent basis.

According to the European Institute for Gender Equality, the successful implementation of GRB depends on a number of factors:

- political will and political leadership;
- high-level commitment of public administrative institutions;
- improved technical capacity of civil servants;
- civil society involvement; and
- [availability of] sex-disaggregated data.

Selected international examples of GRB include:

- Australia: In 1984, the Australian federal government undertook the first nationwide GRB initiative in the world, which ended in 2014.
- Belgium: Belgium's Gender Mainstreaming Act of 2007 provides the legal basis for the compulsory identification of government funds earmarked for the promotion of gender equality and imposes a "gender test" for every new policy measure. The Institute for Equality of Women and Men is responsible for administering this law.
- Sweden: Sweden’s GRB initiative is upheld by “high level political commitment.” The government of Sweden says that it implements GRB in the following manner: “The gender equality effects of budget policy are evaluated, […] a gender perspective is continuously applied in the process and […] revenue and expenditure are to be redistributed to promote gender equality.” Centre Hubertine Auclert, “La budgétisation sensible au genre,” Guide pratique, 2015 [available in French only]. International Monetary Fund, “Gender Budgeting in G7 Countries,” Policy Papers, 19 April 2017. Ronnie Downes et al., “Gender Budgeting in OECD Countries,” Organisation for Economic Co-operation and Development: Public Governance and Territorial Development Directorate, 2017. Author: Laura Munn-Rivard, Library of Parliament
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9659019112586975, "language": "en", "url": "https://peakoil.blogspot.com/2008/06/oil-shortage-myth-says-industry-insider.html", "token_count": 752, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": 0.259765625, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:85e8c9f3-8c60-442e-a69a-1de94b06c283>" }
Oil shortage a myth, says industry insider By Steve Connor, Science Editor There is more than twice as much oil in the ground as major producers say, according to a former industry adviser who claims there is widespread misunderstanding of the way proven reserves are calculated. Although it is widely assumed that the world has reached a point where oil production has peaked and proven reserves have sunk to roughly half of original amounts, this idea is based on flawed thinking, said Richard Pike, a former oil industry man who is now chief executive of the Royal Society of Chemistry. Current estimates suggest there are 1,200 billion barrels of proven global reserves, but the industry's internal figures suggest this amounts to less than half of what actually exists. The misconception has helped boost oil prices to an all-time high, sending jitters through the market and prompting calls for oil-producing nations to increase supply to push down costs. Flying into Japan for a summit two days after prices reached a record $139 a barrel, energy ministers from the G8 countries yesterday discussed an action plan to ease the crisis. Explaining why the published estimates of proven global reserves are less than half the true amount, Dr Pike said there was anecdotal evidence that big oil producers were glad to go along with under-reporting of proven reserves to help maintain oil's high price. "Part of the oil industry is perfectly familiar with the way oil reserves are underestimated, but the decision makers in both the companies and the countries are not exposed to the reasons why proven oil reserves are bigger than they are said to be," he said. Dr Pike's assessment does not include unexplored oilfields, those yet to be discovered or those deemed too uneconomic to exploit. The environmental implications of his analysis, based on more than 30 years inside the industry, will alarm environmentalists who have exploited the concept of peak oil to press the urgency of the need to find greener alternatives. "The bad news is that by underestimating proven oil reserves we have been lulled into a false sense of security in terms of environmental issues, because it suggests we will have to find alternatives to fossil fuels in a few decades," said Dr Pike. "We should not be surprised if oil dominates well into the twenty-second century. It highlights a major error in energy and environmental planning – we are dramatically underestimating the challenge facing us," he said. Proven oil reserves are likely to be far larger than reported because of the way the capacity of oilfields is estimated and how those estimates are added to form the proven reserves of a company or a country. Companies add the estimated capacity of oil fields in a simple arithmetic manner to get proven oil reserves. This gives a deliberately conservative total deemed suitable for shareholders who do not want proven reserves hyped, Dr Pike said. However, mathematically it is more accurate to add the proven oil capacity of individual fields in a probabilistic manner based on the bell-shaped statistical curve used to estimate the proven, probable and possible reserves of each field. This way, the final capacity is typically more than twice that of simple, arithmetic addition, Dr Pike said. "The same also goes for natural gas because these fields are being estimated in much the same way. The world is understating the environmental challenge and appears unprepared for the difficult compromises that will have to be made." 
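To make Dr Pike's statistical point concrete, here is a hedged Monte Carlo sketch in Python. It is our own stylized reconstruction of the argument, not his actual model: the number of fields, the size distributions and the use of a normal curve are all hypothetical assumptions. What it illustrates is that summing each field's conservative low-end ("proven", roughly P90) estimate understates the portfolio total, because it is very unlikely that every field simultaneously comes in at its low end.

```python
# Sum-of-P90s versus portfolio P90 (all figures hypothetical).
import random
random.seed(0)

N_FIELDS, N_TRIALS = 20, 20_000
MEAN, SD = 100.0, 40.0  # assumed per-field reserve distribution

def p90(samples):
    """Value exceeded with 90% probability (the 10th percentile)."""
    return sorted(samples)[int(0.10 * len(samples))]

per_field = [[max(0.0, random.gauss(MEAN, SD)) for _ in range(N_TRIALS)]
             for _ in range(N_FIELDS)]

sum_of_p90s = sum(p90(field) for field in per_field)            # "arithmetic"
totals = [sum(field[i] for field in per_field) for i in range(N_TRIALS)]
portfolio_p90 = p90(totals)                                     # "probabilistic"

print(round(sum_of_p90s), round(portfolio_p90))
# The probabilistic portfolio P90 comes out well above the sum of the
# individual P90s, in the direction of Dr Pike's claim; the exact ratio
# depends entirely on the distributions assumed.
```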
Jeremy Leggett, author of Half Gone, a book on peak oil, is not convinced that Dr Pike is right. "The flow rates from the existing projects are the key. Capacity coming on stream falls fast beyond 2011," Dr Leggett said. "On top of that, if the big old fields begin collapsing, the descent in supply will hit the world very hard."
{ "dump": "CC-MAIN-2020-29", "language_score": 0.9595437049865723, "language": "en", "url": "https://premium-quality-essays.com/essays/business/finance-and-accounting-in-business.html", "token_count": 1884, "fin_int_score": 3, "fin_score_model": "en_fin_v0.1", "risk_score": -0.0252685546875, "risk_score_model": "en_risk_v0.1", "id": "<urn:uuid:0fa50bdb-3fc0-4393-9443-1f836418412e>" }
Accounting and Finance in Business Processes

The main task of modern business administration is to manage the company successfully using new and creative approaches, some of which are extremely interesting and challenging to investigate. Niccolo Machiavelli, an Italian statesman, writer and political scientist, once said: "He who has not first laid his foundations may be able with great ability to lay them afterwards, but they will be laid with trouble to the architect and danger to the building." According to Niven, these words give the best explanation of the nature of the Balanced Scorecard, which likewise needs a strong foundation. Recent research suggests that approximately 80% of the value created in the modern world is generated by knowledgeable and talented people. There are three main components for the successful implementation of the learning and growth perspective in business administration: human capital, information capital and organizational capital; undoubtedly, their proper combination will help a company reach high standards. Nevertheless, it is quite difficult to estimate the contribution of these integral parts to the development of an enterprise. The founders of the Balanced Scorecard, Kaplan and Norton, have dedicated much of their scientific work to this problem. Another prominent figure, Susan Johnson, is also fully devoted to the learning and growth perspective and builds her business on these principles. Johnson believes that it is not solely modern equipment that contributes to the success of an organisation, but the great people who work for it. She is deeply convinced that the "learning, innovation and growth quadrant is the most significant dimension in the Balanced Scorecard". Therefore, people who are respected, trusted and talented should be the foundation of the enterprise. According to the results and statistical data, her choice brings benefits and prosperity to her company.

We should also remember such a weighty matter as accounting, which is of immense importance to business success. We shall now analyse it in detail. Accounting is a complex and dynamic field practised in every organization to manage specific business processes. Every accountant is generally responsible for the company's financial operations. Because the field is flexible and complicated, accounting encompasses both financial transactions and decision making. In smaller organizations, accountants usually play the role of data-entry managers, whereas huge enterprises need accountants to deal with third-party contracts, customers and financial institutions. Generally, the functions of an accountant vary with the size of the organization. Sometimes one meets an accountant who works as an independent financial professional; at other times, accountants perform the role of legal compliance officers in accordance with the law. In the manufacturing industry, an accountant is usually in charge of cost accounting and assists the manufacturing process. Moreover, accountants are responsible for keeping proper records and classifying transactions accurately. At the same time, accountants have to deliver the results of their work on time and in a form that is easy to understand.
Such results include avoiding interest and penalties, obtaining good-payer discounts, and paying the correct sums of money. The healthy functioning of each company depends on accounting because it reflects the company's efficiency and value on the market. Modern accounting differs from the outdated image of the profession, in which accountants sat in their offices with piles of paperwork to be done immediately. Nowadays, accounting is directly linked to IT: software helps accountants keep their records and store them in databases. Modern accountants have to report financial information in a form that can support weighty management decisions. Moreover, professional accountants are usually engaged in tax management and total quality management, where their roles are fundamental.

Having clarified the role of the accountant, we should also consider the main objectives of accounting. The first is to keep systematic records of transactions. Here, accountants have to refresh the financial information in their databases at least every three days; the objective implies that accountants should not rely on "human memory" but should add financial details promptly. The second objective can be formulated as the protection of business property, which implies that an accountant has to supply the necessary information to the proprietor and the manager. Thirdly, an accountant has to ascertain the operational profit or loss. This is done through the electronic database that stores the records of revenues and expenses: the modern accountant has to perform the analysis, make the calculations, and establish whether the company earned a profit or suffered a loss. This is usually quite challenging, as the accountant has to be psychologically ready to face such situations and find the appropriate outcome.

Additionally, the accountant has to deal with broader business objectives. He has to ascertain the financial position of the business: a businessman has to know his financial position precisely, and to serve this objective accountants have to pay attention to the balance sheet and income statements. Finally, an accountant is not isolated from the rest of the employees, as he has to interact with all the other departments of the company. Through this interaction, an accountant is expected to facilitate rational decision making by introducing new ideas to the managers, who will then consider them when managing the processes within their respective departments. Indeed, accounting is now seen as a complex discipline that can aid business development; more precisely, accountants have to handle the collection, analysis and reporting of information to the required levels of authority.

Apart from the main objectives, the accountant has to perform some additional functions. Timing is influential in this field, as an accountant uses information over time to display the general picture of performance. The evaluation of such systems requires profound analysis, and the results are usually used to improve performance indicators. Some sources contain additional information on business accounting; for example, scholarly articles show that the legal factor plays a significant role here: when presenting financial data, an accountant has to disclose information in accordance with government requirements and the norms of state legislation.
As accounting is a complex field, we have to clarify the respective terminology to form a clear view of accounting as a business discipline. A young accountant joining a company will certainly face some difficulties during the first several years: accounting terminology is very specific, and many people are afraid of it at first, but practice makes perfect and accounting becomes a rather accessible field.

In most cases, an accountant deals with debits and credits. These terms apply to every single transaction the accountant records. A debit is generally considered to be a transaction of value "added" to an account; with a credit we have the reverse, a transaction of value "removed" from the account. To be more precise, in our checking accounts a deposit serves as a debit, whereas a check written against the account is a credit. (A minimal double-entry sketch of this mechanism appears after this article's conclusion.)

One more significant accounting term is the account. Accounts are established to provide records for individual business transactions, and they are listed in the general ledger. Nowadays, such ledgers are integrated into software and can be used by any PC user; the general ledger is the central source for storing all accounts. We should also mention journal entries, which record the posting of financial transactions to particular accounts.

The accounting system uses many different types of accounts; let us look at them in detail. Assets are accounts that add value to individual or business worth. Liabilities, by contrast, are accounts that remove value from business worth. Individual money contributions and investments in a business are referred to as "equity". Revenue records the sums of money that form the overall income, while expense accounts are used to keep track of business losses and expenditures.

As record-keeping is a pervasive function, all records need to be stored electronically in one database. This accelerates management processes and supports a healthy working environment. It is sensible to implement accounting systems because they allow accountants to track information on every transaction and access it easily with a few clicks. Moreover, accounting systems make it possible to check and balance transactions and give a clear picture of the company's performance. As a result of performing these duties, every accountant has to be ready to compile transactions into balance sheets, profit and loss reports, cash flow statements, invoices and so on.

In conclusion, accounting is a beneficial discipline, and no company can exist without it. Accounting provides stability: if records are systematically controlled, stability is ensured. In a professional environment, databases can be used to facilitate the accountant's work. Moreover, accounting helps people evaluate the profitability and efficiency of their business, and the introduction of accounting software accelerates the management process and provides a clear financial picture for the company.
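To tie the debit/credit terminology above together, here is a minimal double-entry sketch in Python. It is purely illustrative and of our own construction: real charts of accounts assign each account type a normal balance, whereas this toy uses signed balances so that the ledger's invariant (debits and credits always net to zero) is easy to see.

```python
# Toy double-entry ledger: every journal entry posts one debit and
# one equal credit, so the signed balances always sum to zero.
from collections import defaultdict

ledger = defaultdict(float)  # account name -> signed balance

def post(debit_acct: str, credit_acct: str, amount: float) -> None:
    """Record one journal entry: value 'added' to the debit account
    and 'removed' from the credit account, as described above."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    ledger[debit_acct] += amount
    ledger[credit_acct] -= amount

post("Cash", "Equity", 1_000.0)   # owner's contribution
post("Expenses", "Cash", 200.0)   # paying a supplier
post("Cash", "Revenue", 350.0)    # a customer settles an invoice

print(dict(ledger))
print(sum(ledger.values()) == 0.0)  # the books always balance
```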