Sunday, March 31, 2019
Impact of the Industrial Revolution on Architecture
Question 1: Consider the impact of the Industrial Revolution on nineteenth-century architecture. Your answer should explore the speed with which buildings could be constructed, as well as the new demands being made upon architecture.

The nineteenth century brought an age of uncertainty. The confidence apparent in the elegant architecture of the eighteenth century had diminished, rejecting casualness and polychromy, and architecture was subjected to a period of architectural eclecticism. This sought-after style allowed elements to be retained from previous historic precedents, returning to the style of Michelangelo and others, whilst creating something new and original, forming the Neo-Classical and Neo-Gothic styles. This ability to create a fusion of styles allowed for expression devised through creation rather than recollection, with each style usually chosen based on its aptness to the project and its overall aesthetic value, seeking to restore order and restraint to architecture.

Another influence can be traced from the Industrial Revolution, a time of rapid change, dramatic variation and experimentation. With changes in manufacturing, transport and technology, there were profound consequences for social, political, economic and cultural conditions. The urban population radically increased, with cities multiplying in size and number. The consequence for these new expanding cities was massive overcrowding. Factory owners were required to provide a huge quantity of cheap houses, resulting in densely packed dwellings constructed to a low standard. The expansion of mass industry brought the potential of new building technologies such as cast iron, steel and glass, with which architects and engineers devised structures previously unreached in scale, size and form. Consequently, materials could be mass-produced quickly and inexpensively, applying not only to bricks but also to iron columns, glass panels and the like, meaning structures of all types could be constructed quicker and cheaper than ever before. This generated a new potential for standardised buildings, created from identical factory components, which could be mass-produced, improving the efficiency of construction time but not necessarily the quality.

Through the rise of the revolution, architecture was suddenly exposed to a magnitude of new construction methods. Structures consisting of metal columns and beams no longer needed walls for structural support, glass could be fashioned in larger sizes and volumes, and dense structures could be replaced by skeleton structures, making it possible to reach previously restricted heights and widths very quickly, using pre-fabricated elements. However, this new architecture lacked imagination and style, as the focus was cast towards functionality. An example of this new technology was the Crystal Palace, 1851. It was a glass and iron showpiece, with pre-fabricated parts that could be mass-produced and erected rapidly. This dazzled the millions of visitors passing through its doors, as it stood in stark contrast to previous massive stone construction. The Crystal Palace became a foundation for modern architecture; its transparency signified a sense of no boundaries.

Question 2: Chart the key characteristics of the Art Nouveau presence in architecture.
To what extent was this movement influential in the move towards global Modernism?

The architectural style of Art Nouveau first arose in Europe, producing its most creative phase between 1893 and 1905. Art Nouveau rebelled against previous Greek and Roman precepts, rejecting the strict and formal ideals which had been prevalent during much of the nineteenth century. It was established on the amalgamation of formal inspiration from the English Arts and Crafts movement, the structural emphasis of French Rationalism, and stylistic abstraction from nature, which was perceived as the best source of stimulation and aesthetic principles. Architects found their inspiration in expressive organic forms that emphasised humanity's earthly ambition, with dominant ornate embellishments, curvilinear forms, and design motifs based on stylised plants and flowers. Art Nouveau architecture can be identified by specific elements and distinguishing factors which led to ubiquitous cultural impulses appearing throughout its lifetime; however, there is no single definition or meaning behind it.

The style originated as a reaction to a realm of art dominated by the precise geometrical compositions of Neo-Classical ideals. In search of a new design language, concepts evolved away from the historical and classical restraints employed by previous academics and current precedents. Instead, designs were characterised by graceful, sinuous lines of irregular direction, which were rarely angular. These were accompanied by vigorous curves and rhythmic patterns of curved, flowing lines that embellished plain items, such as entrances and cast columns. The philosophy of Art Nouveau was the application of delicate beauty to commonplace objects, so that beautiful objects would be accessible to all. No entity was too utilitarian to be beautified; the style was evident not only in external architecture but in interior ornaments as well. The inclination led towards organic subject matter: flowers, leaves, vines and other organic images embellished architecture, with each characteristic obtaining a different appearance: a doorknocker moulded to look like a dragonfly, birds etched into window frames, abstract lilies drifting around stairwell banisters. The style embraced a variety of stylistic interpretations, some architects opting for new affordable materials with the ambition of mass production, whilst others used more expensive materials, valuing high craftsmanship.

A variety of movements continued to explore integrated organic design, including De Stijl and the Bauhaus school; however, Art Nouveau itself soon declined. Art Nouveau constituted a major step towards the intellectual and stylistic innovation of modern architecture, breaking the trend of looking backwards, emphasising function over form and the elimination of superfluous adornment. Its stylistic elements progressed into the simpler, rationalised forms of Modernism. The underlying Art Nouveau concept of a thoroughly integrated environment remains a profound element of contemporary Modernism today.

Question 3: With reference to examples of his built work, explore Le Corbusier's Five Points of a New Architecture.

Le Corbusier's first principle looks at the arrangement of structural support; it suggests that a distinction can immediately be made between elements;
hence, supporting walls can be replaced by a grid of columns, set out at specific, equal intervals, that bears the structural load. By elevating the ground floor, the building is removed from the damp ground and is exposed to light and air; consequently, the landscape can continue to flow beneath whilst the building gains additional flat roof space.

The second principle identifies the need for the flat roof to be utilised for a domestic purpose such as a roof terrace or garden, meaning that space lost in built-up areas can be recovered. This area can display luxurious organic vegetation; however, it also serves a structural purpose, providing essential protection to the concrete roof. Rain can now be controlled, flowing off gradually down drain pipes concealed within the interior of the building.

The third principle states that, due to circumstances made clear in the first principle, interior walls can now be placed wherever required, each floor being entirely independent of the next. The absence of supporting walls allows unrestrained freedom within the internal design. The fourth principle dictates that the façade can be relieved of its structural function, allowing freedom of design separated from the exterior. By projecting the floor beyond its system of structural supports, the whole façade is extended, losing its supportive quality; the façade is therefore free from restrictions. The fifth principle determines that the façade can be cut with horizontal windows running its entire length, extending from support to support. These rectangular openings allow overflowing amounts of light and air, achieving evenly lit rooms of maximum illumination and hence removing the need for vertical windows.

We can trace the development of these principles through some of his built work, first with his experimentation with the Maison Citrohan, 1922. Through numerous prototypes Le Corbusier plays with introducing these distinctive features. The Villa Stein, 1926, is the first full exemplification of the principles. Built around a strict grid of structural columns, the villa features an open-plan layout with a roof terrace protected by screens. The concrete structure carries strips of ribbon windows; however, the land beneath has been fully consumed by the villa. The Villa Savoye, 1929, visibly embodies all five points of the new aesthetic. The bulk of the structure is supported above the ground by slender reinforced concrete stilts. The house contains an open floor plan that culminates in a roof garden, compensating for the green space lost beneath. Finally, the clean white façade embodies the distinctive ribbon windows that allow unobstructed views.
Differences Between MIPv4 and MIPv6
With the fast growth in the number of mobile and handheld devices connected to the internet, the current IPv4 protocol is not able to cover this growing number of IP addresses. This is why the Internet Protocol version 6 (IPv6) has been developed. Mobile IPv6 is an essential, mandatory feature of IPv6 that has been built to enable mobility for mobile devices in IP networks. The Mobile IPv6 specification is still incomplete, so the protocol will most likely see a few changes in the future. Security is an essential part of Mobile IPv6; it will be discussed in detail in this chapter.

In addition to the mobility feature of Mobile IPv6, IPSec is also a mandatory feature required by IPv6 to provide data security services for communication in IP networks and for the application layer protocols of TCP/IP. IPSec is used to protect Mobile IPv6 from security threats; however, there are still some issues that need to be solved.

6.1 Differences between MIPv4 and MIPv6

MIPv6 is the next-generation standard for Mobile IP after MIPv4. The following are the main differences between MIPv4 and MIPv6:

Foreign agent: MIPv6 relies on a DHCP (Dynamic Host Configuration Protocol) server or on router advertisements on the foreign network to get a care-of address (CoA). This scenario allows the mobile device to operate in any place without requiring additional support from the local router, because it does not depend on a foreign agent to issue the care-of address as in MIPv4.

Home agent address discovery: IPv6 has a feature called anycast that sends data to the nearest or best receiver. With this feature a mobile device can send updates to the home agent anycast address. In this case, if there are multiple home agents on the network, the nearest home agent will send the response to the mobile device. Through this feature, scalability and redundancy can be provided to the network by keeping track of several home agents.

Security: Both MIPv6 and MIPv4 provide data security by using a Virtual Private Network (VPN) solution. Once the mobile device travels outside its home network and connects to a foreign network, MIPv4 uses IPSec v4 (Internet Protocol Security) and a VPN solution, while MIPv6 uses IPSec v6 and a VPN solution.

Route optimization: When the mobile device leaves its own network and connects to another network, it gets a new care-of address and then informs the home agent of this address; the home agent records the new care-of address in its binding table. MIPv6 has a direct routing feature that routes packets between the mobile device and the correspondent nodes on the IPv6 network. All packets bound for the mobile device's home address will be intercepted by the home agent and tunneled to its care-of address. In the case of MIPv4, traffic between the correspondent node and the mobile device must go through the home agent, but in the case of MIPv6 the correspondent node caches the care-of address by using route optimization and then transfers the packets directly to the mobile device, as shown in Figure 1.

Figure 1: Route Optimization in MIPv6

6.2 Mobile IPv6 Security Threats

Mobile IPv6 has been developed to provide mobility and security for IPv6, comparably to MIPv4. MIPv6 introduces different security threats, as follows [3]:
1. Threats against Binding Updates sent to home agents: an attacker might claim that a certain mobile device is currently at a different location than it really is. If the home agent accepts the information sent to it as is, the mobile device might not get traffic destined for it, and other nodes might get traffic they did not want.

2. Threats against route optimization with correspondent nodes: A malicious mobile device might lie about its home address. A malicious mobile device might send a correspondent node binding updates in which the home address is set to the address of another node, the victim. If the correspondent node accepted this invalid binding update, then communications between the correspondent node and the victim would be disrupted, because packets that the correspondent node intended to send to the victim would be sent to the wrong care-of address. This is a threat to confidentiality as well as availability, because an attacker might redirect packets meant for another node to itself in order to learn the content of those packets.

A malicious mobile device might lie about its care-of address. A malicious mobile device might send a correspondent node binding updates in which the care-of address is set to the address of a victim node or an address within a victim network. If the correspondent node accepted this invalid binding update, then the malicious mobile could trick the correspondent into sending data to the victim node or the victim network; the correspondent's replies to messages sent by the malicious mobile will be sent to the victim host or network. This could be used to cause a distributed denial-of-service attack: the malicious mobile could trick a large number of servers so that they all send a large amount of data to the same victim node or network.

A malicious node might also send a large number of invalid binding updates to a victim correspondent node. If each invalid binding update took a significant amount of resources (such as CPU) to process before it could be recognized as invalid, then it might be possible to cause a denial-of-service attack by sending the correspondent so many invalid binding updates that it has no resources left for other tasks.

An attacker might also replay an old binding update. An attacker might attempt to disrupt a mobile device's communications by replaying a binding update that the node had sent earlier. If the old binding update was accepted, packets destined for the mobile node would be sent to its old location and not its current location.

3. Threats where MIPv6 correspondent node functionality is used to launch reflection attacks against other parties: The Home Address option can be used to direct response traffic against a node whose IP address appears in the option, without giving ingress filtering a chance to catch the forged return address.

4. Threats where the tunnels between the mobile device and the home agent are attacked to make it appear as if the mobile node is sending traffic while it is not.

5. Threats where the IPv6 Routing Header, which is employed in MIPv6, is used to circumvent IP-address-based rules in firewalls or to reflect traffic from other nodes. The generality of the Routing Header allows the kind of usage that opens vulnerabilities, even if the usage that MIPv6 needs is safe.

6. The security mechanisms of MIPv6 may also be attacked themselves, e.g.
in order to force the participants to execute expensive cryptographic operations or to reserve memory for the purpose of keeping state.

Most of the above threats are concerned with denial of service. Some of the threats also open up possibilities for man-in-the-middle, hijacking, and impersonation attacks.

6.3 Securing the Binding Update

MIPv6 is a host routing protocol, developed to modify the normal routing for a specific host, as it changes the way packets are sent to the host [4]. The binding update tells a correspondent node the new care-of address; the correspondent node authenticates the binding update, verifying that it does not come from a manipulated node. In order to successfully authenticate the update, the mobile device and the correspondent node need to establish a security association and share a secret key.

IPSec in transport mode is used between a home agent and its mobile device in order to secure MIPv6 messages such as the binding update.

6.4 Summary

Mobile IP is used to maintain communications while the IP address is changing. Mobile IPv6 is more optimized and deployable than Mobile IPv4, for example through direct communication between the correspondent node and the mobile device. Even though Mobile IPv6 is still incomplete, there remain issues with the security of the protocol.
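To make the binding update protection described in section 6.3 concrete, the following Python sketch shows how a mobile device and a correspondent node could authenticate a binding update with a shared secret key using an HMAC, and how a binding cache plus a sequence number defend against the forgery and replay threats listed in section 6.2. This is a simplified illustration under assumed message fields and helper names, not the actual MIPv6 wire format or the return routability procedure; in the real protocol the key would come from a security association rather than being pre-shared.

```python
# Illustrative sketch: authenticating a MIPv6-style binding update with a
# shared secret key (HMAC). Field names and layout are simplified
# assumptions, not the real MIPv6 wire format.
import hashlib
import hmac

def make_binding_update(home_address: str, care_of_address: str,
                        sequence: int, key: bytes) -> dict:
    """Mobile device side: build a binding update and sign it."""
    message = f"{home_address}|{care_of_address}|{sequence}".encode()
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    return {"home": home_address, "coa": care_of_address,
            "seq": sequence, "auth": tag}

def accept_binding_update(update: dict, key: bytes,
                          binding_cache: dict, last_seq: dict) -> bool:
    """Correspondent node side: verify the tag, reject replays, then
    record the new care-of address in the binding cache."""
    message = f"{update['home']}|{update['coa']}|{update['seq']}".encode()
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, update["auth"]):
        return False                      # forged or corrupted update
    if update["seq"] <= last_seq.get(update["home"], -1):
        return False                      # replayed (old) binding update
    last_seq[update["home"]] = update["seq"]
    binding_cache[update["home"]] = update["coa"]  # route optimization
    return True

# Usage: packets for the home address can now go straight to the CoA.
key = b"shared-secret-from-security-association"
cache, seqs = {}, {}
bu = make_binding_update("2001:db8:home::1", "2001:db8:visited::99", 1, key)
assert accept_binding_update(bu, key, cache, seqs)
assert not accept_binding_update(bu, key, cache, seqs)  # replay rejected
```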
Saturday, March 30, 2019
Human sexual reproduction
Introduction

In human sexual reproduction, the males always produce sperm and the females produce ova. Generally, the sperm and the ova are what are referred to as the gametes. These gametes contain chromosomes, which are coiled threads of DNA and protein found in the nucleus of the cells. A chromosome is what carries the hereditary information of an individual and consists of densely packed, coiled-up chromatin.

Sperm and the Baby's Sex

The pairing of chromosomes is responsible for the different sexes evident in humans and widely across most animal species. Genetically, sperm contain X and Y chromosomes while the ovum contains the X chromosome alone. An individual with both the Y and X chromosome is referred to as the male, while an individual with only X chromosomes is the female. During normal fertilization a male always contributes one chromosome while the female contributes the other chromosome; together they will form an individual. If the male contributes a Y chromosome then the resulting sex will be male, since the final set will be XY. However, if the male contributes an X chromosome the resulting set will be XX and consequently a female. A female in all cases contributes an X chromosome. This means the sperm is of importance, since it will contribute the all-important Y chromosome needed to make the child male.

Possible Complicating Factors

Although meiosis is a precise mechanism that separates the two sex chromosomes of a diploid cell into single chromosomes in haploid gamete cells, errors sometimes do take place. Nondisjunction is one of the commonest errors. Nondisjunction is the failure of chromosomes to separate properly during one of the stages of meiosis. This nondisjunction error can produce gametes that contain two sex chromosomes or no sex chromosome. Lack of sex chromosomes, or having two sex chromosomes, is a direct contrast to the normal condition of one sex chromosome. When either of these gametes joins with a normal gamete during fertilization, the result is a person with an abnormal number of sex chromosomes. This leads to a variety of disorders.

The most common disorders are Turner syndrome and Klinefelter syndrome. Victims of Turner syndrome are female in appearance but their female genital organs do not develop at puberty; they are also sterile. Turner syndrome is abbreviated as 45,X or 45,X0, where 0 denotes the absence of the second sex chromosome. People with Klinefelter syndrome are male in appearance and they, too, are unable to father children. Klinefelter syndrome is abbreviated as 47,XXY. All babies must have an X chromosome, for it contains a number of genes that are vital for normal human development. Other disorders, though not very common, which are a result of nondisjunction are Down syndrome, Edwards syndrome, Patau syndrome, triple X syndrome and XYY syndrome. Triple X syndrome is a result of an extra X chromosome in females, whereas XYY syndrome is a result of an extra Y chromosome in males. Victims of Edwards syndrome usually experience abnormal development of body organs such as the kidneys, intestines and heart.

Conclusion

An X chromosome is absolutely essential for survival. Sex seems to be determined by the presence or absence of a Y chromosome and not by the number of X chromosomes. An example is the evidence of reported cases of people who have the genotypes 48,XXXY and 49,XXXXY and are male in appearance.
The Y chromosome contains a gene that switches on the male pattern of growth during embryological development. If this gene is absent, the embryo follows a female pattern of growth.
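The chromosome logic described above can be summarised in a short simulation. The following Python sketch is an illustration only, with simplified assumptions (a fixed 44 autosomes, sex chromosomes given as strings); it combines the sex chromosomes contributed by each gamete and shows how a nondisjunction gamete produces karyotypes such as 45,X (Turner syndrome) or 47,XXY (Klinefelter syndrome):

```python
# Illustrative sketch of the chromosome pairing described above: the ovum
# normally contributes X; the sperm's X or Y decides the sex, and sex is
# decided by the presence of Y, not by the number of X chromosomes.
def offspring(egg: str, sperm: str) -> str:
    """Combine the sex chromosomes from each gamete into a karyotype."""
    chromosomes = "".join(sorted(egg + sperm))
    count = 44 + len(chromosomes)   # 44 autosomes assumed normal here
    sex = "male" if "Y" in chromosomes else "female"
    return f"{count},{chromosomes} ({sex})"

print(offspring("X", "X"))   # 46,XX (female): normal fertilization
print(offspring("X", "Y"))   # 46,XY (male): normal fertilization
print(offspring("X", ""))    # 45,X (female): Turner syndrome
print(offspring("X", "XY"))  # 47,XXY (male): Klinefelter syndrome
```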
Friday, March 29, 2019
Risk Factors of Asthma
Table of Contents (Jump to)

Introduction
Thesis/Hypothesis
Motivation
Background Information
Terminology
Arguments
Argument 1
Gender and Age
Allergy and Asthma
Genetics and Asthma
Argument 2
Smoking and Asthma
Overweight and Asthma
Argument 3
Poverty and Asthma
Air Pollution and Asthma
Climate and Asthma
Conclusion
Limitations
Works Cited
Appendix

Introduction

Asthma has become one of the common diseases of today; about one in 15 people in the world has asthma. Asthma is a chronic condition whose symptoms are attacks of wheezing, breathlessness, chest tightness, and coughing. There is no cure for asthma, yet most people can control the condition and lead normal, active lives. In this report we will be investigating the risk factors of asthma and will also be gathering a variety of data and graphs.

Thesis/Hypothesis

The probability of developing asthma is influenced by uncontrollable factors such as age, sex or having other respiratory diseases; however, the living habits of patients as well as the environment around them are also important factors that affect the severity or the chance of having asthma.

Motivation

The main reason why we chose asthma as our topic is that asthma has become a common disease in our lives. There are millions of asthma sufferers around the world and many of them are around us. We believe that by exploring the risk factors and the things that can cause asthma, like age and sex, personal habits and environment, we can know more about the disease and how to prevent it.

Background Information

What causes asthma? Asthma symptoms develop when the airways become inflamed as a result of exposure to specific environmental triggers, such as dust, smoke or even exercise. However, not all individuals exposed to the same triggers develop asthma symptoms; the main reason for this is that some individuals are genetically more likely to.

Asthma is becoming a more and more common disease in the world. According to the Centers for Disease Control, 1 in 13 people have asthma. Also, there is no way to cure asthma, but there are some ways that can calm it down.

Terminology

Asthma: a disease of the airways or branches of the lung (bronchial tubes) that carry air in and out of the lungs. Asthma causes the airways to narrow, the lining of the airways to swell and the cells that line the airways to produce more mucus. These changes make breathing difficult and cause a feeling of not getting enough air into the lungs. Common symptoms include cough, shortness of breath, wheezing, chest tightness, and excess mucus production.

Asthma rate/asthma prevalence: the percentage or number of people that have asthma.

Allergy: an exaggerated response to a substance or condition produced by the release of histamine or histamine-like substances by affected cells.

Air pollution: the condition in which air is contaminated by foreign substances, or the substances themselves.

BMI: BMI stands for body mass index; it is used to indicate a person's fatness.
BMI is calculated using the person's height and weight.

Chronic disease: a disease that can be controlled, but not cured.

Arguments

Argument 1: Age and sex (other allergies)

Age and sex are uncontrollable factors that will have an effect on a person's risk of suffering from asthma, while having other respiratory diseases can further increase the patient's chance of developing asthma.

Asthma is a chronic disease that a large part of the population suffers from. Like many other diseases, the chance of developing asthma differs according to the age group the person belongs to. On the other hand, the risk of having asthma for males and females is also different. This is because the condition of each person's airway will change according to their age and sex. Moreover, because asthma develops from airway inflammation, any other disease that is related to the respiratory system will also greatly increase the risk of developing asthma.

Argument 2: Personal habits (smoking, obesity)

Asthma can also be acquired non-genetically due to personal habits such as smoking or being overweight. Some asthma patients inherited it from blood relatives, but the chance of developing asthma and the severity of the disease can be controlled by a person. For instance, smoking can damage the entire respiratory system and greatly reduce lung function; it can narrow the airway, which will increase the risk of asthma. Also, good personal habits and lifestyle can keep your body in good shape and keep the airway patent, hence avoiding asthma.

Argument 3: Environment

Living in a poor environment that has poor air quality, or being constantly exposed to allergens and dusts, can also cause asthma and even make it deteriorate.

Asthma is a long-term disease of a person's airway and lungs; therefore, although the direct cause of asthma is still not completely clear, breathing in dust such as wood dust or chemicals for a long period of time will affect the health of your respiratory system and thus increase the chance of asthma developing.

Argument 1

Gender and age

Asthma is a disease that affects people from all age groups. However, the chance of having asthma differs depending on which age group one is in.

Figure 1.1.1: Graph of asthma prevalence and average humidity

From Figure 1.1.1 above we are able to see that asthma very usually develops before 30 years old, primarily in childhood and early adulthood. Adults in their middle age will be less susceptible to asthma, and then the percentage of asthma patients slightly increases as old age is reached. Moreover, based on the data from Statistics Canada, there are more male asthma sufferers than female. The average difference between females and males is 2.16, which means males have a 2.16% higher chance of having asthma. That can be a huge difference when dealing with the population of the entire country.

Figure 1.1.2

Figure 1.1.2 is based on the data from the CDC and it shows the asthma prevalence of age groups below 12 years old. In this graph we are able to see that although very few infants have asthma, 16 to 17% of the entire population around 5 to 24 years old suffers from asthma; that means there are 3 asthma patients in every 20 people. Similar to the data from Statistics Canada, the graph shows that people in their early years are more likely to suffer from asthma.
This might be due to the incomplete development of the lungs and airway, so the airway is more sensitive, vulnerable and easily infected, as it is again during old age.

Figure 1.1.3

Figure 1.1.3 above displays the asthma prevalence of males and females. Among males, the asthma rate is more than 5% higher than among females; it might be due to males' respiratory systems' slower growth. Meanwhile, among females, the asthma rate is higher after puberty. This can be because males' airways become a lot tougher after puberty.

To conclude, although uncontrollable, gender and age are risk factors that we should know about in order to prepare for and prevent asthma.

Allergy and asthma

Another uncontrollable risk factor of asthma would be allergies and other respiratory diseases.

Figure 1.2.1

Figure 1.2.1 above used data collected from Unionville High School. In the sample, there are 4 people who suffer from asthma among 31 people, which is nearly 13%; it matches the data from Statistics Canada. Among these 4 people, one of them owns a pet and two of them own pets and have other allergies. Over half of the asthma patients have other allergies and the majority of them own pets. It shows owning a pet might be a risk factor of asthma, and the reason behind it might be the fur that can cause inflammation in the airway. Moreover, there is a relationship between asthma and other types of allergies.

Figure 1.2.2

The result from another research study also supports this argument. Most people that suffer from other allergies or atopic disease are also asthma sufferers; therefore people with allergies are more likely to suffer from asthma.

Genetics and asthma

A very common way of having asthma is inheriting the disease from the family. This might be because the genes inherited from one's parents have similar characteristics; therefore asthma can be inherited.

Figure 1.3.1

Looking at Figure 1.3.1, based on a sample from Unionville High School, 3 asthma sufferers out of 4 have family members that suffer from asthma. It is clear that the number of asthma sufferers with siblings or parents that also have asthma is much higher than the number without. We can conclude that having a family member with asthma can raise the chance of developing asthma.

Argument 2

Other than those uncontrollable factors that are related to a person's age, sex and other diseases, one's everyday habits and activities can also affect the asthma rate. Two of these unhealthy habits that can increase one's chance of having asthma are smoking, and overeating or lack of exercise that can cause obesity.

Smoking and asthma

Asthma is a common respiratory disease in the world, and there are a few things that will make this disease worse; smoking is one of the major bad habits that worsen asthma. Smoking is the inhalation of the smoke of burning tobacco encased in cigarettes, pipes, and cigars. According to the CDC, research demonstrated that there is a higher rate of asthma if you smoke: about 17% of people in the US smoke, but about 21% of people with asthma smoke.

Figure 2.1.1

The graph above shows the people in different states in the United States, split between people with asthma who smoke and people without asthma who smoke. The result shows that there are more people who smoke with asthma than without asthma.

Overweight and asthma

Although asthma is a respiratory disease, one's body mass index (BMI) is also a factor in developing asthma and even an asthma attack.
BMI is calculated using one's height and weight. When a person's body mass index (BMI) is between 18.5 and 25, he will be considered normal weight; when the person's BMI is between 25 and 30 he will be considered overweight, and obese if the BMI is over 30. Graph 2.2.1 below shows the percentage of normal weight, overweight and obese adults suffering from asthma.

Figure 2.2.1

The graph shows the relationship between one's BMI and asthma. For both men and women, the chance of suffering from asthma for an obese person is significantly higher than for a normal weight person. For the obese group, the asthma prevalence is almost 50% higher than for the normal weight group. We can easily see that being overweight can greatly influence the prevalence of asthma: overweight people are more likely to suffer from asthma compared with others. This is because the extra weight on an overweight person will increase the pressure on his chest, so it will be harder for the person to breathe. On the other hand, inflammation, which obesity usually comes with, also contributes to increasing the risk of developing asthma. Inflammation around the wall of the airway will make it swollen and decrease the width of the airway, allowing less air to pass, and cause multiple asthma symptoms such as wheezing, coughing, and breathing difficulties. From the data, we are able to conclude that the hypothesis is true, and being obese will greatly increase one's chance of developing asthma.

Figure 2.2.2
https://nccd.cdc.gov/NPAO_DTM/IndicatorSummary.aspx?category=28&indicator=30

From Figure 2.2.2, we are able to see that, according to the CDC, more than 10% of America's population is obese; that means they will all have a much higher risk of having asthma than the rest, and they should be more careful while doing strenuous exercise.

Argument 3

While one's personal condition and habits change one's chance of developing asthma, like many other diseases, the environment around the person can also influence the incidence of the disease. This is because the environment can affect one's health condition slowly; therefore a good environment can decrease one's chance of suffering from diseases, and vice versa.

Poverty and asthma

One of the factors that can affect one's living environment is one's income. Typically, families that have more money will be able to afford living in a cleaner area, a more comfortable space and, overall, a better environment, while families with less income will not. Dust, small particles, molds, insects and many more types of allergens in these environments can trigger asthma, and being exposed to them will certainly be a factor that causes asthma.

Figure 3.1.2: Graph of asthma prevalence by income in 2014
https://www.cdc.gov/asthma/nhis/2014/table2-1.htm

The graph above shows the percentage of people suffering from asthma versus their poverty level in America. The federal poverty threshold in the graph is a measure of income for every person. The standard income for an American is calculated each year and is shown in the table below.

PERSONS IN FAMILY/HOUSEHOLD    POVERTY GUIDELINE
1                              $11,670
2                              $15,730
3                              $19,790
4                              $23,850
5                              $27,910
6                              $31,970
7                              $36,030
8                              $40,090

For families/households with more than 8 persons, add $4,060 for each additional person.
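As a worked illustration of the guideline table above, the short Python sketch below computes the 2014 federal poverty guideline for a household of a given size and the income-to-poverty ratio used to group families in the graphs. The function names are illustrative assumptions, not an official API; the dollar values are the ones listed in the table.

```python
# Sketch: 2014 federal poverty guideline lookup, using the table above.
# Helper names are illustrative, not from an official API.
GUIDELINES_2014 = [11670, 15730, 19790, 23850, 27910, 31970, 36030, 40090]

def poverty_guideline(household_size: int) -> int:
    """Return the 2014 guideline; add $4,060 per person beyond 8."""
    if household_size <= 8:
        return GUIDELINES_2014[household_size - 1]
    return GUIDELINES_2014[-1] + 4060 * (household_size - 8)

def income_to_poverty_ratio(income: float, household_size: int) -> float:
    """Ratio of family income to the poverty guideline (x-axis grouping)."""
    return income / poverty_guideline(household_size)

print(poverty_guideline(4))                          # 23850
print(round(income_to_poverty_ratio(30000, 4), 2))   # about 1.26
```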
From Figure 3.1.2 it is clear that there is a trend showing that fewer people suffer from asthma as their income increases. For instance, for people with income lower than the poverty guideline, the prevalence of asthma reached up to 10%.

Moreover, the area one lives in also matters.

Figure 3.1.3: Graph of the number of asthma patients versus the poverty rate in 2014
http://www2.census.gov/programs-surveys/cps/tables/time-series/historical-poverty-people/hstpov9.xls

Figure 3.1.3 above shows the poverty rate and asthma rate of the four regions of the United States. It also indicates that as the poverty rate of a certain region or area gets higher, so does the asthma rate. In general, families living in poverty live in less developed areas, which means that they might know less about how to cope with asthma and asthma prevention. These data support the relationship between poverty and asthma; therefore families that have less income are likely to have a higher risk of having asthma.

Air pollution and asthma

As mentioned, asthma is a type of long-term respiratory disease and it mainly affects the airway and trachea, which are part of the respiratory system. The airway is used as a tube to transfer air in and out of the lungs while breathing, so during this process the wall of the airway frequently makes contact with the outside air, which contains many pollutants. Unlike the skin that covers the human body, the unprotected airway wall can get infected and be irritated very easily; the inflamed airway will then cause coughing, wheezing and even asthma. Therefore if the environment isn't clean and the air quality isn't good, the chance of developing asthma increases.

Figure 3.2.1: Graph of emissions of sulfur oxides and the asthma rate from 2010 to 2014
Figure 3.2.2: Graph of emissions of nitrogen oxides and the asthma rate from 2010 to 2014
http://www.ec.gc.ca/indicateurs-indicators/default.asp?lang=en&n=E79F4C12-1

The two graphs above show the relationship between the emissions of several types of air pollutants, as a percentage of the 1990 level, and the percentage of people with asthma in Canada from 2010 to 2014. We are able to see that there is a trend: as the emissions of the two air pollutants decrease, the percentage of people with asthma also decreases. For example, sulfur dioxide, a form of sulfur oxide found in the lower atmosphere, is a harmful toxic chemical. The r of the trend line is about 0.703, which means there is a strong relationship between the emission of the chemical and the prevalence of asthma. In fact, according to the CDC, exposure to this type of chemical will affect one's lung health; asthma sufferers are more sensitive to even low concentrations of sulfur dioxide. On the other hand, there is also a correlation between the emission of nitrogen oxides and asthma prevalence. The r in graph 3.2.2 is 0.9; therefore it shows that the relationship with asthma prevalence is even stronger for nitrogen oxides. Research done by the WHO indicates that respiratory disease is more common in areas with higher nitrogen oxide concentrations. Therefore, inhaling either of these chemicals in the air can damage one's respiratory system and is one of the factors of asthma.

Figure 3.2.3
http://www.ec.gc.ca/indicateurs-indicators/default.asp?lang=en&n=E79F4C12-1

Luckily, from the graph above, we can see that the emissions of these toxic chemicals are decreasing. Thus we can predict that as the average air quality gets better in the future and the concentration of these chemicals decreases, the percentage of people with asthma should also slowly decrease.

Climate and asthma

Last but not least, the climate of each region also has an effect on the chance of developing asthma.
In fact, one of the factors that can affect one's respiratory system is humidity. Humidity is the amount of water in the air, and higher humidity means there is more water in the atmosphere. When the humidity gets higher, it becomes more difficult to breathe in this kind of atmosphere; but if the atmosphere is too dry, the airway will be inflamed due to lack of water, which will also be a factor that triggers asthma.

Figure 3.3.1: Graph of asthma prevalence and average humidity
https://www.cdc.gov/asthma/most_recent_data_states.htm
https://www.currentresults.com/Weather/US/humidity-by-state-in-summer.php

From this graph, we are able to see a U-shaped trend. Most states in the United States have similar average humidity and asthma rates. However, as the average humidity increases, more people suffer from asthma; and as the average humidity decreases below 40%, the number of asthma patients increases again. As a matter of fact, the World Health Organization claimed that 50% humidity is best for reducing the chance of developing asthma or an asthma attack. States such as New Jersey and Maryland, located on the east coast, have a higher average humidity of around 70% and have more asthma patients, while states near the west coast have a lower average humidity of around 60% and fewer people there have asthma. On the other hand, the states in the middle region have a low average humidity near 30% to 40%, and as expected the number of asthma patients goes up. Therefore the theory as stated above is shown to be true: humidity can affect one's chance of developing asthma or an asthma attack.

Conclusion

Limitations

Bias

There is sampling bias in the results of our survey; this is because we took our sample from Unionville High School. The population in the school is mainly Asian; therefore, although we narrowed our report to only North America, it can't represent the situation in North America precisely. Moreover, we used snowball sampling when sending out the survey, so the numbers of Asians and non-Asians in the result will be more unbalanced.

Not enough information

When we were doing our research we found out that there are not many raw data sets that we can use; many of them are directly made into graphs or figures without the raw data posted, and some of the data can't be found on the internet. Moreover, there are countries that do not make their data public. Therefore we decided to narrow our range down to only North America.
We also found that Statistics Canada and the CDC have a lot of open sources that we can work with.

Works Cited
https://aspe.hhs.gov/2014-poverty-guidelinestresholds
https://media.npr.org/assets/img/2015/12/24/asthma_widea7d6c21be70cdf982718f2e7bcb6f1f5d5b3cfdc.jpg?s=1400

Appendix

Table 1: Asthma, by age group and sex (percent)

                           2010   2011   2012   2013   2014
Total, 12 years and over    8.5    8.6    8.1    7.9    8.1
  Males                     7.1    7.4    6.8    6.9    7.0
  Females                   9.8    9.8    9.4    8.9    9.2
12 to 19 years             11.1   11.8   10.2   10.9    9.0
  Males                    11.4   11.1   10.6   11.1    8.3
  Females                  10.8   12.5    9.8   10.7    9.6
20 to 34 years              9.5    9.1    9.7    8.0    9.3
  Males                     8.5    7.9    8.7    7.4    8.2
  Females                  10.5   10.4   10.7    8.6   10.5
35 to 44 years              7.7    8.0    6.8    8.0    7.2
  Males                     6.3    7.2    5.7    7.3    5.5
  Females                   9.1    8.7    8.0    8.7    8.9
45 to 64 years              7.9    8.1    7.1    7.1    7.5
  Males                     5.5    6.6    4.7    5.6    6.4
  Females                  10.4    9.6    9.4    8.6    8.6
65 years and over           7.0    7.3    7.7    7.5    8.0
  Males                     6.2    5.6    6.4    5.8    7.1
  Females                   7.6    8.7    8.8    9.0    8.7

http://www.statcan.gc.ca/tables-tableaux/sum-som/l01/cst01/health49b-eng.htm

Table 2
https://www.dovepress.com/cr_data/article_fulltext/s91000/91654/img/COPD-91654-T01.png

Table 3
https://www.cdc.gov/asthma/nhis/2014/table2-1.htm

Table 4
https://www.cdc.gov/nchs/data/databriefs/db239_table.pdf

Table 5: 2014 percent of adults aged 18 years and older who are overweight

Location   Value   95% CI        Sample size
National   35.2    (34.9-35.5)   425,875

https://nccd.cdc.gov/NPAO_DTM/IndicatorSummary.aspx?category=28&indicator=30

Table 6: Ratio of Family Income to Poverty Threshold

Ratio            All Ages Total   Children   Adults 18+
0-0.99           15.8             15.6       15.9
1.00-2.49        12.9             13.2       12.9
2.50-4.49        12.4             12.8       12.3
4.50 and above   11.8             12.3       11.7

https://www.cdc.gov/asthma/nhis/2014/table2-1.htm

Region      All ages Total   Children   Adults Age 18+
Northeast   7,060
VaR Models in Predicting Equity Market Risk
Chapter 3: Research Design

This chapter presents how to apply the proposed VaR models in predicting equity market risk. Basically, the thesis first outlines the collected empirical data. We next focus on verifying the assumptions usually engaged in the VaR models, and then identify whether the data characteristics are in line with these assumptions through examining the observed data. Various VaR models are subsequently discussed, beginning with the non-parametric approach (the historical simulation model), followed by the parametric approaches under different distributional assumptions of returns, and intentionally with the combination of the Cornish-Fisher expansion technique. Finally, backtesting techniques are employed to evaluate the performance of the suggested VaR models.

3.1. Data

The data used in the study are financial time series that reflect the daily historical price changes for two single equity index assets, namely the FTSE 100 index of the UK market and the S&P 500 of the US market. Mathematically, instead of using the arithmetic return, the paper employs the daily log-returns. The full period, on which the calculations are based, stretches from 05/06/2002 to 22/06/2009 for each single index. More precisely, to implement the empirical test, the period can be divided separately into two sub-periods: the first series of empirical data, which is used for the parameter estimation, spans from 05/06/2002 to 31/07/2007.

The remainder of the data, which is between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and backtesting. Do note here that the latter stage is exactly the current global financial crisis period, which began in August 2007, dramatically peaked in the ending months of 2008 and declined significantly in the middle of 2009. Consequently, the study will deliberately examine the accuracy of the VaR models within this volatile time.

3.1.1. FTSE 100 Index

The FTSE 100 Index is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange, begun on 3rd January 1984. FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange, and it has become the most widely used UK stock market indicator.

In the thesis, the full data used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index covering the period from 05/06/2002 to 22/06/2009.

3.1.2. S&P 500 Index

The S&P 500 is a value-weighted index, published since 1957, of the prices of 500 large-cap common stocks actively traded in the United States. The stocks listed on the S&P 500 are those of large publicly held companies that trade on either of the two largest American stock market companies, the NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the S&P 500 is the most widely followed index of large-cap American stocks.
The S&P 500 refers not only to the index, but also to the 500 companies that have their common stock included in the index, and it is consequently considered a bellwether for the US economy.

Similar to the FTSE 100, the data for the S&P 500 is also observed during the same period, with 1775 observations (1775 working days).

3.2. Data Analysis

For the VaR models, one of the most important aspects is the set of assumptions relating to measuring VaR. This section first discusses several VaR assumptions and then examines the collected empirical data characteristics.

3.2.1. Assumptions

3.2.1.1. Normality assumption

Normal distribution

As mentioned in chapter 2, most VaR models assume that the return distribution is normally distributed with mean of 0 and standard deviation of 1 (see Figure 3.1). Nonetheless, chapter 2 also shows that the actual returns in most previous empirical investigations do not completely follow the standard distribution.

Figure 3.1: Standard Normal Distribution

Skewness

The skewness is a measure of asymmetry of the distribution of the financial time series around its mean. Normally data is assumed to be symmetrically distributed with skewness of 0. A dataset with either a positive or negative skew deviates from the normal distribution assumptions (see Figure 3.2). This can cause parametric approaches, such as the Riskmetrics and the symmetric normal-GARCH(1,1) model under the assumption of normally distributed returns, to be less effective if asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value depending on the skew of the underlying asset returns.

Figure 3.2: Plot of a positive or negative skew

Kurtosis

The kurtosis measures the peakedness or flatness of the distribution of a data set and describes how concentrated the returns are around their mean. A high value of kurtosis means that more of the data's variance comes from extreme deviations. In other words, a high kurtosis means that the asset's returns contain more extreme values than modelled by the normal distribution. A positive excess kurtosis is, according to Lee and Lee (2000), called leptokurtic and a negative excess kurtosis is called platykurtic. Data which is normally distributed has kurtosis of 3.

Figure 3.3: General forms of Kurtosis

Jarque-Bera Statistic

In statistics, Jarque-Bera (JB) is a test statistic for testing whether the series is normally distributed. In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample kurtosis and skewness. The test statistic JB is defined as

JB = (n/6) * [S^2 + (K - 3)^2 / 4]

where n is the number of observations, S is the sample skewness and K is the sample kurtosis. For large sample sizes, the test statistic has a Chi-square distribution with two degrees of freedom.

Augmented Dickey-Fuller Statistic

The Augmented Dickey-Fuller test (ADF) is a test for a unit root in a time series sample. It is an augmented version of the Dickey-Fuller test for a larger and more complicated set of time series models. The ADF statistic used in the test is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence.
ADF critical values: (1%) -3.4334, (5%) -2.8627, (10%) -2.5674.

3.2.1.2. Homoscedasticity assumption

Homoscedasticity refers to the assumption that the dependent variable demonstrates similar amounts of variance across the range of values of an independent variable.

Figure 3.4: Plot of Homoscedasticity

Unfortunately, chapter 2, based on the previous empirical studies, confirms that financial markets usually experience unexpected events and uncertainties in prices (and returns) and exhibit non-constant variance (heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods when volatility is exceptionally high interspersed with periods when volatility is unusually low, namely volatility clustering. It is one of the widely stylised facts (stylised statistical properties of asset returns) which are common to a common set of financial assets. Volatility clustering reflects that high-volatility events tend to cluster in time.

3.2.1.3. Stationarity assumption

According to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study which remain constant over time; if not, it is pointless to try to identify them.

One of the hypotheses relating to the invariance of statistical properties of the return process in time is stationarity. This hypothesis assumes that for any set of time instants t1, ..., tk and any time interval T, the joint distribution of the returns R(t1), ..., R(tk) is the same as the joint distribution of the returns R(t1+T), ..., R(tk+T). The Augmented Dickey-Fuller test, in turn, will also be used to examine the stationarity of the statistical properties of the returns.

3.2.1.4. Serial independence assumption

There are a large number of tests of randomness of the sample data. Autocorrelation plots are one common test for randomness. Autocorrelation is the correlation between the returns at different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice: once in its original form and once lagged one or more time periods.

The results can range from +1 to -1. An autocorrelation of +1 represents perfect positive correlation (i.e. an increase seen in one time series will lead to a proportionate increase in the other time series), while a value of -1 represents perfect negative correlation (i.e. an increase seen in one time series results in a proportionate decrease in the other time series).

In terms of econometrics, the autocorrelation plot will be examined based on the Ljung-Box Q statistic test. However, instead of testing randomness at each distinct lag, it tests the overall randomness based on a number of lags.

The Ljung-Box test statistic can be defined as

Q = n(n+2) * SUM[j=1..h] rho_j^2 / (n - j)

where n is the sample size, rho_j is the sample autocorrelation at lag j, and h is the number of lags being tested. The hypothesis of randomness is rejected if Q exceeds the critical value of the Chi-square distribution with h degrees of freedom at the chosen significance level.

3.2.2. Data Characteristics

Table 3.1 gives the descriptive statistics for the FTSE 100 and the S&P 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/Pt-1), where Pt is the closing daily price at time t.
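To make these diagnostics concrete, the following Python sketch computes daily log-returns from a series of closing prices and then the statistics defined above: the Jarque-Bera statistic from the sample skewness and kurtosis, and the Ljung-Box Q statistic. It is a minimal sketch with simplified estimators, for illustration only; it is not the exact code behind Table 3.1.

```python
# Minimal sketch of the section 3.2 diagnostics: log-returns, Jarque-Bera
# and Ljung-Box Q. Simplified estimators for illustration only.
import numpy as np

def log_returns(prices):
    """Daily log-returns: R_t = ln(P_t / P_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])

def jarque_bera(r):
    """JB = (n/6) * (S^2 + (K - 3)^2 / 4)."""
    n = len(r)
    z = (r - r.mean()) / r.std()
    s = np.mean(z ** 3)            # sample skewness
    k = np.mean(z ** 4)            # sample kurtosis (3 under normality)
    return n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)

def ljung_box_q(r, h=12):
    """Q = n(n+2) * sum_{j=1..h} rho_j^2 / (n - j)."""
    n = len(r)
    z = r - r.mean()
    denom = np.sum(z ** 2)
    q = 0.0
    for j in range(1, h + 1):
        rho_j = np.sum(z[j:] * z[:-j]) / denom  # autocorrelation at lag j
        q += rho_j ** 2 / (n - j)
    return n * (n + 2) * q

# Usage on a toy price series (the real inputs would be the FTSE 100 and
# S&P 500 closing prices over 05/06/2002 to 22/06/2009):
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(1000)))
r = log_returns(prices)
print(jarque_bera(r))          # JB statistic
print(ljung_box_q(r, 12))      # Q(12) on returns
print(ljung_box_q(r ** 2, 12)) # Q2(12) on squared returns
```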
Figures 3.5a and 3.5b, 3.6a and 3.6b present the plots of returns and price index over time. Besides, Figures 3.7a and 3.7b, 3.8a and 3.8b illustrate the combination of the frequency distribution of the FTSE 100 and the S&P 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1: Diagnostics table of statistical characteristics of the returns of the FTSE 100 Index and the S&P 500 Index between 05/06/2002 and 22/06/2009.

DIAGNOSTICS                 S&P 500       FTSE 100
Number of observations      1774          1781
Largest return              10.96%        9.38%
Smallest return             -9.47%        -9.26%
Mean return                 -0.0001       -0.0001
Variance                    0.0002        0.0002
Standard deviation          0.0144        0.0141
Skewness                    -0.1267       -0.0978
Excess kurtosis             9.2431        7.0322
Jarque-Bera                 694.485***    2298.153***
Augmented Dickey-Fuller     -37.6418      -45.5849
Q(12)                       20.0983*      93.3161***
  Autocorrelation           0.04          0.03
Q2(12)                      1348.2***     1536.6***
  Autocorrelation           0.28          0.25
Ratio of SD/mean            144           141

Note: 1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively.
2. 95% critical value for the augmented Dickey-Fuller statistic = -3.4158

Figure 3.5a: The FTSE 100 daily returns from 05/06/2002 to 22/06/2009
Figure 3.5b: The S&P 500 daily returns from 05/06/2002 to 22/06/2009
Figure 3.6a: The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.6b: The S&P 500 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.7a: Histogram showing the FTSE 100 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.7b: Histogram showing the S&P 500 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8a: Diagram showing the FTSE 100 frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8b: Diagram showing the S&P 500 frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Table 3.1 shows that the FTSE 100 and the S&P 500 average daily returns are approximately 0 percent, or at least very small compared to the sample standard deviation (the standard deviation is 141 and 144 times larger than the size of the average return for the FTSE 100 and the S&P 500, respectively). This is why the mean is often set at zero when modelling daily portfolio returns, which reduces the uncertainty and imprecision of the estimates. In addition, the large standard deviation compared to the mean supports the evidence that daily changes are dominated by randomness, and a small mean can be neglected in risk measure estimates.

Moreover, the paper also employs five statistics often used in analysing data, namely skewness, kurtosis, Jarque-Bera, Augmented Dickey-Fuller (ADF) and the Ljung-Box test, to examine the empirical full period, spanning from 05/06/2002 through 22/06/2009. Figures 3.7a and 3.7b demonstrate the histograms of the FTSE 100 and the S&P 500 daily return data with the normal distribution imposed. The distribution of both indexes has longer, fatter tails and higher probabilities for extreme events than the normal distribution, in particular on the negative side (negative skewness implying that the distribution has a long left tail).

Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest.
It is more peaked around its mean than the normal distribution, Indeed, the value for kurtosis is very high (10 and 12 for the FTSE 100 and the SP 500, respectively compared to 3 of the normal distribution) (also see Figures 3.8a and 3.8b for more de dog). In other words, the most bombastic deviation from the normal distributional assumption is the kurtosis, which can be seen from the meat disallow of the histogram rising above the normal distribution. Moreover, it is obvious that outliers even-tempered exist, which advises that excess kurtosis is belt up present.The Jarque-Bera test rejects normality of returns at the 1% level of importation for two the indexes. So, the samples have a ll financial characteristics volatility clustering and leptokurtosis. Besides that, the daily returns for both the indexes (presented in Figure 3.5a and 3.5b) reveal that volatility occurs in bursts in particular the returns were very volatile at the beginning of examined period from June 2002 to the midst of June 2003. by and by remaining stable for more or less 4 years, the returns of the two long-familiar stock indexes in the world were highly volatile from July 2007 (when the credit crackle was about to begin) and even dramatically peaked since July 2008 to the end of June 2009.Generally, thither are two recognised characteristics of the collected daily data. First, extreme outcomes occur more often and are larger than that predicted by the normal distribution (fat white tie). Second, the size of market movements is not constant over time (conditional volatility).In terms of stationary, the Augmented Dickey-Fuller is adopted for the unit fore test. The null hypothesis of this test is that there is a unit root (the time series is non-stationary). The election hypothesis is that the time series is stationary. If the null hypothesis is rejected, it means that the series is a stationary time series. In this thesis, the paper employs the ADF unit root test including an intercept and a trend term on return. The results from the ADF tests indicate that the test statistis for the FTSE 100 and the SP 500 is -45.5849 and -37.6418, respectively. Such values are significantly less than the 95% critical value for the augmented Dickey-Fuller statistic (-3.4158). Therefore, we can reject the unit root null hypothesis and sum up that the daily return series is robustly stationary.Finally, Table 3.1 shows the Ljung-Box test statistics for serial correlation of the return and form return series for k = 12 lags, denoted by Q(k) and Q2(k), respectively. The Q(12) statistic is statistically significant implying the present of serial correlation in the FTSE 100 and the SP 500 daily return series (first moment dependencies). In other words, the return series exhibit linear dependence.Figure 3.9a Autocorrelations of the FTSE 100 daily returns for Lags 1 through 100, top 05/06/2002 to 22/06/2009.Figure 3.9b Autocorrelations of the SP 500 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009.Figures 3.9a and 3.9b and the autocorrelation coefficient (presented in Table 3.1) sort out that the FTSE 100 and the SP 500 daily return did not display any organized pattern and the returns have very little autocorrelations. 
According to Christoffersen (2003), in this spatial relation we can writeCorr(Rt+1,Rt+1-) 0, for = 1,2,3, 100Therefore, returns are almost unrealizable to predict from their own past.One note is that since the mean of daily returns for both the indexes (-0.0001) is not significantly different from zero, and therefore, the variances of the return series are measurable by form returns. The Ljung-Box Q2 test stati stic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b) and the autocorrelation coefficient (presented in Table 3.1) also hold the autocorrelations in squared returns (variances) for the FTSE 100 and the SP 500 data, and more importantly, variance displays positive correlation with its own past, oddly with short lags.Corr(R2t+1,R2t+1-) 0, for = 1,2,3, 100Figure 3.10a Autocorrelations of the FTSE 100 squared daily returnsFigure 3.10b Autocorrelations of the SP 500 squared daily returns3.3. calculation of Value At RiskThe section puts much speech pattern on how to envision VaR figures for both superstar(a) return indexes from proposed models, including the historic disguise, the Riskmetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) model. and the historical air model which does not make any assumptions about the shape of the distribution o f the assets returns, the other ones commonly have been analyse under the assumption that the returns are normally distributed. Based on the preceding section relating to the examining data, this assumption is rejected because observed extreme outcomes of the both single index returns occur more often and are larger than predicted by the normal distribution.Also, the volatility tends to change through time and periods of high and low volatility tend to cluster together. Consequently, the four-spot proposed VaR models under the normal distribution either have particular limitations or kafkaesque. Specifically, the historical simulation significantly assumes that the historically simulated returns are independently and identically distributed through time. Unfortunately, this assumption is impractical due to the volatility clustering of the empirical data. Similarly, although the Riskmetrics tries to vacate relying on sample observations and make use of surplus information conta ined in the sour distribution function, its normally distributional assumption is also unrealistic from the results of examining the collected data.The normal-GARCH(1,1) model and the student-t GARCH(1,1) model, on the other hand, can capture the fat tail and volatility clustering which occur in the observed financial time series data, but their returns standard distributional assumption is also unrealistic comparing to the empirical data. Despite all these, the thesis still uses the four models under the standard distributional assumption of returns to comparing and evaluating their estimated results with the predicted results based on the student distributional assumption of returns.Besides, since the empirical data experiences fatter tails more than that of the normal distribution, the essay intentionally employs the Cornish-Fisher working out technique to define the z-value from the normal distribution to trace for fatter tails, and then compare these results with the two results above. 
Therefore, in this chapter, we advisedly calculate VaR by separating these three procedures into three different sections and final exam results will be discussed in distance in chapter 4.3.3.1. Components of VaR measures passim the analysis, a holding period of one-trading day will be used. For the deduction level, various values for the left tail probability level will be considered, ranging from the very conservative level of 1 percent to the mid of 2.5 percent and to the less cautious 5 percent.The various VaR models will be estimated apply the historical data of the two single return index samples, stretches from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 prices observations for the FTSE 100 and the SP 500, respectively) for making the parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting. One interesting point here is that since there are few previous empirical studies examining the performance of VaR mo dels during periods of financial crisis, the paper by choice backtest the hardness of VaR models within the current global financial crisis from the beginning in August 2007.3.3.2. Calculation of VaR3.3.2.1. Non-parametric approach historical SimulationAs mentioned above, the historical simulation model pretends that the change in market factors from today to tomorrow will be the same as it was some time ago, and therefore, it is computed based on the historical returns distribution. Consequently, we separate this non-parametric approach into a section.The chapter 2 has proved that shrewd VaR employ the historical simulation model is not mathematically complex since the measure only requires a rational period of historical data. Thus, the first task is to maintain an fit historical time series for simulating. There are some(prenominal) previous studies presenting that predicted results of the model are relatively reliable once the window length of data used for simulating d aily VaRs is not shorter than 1000 observed days.In this sense, the study will be based on a slide window of the previous 1305 and 1298 prices observations (1304 and 1297 returns observations) for the FTSE 100 and the SP 500, respectively, spanning from 05/06/2002 through 31/07/2007. We have selected this instead than larger windows is since adding more historical data means adding honest-to-god historical data which could be irrelevant to the future development of the returns indexes.After choose in ascending order the past returns attributed to equally disjointed classes, the predicted VaRs are determined as that log-return lies on the target percentile, say, in the thesis is on three widely percentiles of 1%, 2.5% and 5% lower tail of the return distribution. The result is a frequency distribution of returns, which is displayed as a histogram, and shown in Figure 3.11a and 3.11b below. The vertical axis shows the number of days on which returns are attributed to the various classes. The red vertical lines in the histogram separate the last(a) 1%, 2.5% and 5% returns from the remaining (99%, 97.5% and 95%) returns.For FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th terminal return in this dataset which are -3.2%, -2.28% and -1.67%, respectively and are roughly marked in the histogram by the red vertical lines. 
The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset value tomorrow (on 01st August 2007). The SP 500 VaR figures, on the other hand, are little bit smaller than that of the UK stock index with -2.74%, -2.03% and -1.53% corresponding to 99%, 97.5% and 95% confidence levels, respectively.Figure 3.11a Histogram of daily returns of FTSE 100 between 05/06/2002 and 31/07/2007Figure 3.11b Histogram of daily returns of SP 500 between 05/06/2002 and 31/07/2007Following predicted VaRs on the first day of the predicted period, we continuously calculate VaRs for the estimated period, covering from 01/08/2007 to 22/06/2009. The doubt is whether the proposed non-parametric model is accurately performed in the debauched period will be discussed in length in the chapter 4.3.3.2.2. parametric approaches under the normal distributional assumption of returnsThis section presents how to calculate the daily VaRs development the parametric approaches, including the RiskMetrics, the normal-GARCH(1,1) and the student-t GARCH(1,1) under the standard distributional assumption of returns. The results and the validity of each model during the turbulent period will deeply be considered in the chapter 4.3.3.2.2.1. The RiskMetricsComparing to the historical simulation model, the RiskMetrics as discussed in the chapter 2 does not solely rely on sample observations instead, they make use of additional information contained in the normal distribution function. All that need is the c urrent estimate of volatility. In this sense, we first calculate daily RiskMetrics variance for both the indexes, crossing the parameter estimated period from 05/06/2002 to 31/07/2007 based on the well-known RiskMetrics variance formula (2.9). Specifically, we had the fixed decay factor =0.94 (the RiskMetrics system suggested exploitation =0.94 to forecast one-day volatility). Besides, the other parameters are easily calculated, for instance, and are the squared log-return and variance of the previous day, correspondingly.After figure the daily variance, we continuously measure VaRs for the forebode period from 01/08/2007 to 22/06/2009 under different confidence levels of 99%, 97.5% and 95% based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each logical implication level is obviously computed victimisation the transcend function NORMSINV.3.3.2.2.2. The Normal-GARCH(1,1) modelFor GARCH models, the chapter 2 confirms that the most im portant point is to estimate the model parameters ,,. These parameters has to be calculated for numerically, utilise the mode acting of level best likelihood estimation (MLE). In fact, in order to do the MLE function, legion(predicate) previous studies efficiently use professional econometric softwares rather than treatment the mathematical calculations. In the light of evidence, the normal-GARCH(1,1) is executed by using a well-known econometric tool, STATA, to estimate the model parameters (see Table 3.2 below).Table 3.2. 
The parameters statistics of the Normal-GARCH(1,1) model for the FTSE 100 and the SP 500Normal-GARCH(1,1)*ParametersFTSE 100SP 5000.09559520.05552440.89072310.92899990.00000120.0000011+0.98631830.9845243Number of Observations13041297 pound likelihood4401.634386.964* Note In this section, we report the results from the Normal-GARCH(1,1) model using the method of supreme likelihood, under the assumption that the errors conditionally follow the normal distrib ution with significance level of 5%.According to Table 3.2, the coefficients of the lagged squared returns () for both the indexes are positive, concluding that strong ARCH make are unornamented for both the financial markets. Also, the coefficients of lagged conditional variance () are significantly positive and less than one, indicating that the impact of old news on volatility is significant. The magnitude of the coefficient, is especially high (around 0.89 0.93), indicating a long recollection in the variance.The estimate of was 1.2E-06 for the FTSE 100 and 1.1E-06 for the SP 500 implying a long run standard deviation of daily market return of about 0.94% and 0.84%, respectively. The log-likehood for this model for both the indexes was 4401.63 and 4386.964 for the FTSE 100 and the SP 500, correspondingly. The logarithm likehood ratios rejected the hypothesis of normality very strongly.After calculating the model parameters, we begin measuring conditional variance (volatility ) for the parameter estimated period, covering from 05/06/2002 to 31/07/2007 based on the conditional variance formula (2.11), where and are the squared log-return and conditional variance of the previous day, respectively. We then measure predicted daily VaRs for the calculate period from 01/08/2007 to 22/06/2009 under confidence levels of 99%, 97.5% and 95% using the normal VaR formula (2.6). Again, the critical z-value of the normal distribution under significance levels of 1%, 2.5% and 5% is purely computed using the Excel function NORMSINV.3.3.2.2.3. The Student-t GARCH(1,1) modelDifferent from the Normal-GARCH(1,1) approach, the model assumes that the volatility (or the errors of the returns) follows the Student-t distribution. In fact, many previous studies suggested that using the symmetric GARCH(1,1) model with the volatility following the Student-t distribution is more accurate than with that of the Normal distribution when examining financial time series. Accordingly, th e paper additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, we use this model under the normal distributional assumption of returns. First is to estimate the model parameters using the method of utmost likelihood estimation and obtained by the STATA (see Table 3.3).Table 3.3. The parameters statistics of the Student-t GARCH(1,1) model for the FTSE 100 and the SP 500Student-t GARCH(1,1)*ParametersFTSE 100SP 5000.09261200.05692930.89464850.93547940.00000110.0000006+0.98726050.9924087Number of Observations13041297logarithm likelihood4406.504399.24* Note In this section, we report the results from the Student-t GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the student distribution with significance level of 5%.The Table 3.3 also identifies the same characteristics of the student-t GARCH(1,1) model parameters comparing to the normal-GARCH(1,1) approach. 
VaR Models in Predicting Equity Market Risk

Chapter 3 Research Design

This chapter describes how the proposed VaR models are applied to predicting equity market risk. Basically, the thesis first outlines the collected empirical data. We next focus on verifying the assumptions usually engaged in the VaR models, and then identify whether the data characteristics are in line with these assumptions through examining the observed data. The various VaR models are subsequently discussed, beginning with the non-parametric approach (the historical simulation model), followed by the parametric approaches under different distributional assumptions of returns, intentionally in combination with the Cornish-Fisher Expansion technique. Finally, backtesting techniques are employed to evaluate the performance of the suggested VaR models.

3.1. Data

The data used in the study are financial time series that reflect the daily historical price changes for two single equity index assets: the FTSE 100 index of the UK market and the SP 500 of the US market. Mathematically, instead of the arithmetic return, the paper employs daily log-returns. The full period on which the calculations are based stretches from 05/06/2002 to 22/06/2009 for each index. More precisely, to implement the empirical test, the period is divided into two sub-periods: the first series of empirical data, used for the parameter estimation, spans from 05/06/2002 to 31/07/2007. The rest of the data, between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and backtesting. Note that the latter stage is exactly the current global financial crisis period, which began in August 2007, peaked dramatically in the closing months of 2008 and subsided significantly in the middle of 2009. Consequently, the study purposely examines the accuracy of the VaR models within this volatile time.

3.1.1. FTSE 100 index

The FTSE 100 Index is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange, which began on 3rd January 1984.
FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange, and the index has become the most widely used UK stock market indicator. In the dissertation, the full data set used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index, covering the period from 05/06/2002 to 22/06/2009.

3.1.2. SP 500 index

The SP 500 is a value-weighted index, published since 1957, of the prices of 500 large-cap common stocks actively traded in the United States. The stocks listed on the SP 500 are those of large publicly held companies that trade on either of the two largest American stock market companies, the NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the SP 500 is the most widely followed index of large-cap American stocks. The SP 500 refers not only to the index, but also to the 500 companies whose common stock is included in the index, and it is consequently considered a bellwether for the US economy. Similar to the FTSE 100, the data for the SP 500 are observed over the same period, with 1775 observations (1775 working days).

3.2. Data Analysis

For the VaR models, one of the most important aspects is the set of assumptions relating to measuring VaR. This section first discusses several VaR assumptions and then examines the characteristics of the collected empirical data.

3.2.1. Assumptions

3.2.1.1. Normality assumption

Normal distribution. As mentioned in chapter 2, most VaR models assume that the return distribution is normally distributed with mean of 0 and standard deviation of 1 (see Figure 3.1). Nonetheless, chapter 2 also shows that the actual returns in most previous empirical investigations do not completely follow the standard distribution.

Figure 3.1 Standard Normal Distribution

Skewness. The skewness is a measure of asymmetry of the distribution of the financial time series around its mean. Normally, data are assumed to be symmetrically distributed, with skewness of 0. A dataset with either a positive or negative skew deviates from the normal distribution assumptions (see Figure 3.2). This can cause parametric approaches, such as the Riskmetrics and the symmetric normal-GARCH(1,1) model under the assumption of normally distributed returns, to be less effective if asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value, depending on the skew of the underlying asset returns.

Figure 3.2 Plot of a positive or negative skew

Kurtosis. The kurtosis measures the peakedness or flatness of the distribution of a data sample and describes how concentrated the returns are around their mean. A high value of kurtosis means that more of the data's variance comes from extreme deviations. In other words, a high kurtosis means that the asset's returns contain more extreme values than modelled by the normal distribution. A positive excess kurtosis is, according to Lee and Lee (2000), called leptokurtic and a negative excess kurtosis is called platykurtic. Data which are normally distributed have kurtosis of 3.

Figure 3.3 General forms of Kurtosis

Jarque-Bera Statistic. In statistics, Jarque-Bera (JB) is a test statistic for testing whether the series is normally distributed. In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample kurtosis and skewness. The test statistic JB is defined as

JB = (n/6) [S² + (K − 3)²/4],

where n is the number of observations, S is the sample skewness and K is the sample kurtosis. For large sample sizes, the test statistic follows a Chi-square distribution with two degrees of freedom.
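As an illustration of this formula (a minimal sketch, not taken from the thesis), with simulated Student-t data standing in for real index returns:

```python
import numpy as np
from scipy import stats

def jarque_bera_stat(r: np.ndarray) -> float:
    """JB = (n/6) * (S^2 + (K - 3)^2 / 4), with K the raw kurtosis."""
    n = len(r)
    s = stats.skew(r)
    k = stats.kurtosis(r, fisher=False)  # raw kurtosis; a normal sample gives about 3
    return n / 6.0 * (s**2 + (k - 3.0)**2 / 4.0)

# Simulated fat-tailed returns stand in for real index data
r = stats.t.rvs(df=5, size=1780, random_state=0) * 0.01
print(jarque_bera_stat(r))   # far above the 5% chi-square(2) cutoff of 5.99
print(stats.jarque_bera(r))  # SciPy's implementation, for cross-checking
```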
For la rge sample sizes, the test statistic has a Chi-square distribution with two degrees of freedom.Augmented DickeyFuller StatisticAugmented DickeyFuller test (ADF) is a test for a unit root in a time series sample. It is an augmented version of the DickeyFuller test for a larger and more complicated set of time series models. The ADF statistic used in the test is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. ADF critical values (1%) 3.4334, (5%) 2.8627, (10%) 2.5674.3.2.1.2. Homoscedasticity assumptionHomoscedasticity refers to the assumption that the dependent variable exhibits similar amounts of variance across the range of values for an independent variable.Figure 3.4 Plot of HomoscedasticityUnfortunately, the chapter 2, based on the previous empirical studies confirmed that the financial markets usually experience unexpected events, uncertainties in prices (and returns) and exhibit non -constant variance (Heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods when volatility is exceptionally high interspersed with periods when volatility is unusually low, namely volatility clustering. It is one of the widely stylised facts (stylised statistical properties of asset returns) which are common to a common set of financial assets. The volatility clustering reflects that high-volatility events tend to cluster in time.3.2.1.3. Stationarity assumptionAccording to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study which remain constant over time, if not it is meaningless to try to recognize them.One of the hypotheses relating to the invariance of statistical properties of the return process in time is the stationarity. This hypothesis assumes that for any set of time instants ,, and any time interval the joint distribu tion of the returns ,, is the same as the joint distribution of returns ,,. The Augmented Dickey-Fuller test, in turn, will also be used to test whether time-series models are accurately to examine the stationary of statistical properties of the return.3.2.1.4. Serial independence assumptionThere are a large number of tests of randomness of the sample data. Autocorrelation plots are one common method test for randomness. Autocorrelation is the correlation between the returns at the different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice once in its original form and once lagged one or more time periods.The results can range from+1 to -1. An autocorrelation of+1 represents perfect positive correlation (i.e. an increase seen in one time series will lead to a proportionate increase in the other time series), while a value of -1 represents perfect negative correlation (i.e. an increase see n in one time series results in a proportionate decrease in the other time series).In terms of econometrics, the autocorrelation plot will be examined based on the Ljung-Box Q statistic test. However, instead of testing randomness at each distinct lag, it tests the overall randomness based on a number of lags.The Ljung-Box test can be defined aswhere n is the sample size,is the sample autocorrelation at lag j, and h is the number of lags being tested. 
3.2.2. Data Characteristics

Table 3.1 gives the descriptive statistics for the FTSE 100 and the SP 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives, Rt = ln(Pt/Pt-1), where Pt is the closing daily price at time t. Figures 3.5a and 3.5b, 3.6a and 3.6b present the plots of returns and price index over time. Besides, Figures 3.7a and 3.7b, 3.8a and 3.8b illustrate the frequency distribution of the FTSE 100 and the SP 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1 Diagnostics table of statistical characteristics of the returns of the FTSE 100 index and the SP 500 index between 05/06/2002 and 22/06/2009.

DIAGNOSTICS                          SP 500         FTSE 100
Number of observations               1774           1781
Largest return                       10.96%         9.38%
Smallest return                      -9.47%         -9.26%
Mean return                          -0.0001        -0.0001
Variance                             0.0002         0.0002
Standard deviation                   0.0144         0.0141
Skewness                             -0.1267        -0.0978
Excess kurtosis                      9.2431         7.0322
Jarque-Bera                          694.485***     2298.153***
Augmented Dickey-Fuller (ADF) [2]    -37.6418       -45.5849
Q(12)                                20.0983*       93.3161***
  (autocorrelation)                  0.04           0.03
Q2(12)                               1348.2***      1536.6***
  (autocorrelation)                  0.28           0.25
Ratio of SD/mean                     144            141

Note: 1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively. 2. The 95% critical value for the augmented Dickey-Fuller statistic is -3.4158.

Figure 3.5a The FTSE 100 daily returns from 05/06/2002 to 22/06/2009
Figure 3.5b The SP 500 daily returns from 05/06/2002 to 22/06/2009
Figure 3.6a The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.6b The SP 500 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.7a Histogram showing the FTSE 100 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.7b Histogram showing the SP 500 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8a Diagram showing the FTSE 100 frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8b Diagram showing the SP 500 frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Table 3.1 shows that the FTSE 100 and the SP 500 average daily returns are approximately 0 percent, or at least very small compared to the sample standard deviation (the standard deviation is 141 and 144 times the size of the average return for the FTSE 100 and the SP 500, respectively). This is why the mean is often set at zero when modelling daily portfolio returns, which reduces the uncertainty and imprecision of the estimates. In addition, the large standard deviation compared to the mean supports the evidence that daily changes are dominated by randomness, and a small mean can be disregarded in risk measure estimates.

Moreover, the paper also employs five statistics often used in analysing data, including Skewness, Kurtosis, Jarque-Bera, Augmented Dickey-Fuller (ADF) and the Ljung-Box test, to examine the empirical full period, crossing from 05/06/2002 through 22/06/2009. Figures 3.7a and 3.7b demonstrate the histogram of the FTSE 100 and the SP 500 daily return data with the normal distribution imposed.
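For reference, a small pandas sketch (mine, not the thesis's) of how the moment diagnostics in Table 3.1 could be reproduced from a series of daily closing prices; the input series names are assumptions:

```python
import numpy as np
import pandas as pd

def diagnostics(prices: pd.Series) -> pd.Series:
    """Moment diagnostics in the spirit of Table 3.1 for a daily price series."""
    r = np.log(prices / prices.shift(1)).dropna()  # Rt = ln(Pt / Pt-1)
    return pd.Series({
        "Number of observations": len(r),
        "Largest return": r.max(),
        "Smallest return": r.min(),
        "Mean return": r.mean(),
        "Variance": r.var(),
        "Standard deviation": r.std(),
        "Skewness": r.skew(),
        "Excess kurtosis": r.kurtosis(),  # pandas reports excess kurtosis directly
    })

# diagnostics(ftse_prices); diagnostics(sp500_prices)  # assumed price inputs
```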
The distribution of both indexes has longer, fatter tails and higher probabilities for extreme events than the normal distribution, in particular on the negative side (the negative skewness implies that the distribution has a long left tail). Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest. Each distribution is also more peaked around its mean than the normal distribution; indeed, the total kurtosis is very high (about 10 and 12 for the FTSE 100 and the SP 500, respectively, compared to 3 for the normal distribution; see also Figures 3.8a and 3.8b for more details). In other words, the most prominent deviation from the normal distributional assumption is the kurtosis, which can be seen from the middle bars of the histogram rising above the normal distribution. Moreover, it is obvious that outliers still exist, which indicates that excess kurtosis is still present.

The Jarque-Bera test rejects normality of returns at the 1% level of significance for both indexes. So the samples have all the typical financial characteristics: volatility clustering and leptokurtosis. Besides that, the daily returns for both indexes (presented in Figures 3.5a and 3.5b) reveal that volatility occurs in bursts; in particular, the returns were very volatile at the beginning of the examined period, from June 2002 to the middle of June 2003. After remaining stable for about 4 years, the returns of these two well-known stock indexes were highly volatile from July 2007 (when the credit crunch was about to begin) and peaked dramatically from July 2008 to the end of June 2009.

Generally, there are two recognised characteristics of the collected daily data. First, extreme outcomes occur more often and are larger than predicted by the normal distribution (fat tails). Second, the size of market movements is not constant over time (conditional volatility).

In terms of stationarity, the Augmented Dickey-Fuller test is adopted for the unit root test. The null hypothesis of this test is that there is a unit root (the time series is non-stationary); the alternative hypothesis is that the time series is stationary. If the null hypothesis is rejected, the series is a stationary time series. In this thesis, the paper employs the ADF unit root test including an intercept and a trend term on returns. The results from the ADF tests indicate that the test statistic for the FTSE 100 and the SP 500 is -45.5849 and -37.6418, respectively. Such values are significantly less than the 95% critical value for the augmented Dickey-Fuller statistic (-3.4158). Therefore, we can reject the unit root null hypothesis and conclude that the daily return series is robustly stationary.
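A sketch of this unit-root check with statsmodels, where regression="ct" adds the intercept and trend term just described; the simulated series is a placeholder for the return data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=1780) * 0.01  # placeholder for the daily log-returns

# adfuller returns (statistic, p-value, used lags, n obs, critical values, best IC)
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(r, regression="ct")
print(f"ADF statistic: {stat:.4f}")  # far below crit['5%'] -> reject the unit root
print(crit)                          # {'1%': ..., '5%': ..., '10%': ...}
```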
In other words, the retu rn series exhibit linear dependence.Figure 3.9a Autocorrelations of the FTSE 100 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009.Figure 3.9b Autocorrelations of the SP 500 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009.Figures 3.9a and 3.9b and the autocorrelation coefficient (presented in Table 3.1) tell that the FTSE 100 and the SP 500 daily return did not display any systematic pattern and the returns have very little autocorrelations. According to Christoffersen (2003), in this situation we can writeCorr(Rt+1,Rt+1-) 0, for = 1,2,3, 100Therefore, returns are almost impossible to predict from their own past.One note is that since the mean of daily returns for both the indexes (-0.0001) is not significantly different from zero, and therefore, the variances of the return series are measured by squared returns. The Ljung-Box Q2 test statistic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b) and the autocorrelation coefficient (presented in Table 3.1) also confirm the autocorrelations in squared returns (variances) for the FTSE 100 and the SP 500 data, and more importantly, variance displays positive correlation with its own past, especially with short lags.Corr(R2t+1,R2t+1-) 0, for = 1,2,3, 100Figure 3.10a Autocorrelations of the FTSE 100 squared daily returnsFigure 3.10b Autocorrelations of the SP 500 squared daily returns3.3. Calculation of Value At RiskThe section puts much emphasis on how to calculate VaR figures for both single return indexes from proposed models, including the Historical Simulation, the Riskmetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) model. Except the historical simulation model which does not make any assumptions about the shape of the distribution of the assets returns, the other ones commonly have been studied under the assumption that the re turns are normally distributed. Based on the previous section relating to the examining data, this assumption is rejected because observed extreme outcomes of the both single index returns occur more often and are larger than predicted by the normal distribution.Also, the volatility tends to change through time and periods of high and low volatility tend to cluster together. Consequently, the four proposed VaR models under the normal distribution either have particular limitations or unrealistic. Specifically, the historical simulation significantly assumes that the historically simulated returns are independently and identically distributed through time. Unfortunately, this assumption is impractical due to the volatility clustering of the empirical data. Similarly, although the Riskmetrics tries to avoid relying on sample observations and make use of additional information contained in the assumed distribution function, its normally distributional assumption is also unrealistic fro m the results of examining the collected data.The normal-GARCH(1,1) model and the student-t GARCH(1,1) model, on the other hand, can capture the fat tails and volatility clustering which occur in the observed financial time series data, but their returns standard distributional assumption is also impossible comparing to the empirical data. 
Despite all this, the thesis still uses the four models under the normal distributional assumption of returns, in order to compare and evaluate their estimated results against the predicted results based on the Student-t distributional assumption of returns. Besides, since the empirical data exhibit fatter tails than the normal distribution, the thesis intentionally employs the Cornish-Fisher Expansion technique to correct the z-value from the normal distribution to account for fatter tails, and then compares these results with the two sets of results above. Therefore, in this chapter, we purposely calculate VaR by separating these three procedures into three different sections; the final results will be discussed at length in chapter 4.

3.3.1. Components of VaR measures

Throughout the analysis, a holding period of one trading day will be used. For the significance level, various values for the left tail probability level will be considered, ranging from the very conservative level of 1 percent, to the middle level of 2.5 percent, to the less cautious 5 percent. The various VaR models will be estimated using the historical data of the two single return index samples, stretching from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 price observations for the FTSE 100 and the SP 500, respectively) for the parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting. One interesting point here is that, since there are few previous empirical studies examining the performance of VaR models during periods of financial crisis, the paper deliberately backtests the validity of the VaR models within the current global financial crisis, from its beginning in August 2007.

3.3.2. Calculation of VaR

3.3.2.1. Non-parametric approach: Historical Simulation

As mentioned above, the historical simulation model pretends that the change in market factors from today to tomorrow will be the same as it was some time ago, and therefore it is computed based on the historical returns distribution. Consequently, we separate this non-parametric approach into its own section. Chapter 2 has shown that calculating VaR using the historical simulation model is not mathematically complex, since the measure only requires a rational period of historical data. Thus, the first task is to obtain an adequate historical time series for simulating. Several previous studies suggest that the predicted results of the model are relatively reliable once the window length of data used for simulating daily VaRs is not shorter than 1000 observed days. In this sense, the study will be based on a sliding window of the previous 1305 and 1298 price observations (1304 and 1297 return observations) for the FTSE 100 and the SP 500, respectively, spanning from 05/06/2002 through 31/07/2007. We have selected this rather than larger windows, since adding more historical data means adding older historical data which could be irrelevant to the future development of the return indexes. After sorting the past returns in ascending order into equally spaced classes, the predicted VaRs are determined as the log-return lying on the target percentile; in the thesis these are the three widely used percentiles of the 1%, 2.5% and 5% lower tail of the return distribution (see the sketch below).
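A minimal sketch of this percentile rule (illustrative only; not the thesis's own computations), where window is assumed to hold the estimation-window log-returns:

```python
import numpy as np

def hs_var(window: np.ndarray, alpha: float) -> float:
    """Historical-simulation VaR: the alpha-quantile of the returns
    in the sliding window (alpha = 0.01 gives the 99% VaR)."""
    return np.quantile(window, alpha)

# window = ...  # e.g. the previous 1304 FTSE 100 daily log-returns (assumed input)
# for alpha in (0.01, 0.025, 0.05):
#     print(f"{1 - alpha:.1%} VaR: {hs_var(window, alpha):.2%}")
```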
The result is a frequency distribution of returns, which is displayed as a histogram, shown in Figures 3.11a and 3.11b below. The vertical axis shows the number of days on which returns are attributed to the various classes. The red vertical lines in the histogram separate the lowest 1%, 2.5% and 5% returns from the remaining (99%, 97.5% and 95%) returns. For the FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th lowest returns in this dataset, which are -3.2%, -2.28% and -1.67%, respectively, and are roughly marked in the histogram by the red vertical lines. The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset value tomorrow (on 01 August 2007). The SP 500 VaR figures, on the other hand, are a little smaller than those of the UK stock index, with -2.74%, -2.03% and -1.53% corresponding to the 99%, 97.5% and 95% confidence levels, respectively.

Figure 3.11a Histogram of daily returns of the FTSE 100 between 05/06/2002 and 31/07/2007
Figure 3.11b Histogram of daily returns of the SP 500 between 05/06/2002 and 31/07/2007

Having predicted VaRs for the first day of the forecast period, we continue to calculate VaRs over the rest of the period, covering 01/08/2007 to 22/06/2009. The question of whether the proposed non-parametric model performs accurately in the turbulent period will be discussed at length in chapter 4.

3.3.2.2. Parametric approaches under the normal distributional assumption of returns

This section presents how to calculate the daily VaRs using the parametric approaches, including the RiskMetrics, the normal-GARCH(1,1) and the student-t GARCH(1,1), under the normal distributional assumption of returns. The results and the validity of each model during the turbulent period will be considered in depth in chapter 4.

3.3.2.2.1. The RiskMetrics

Compared to the historical simulation model, the RiskMetrics, as discussed in chapter 2, does not solely rely on sample observations; instead, it makes use of additional information contained in the normal distribution function. All that is needed is the current estimate of volatility. In this sense, we first calculate the daily RiskMetrics variance for both indexes, crossing the parameter estimation period from 05/06/2002 to 31/07/2007, based on the well-known RiskMetrics variance formula (2.9). Specifically, we use the fixed decay factor λ = 0.94 (the RiskMetrics system suggests λ = 0.94 for forecasting one-day volatility). Besides, the other inputs are easily calculated; r²_{t-1} and σ²_{t-1} are the squared log-return and the variance of the previous day, correspondingly. After calculating the daily variance, we continuously measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under the different confidence levels of 99%, 97.5% and 95%, based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each significance level is simply computed using the Excel function NORMSINV.
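The recursion and quantile step can be sketched as follows, with norm.ppf standing in for Excel's NORMSINV; formulas (2.9) and (2.6) live in the thesis's chapter 2, so the code below is only an assumed reading of them:

```python
import numpy as np
from scipy.stats import norm

def riskmetrics_var(returns: np.ndarray, alpha: float, lam: float = 0.94) -> float:
    """One-day VaR from the RiskMetrics EWMA recursion
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2,
    combined with the normal quantile (norm.ppf mirrors NORMSINV)."""
    sigma2 = returns[0] ** 2                     # initialise with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r**2
    return norm.ppf(alpha) * np.sqrt(sigma2)     # e.g. alpha = 0.01 for the 99% VaR
```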
3.3.2.2.2. The Normal-GARCH(1,1) model

For GARCH models, chapter 2 confirms that the most important point is to estimate the model parameters ω, α and β. These parameters have to be solved for numerically, using the method of maximum likelihood estimation (MLE). In fact, in order to carry out the MLE, many previous studies efficiently use professional econometric software rather than handling the mathematical calculations by hand. In the light of this evidence, the normal-GARCH(1,1) is executed by using a well-known econometric tool, STATA, to estimate the model parameters (see Table 3.2 below).

Table 3.2 The parameter statistics of the Normal-GARCH(1,1) model for the FTSE 100 and the SP 500

Normal-GARCH(1,1)*
Parameters                 FTSE 100      SP 500
α                          0.0955952     0.0555244
β                          0.8907231     0.9289999
ω                          0.0000012     0.0000011
α + β                      0.9863183     0.9845243
Number of observations     1304          1297
Log likelihood             4401.63       4386.964

* Note: we report the results from the Normal-GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the normal distribution, with a significance level of 5%.

According to Table 3.2, the coefficients of the lagged squared returns (α) for both indexes are positive, indicating that strong ARCH effects are apparent in both financial markets. Also, the coefficients of lagged conditional variance (β) are significantly positive and less than one, indicating that the impact of old news on volatility is significant. The magnitude of the coefficient β is especially high (around 0.89 to 0.93), indicating a long memory in the variance. The estimate of ω was 1.2E-06 for the FTSE 100 and 1.1E-06 for the SP 500, implying a long-run standard deviation of daily market returns of about 0.94% and 0.84%, respectively. The log-likelihood for this model was 4401.63 and 4386.964 for the FTSE 100 and the SP 500, correspondingly. The log-likelihood ratios rejected the hypothesis of normality very strongly.

After calculating the model parameters, we begin measuring the conditional variance (volatility) for the parameter estimation period, covering 05/06/2002 to 31/07/2007, based on the conditional variance formula (2.11), where r²_{t-1} and σ²_{t-1} are the squared log-return and the conditional variance of the previous day, respectively. We then measure the predicted daily VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under the confidence levels of 99%, 97.5% and 95%, using the normal VaR formula (2.6). Again, the critical z-value of the normal distribution under the significance levels of 1%, 2.5% and 5% is computed using the Excel function NORMSINV.

3.3.2.2.3. The Student-t GARCH(1,1) model

Different from the Normal-GARCH(1,1) approach, this model assumes that the volatility (or the errors of the returns) follows the Student-t distribution. In fact, many previous studies have suggested that using the symmetric GARCH(1,1) model with the volatility following the Student-t distribution is more accurate than with the Normal distribution when examining financial time series. Accordingly, the paper additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, we use this model under the normal distributional assumption of returns. The first step is to estimate the model parameters using the method of maximum likelihood estimation, obtained via STATA (see Table 3.3).

Table 3.3 The parameter statistics of the Student-t GARCH(1,1) model for the FTSE 100 and the SP 500

Student-t GARCH(1,1)*
Parameters                 FTSE 100      SP 500
α                          0.0926120     0.0569293
β                          0.8946485     0.9354794
ω                          0.0000011     0.0000006
α + β                      0.9872605     0.9924087
Number of observations     1304          1297
Log likelihood             4406.50       4399.24

* Note: we report the results from the Student-t GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the Student-t distribution, with a significance level of 5%.

Table 3.3 identifies the same characteristics in the student-t GARCH(1,1) model parameters as in the normal-GARCH(1,1) approach. Specifically, the estimates of α and β show that evidently strong ARCH effects occurred in the UK and US financial markets during the parameter estimation period, from 05/06/2002 to 31/07/2007. Moreover, as Floros (2008) mentioned, there was also a considerable impact of old news on volatility, as well as a long memory in the variance. We then follow similar steps as for calculating VaRs using the normal-GARCH(1,1) model.
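For readers working in Python rather than STATA, a hedged sketch with the arch package (an assumed substitute; the thesis's STATA setup is not reproduced here). Passing dist="t" switches to the Student-t errors of section 3.3.2.2.3:

```python
import numpy as np
from arch import arch_model

def fit_garch(r: np.ndarray, dist: str = "normal"):
    """Fit a GARCH(1,1) by maximum likelihood; dist='t' gives the
    Student-t GARCH(1,1). Scaling returns by 100 helps the optimiser."""
    res = arch_model(r * 100, mean="Zero", vol="GARCH", p=1, q=1, dist=dist).fit(disp="off")
    return res  # res.params holds omega, alpha[1], beta[1] (plus nu when dist='t')

# res_n = fit_garch(r)       # r: daily log-returns (assumed input)
# res_t = fit_garch(r, "t")  # Student-t errors
```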
3.3.2.3. Parametric approaches under the normal distributional assumption of returns, modified by the Cornish-Fisher Expansion technique

Section 3.3.2.2 measured the VaRs using the parametric approaches under the assumption that the returns are normally distributed. Regardless of their results and performance, it is clear that this assumption is impractical, given that the collected empirical data exhibit fatter tails than the normal distribution. Consequently, in this section the study intentionally employs the Cornish-Fisher Expansion (CFE) technique to correct the z-value from the normal distribution assumption so as to account for fatter tails. Again, the question of whether the proposed models perform well within the recent turbulent period will be assessed at length in chapter 4.

3.3.2.3.1. The CFE-modified RiskMetrics

Similar
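As a sketch of the z-value correction described in this section, assuming the standard fourth-order Cornish-Fisher form (an assumption on my part; the thesis's exact variant is not shown here):

```python
from scipy.stats import norm

def cornish_fisher_z(alpha: float, skew: float, ex_kurt: float) -> float:
    """Adjust the normal quantile for skewness and excess kurtosis
    (fourth-order Cornish-Fisher expansion)."""
    z = norm.ppf(alpha)
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * ex_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)

# With the FTSE 100 moments from Table 3.1 the 1% quantile moves well below norm.ppf(0.01):
# cornish_fisher_z(0.01, -0.0978, 7.0322)
```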