Value at Risk

ABSTRACT
Risk management has become more and more important to financial institutions and regulators after repeated financial crises, especially the one we are suffering through now. Before the recent financial crisis, Value at Risk (VaR) was a simple and widely accepted tool for measuring and managing risk, popular over the last 15 years since JP Morgan published its RiskMetrics methodology, but recently more and more analysts have doubted its usefulness and efficiency during financial crises.
In this study we employ four widely used approaches to estimate VaR for three different financial assets at three different confidence levels and test their performance. The approaches are the historical simulation approach, the moving average approach, the GARCH normal approach and the GARCH Student t approach; the three financial assets are the S&P 500, Brent oil and the United States three month Treasury bill; the three confidence levels are 95%, 99% and 99.9%. The paper has two main purposes: the first is to test the performance of the four approaches and see which is superior to the others; the second is to analyze the results and ask whether VaR can measure and manage risk effectively, especially during a financial crisis. The data are daily returns of the three financial assets from 1st Jun 1989 to 29th May 2009. The results show that the GARCH Student t approach is superior to the other three approaches in most cases; it is the only approach that did not underestimate risk, and in some cases it even overestimated risk. We conclude that VaR can measure and manage risk in any time period if we employ the proper approach for the proper financial asset at the proper confidence level.
Introduction
In light of the recent financial crisis, risk management has drawn very high attention from regulators and financial institutions, who are reviewing their risk measurement tools and imposing stricter measures to control risk. Value at Risk (VaR) is a simple and widely used tool to measure and manage risk; it has been popular for the last 15 years, since JP Morgan published its RiskMetrics methodology, but recently more and more analysts have doubted its usefulness and efficiency during financial crises. In this study we try to answer that question by testing the performance of VaR over different time periods.
In finance, the concept of risk refers to the volatility of unexpected outcomes. It includes business risk, strategic risk and financial risk. Business risk and strategic risk relate to a company's product markets or its economic and political environment; financial risk is the risk associated with financial market activities. Financial risk can be further divided into several sub-categories. First is market risk, the risk due to changes in market prices. Second is credit risk, the risk that counterparties are unable to fulfill their contractual obligations. Third is liquidity risk, the risk of inability to meet payment obligations. Fourth is operational risk, which stems from internal staff or system failures or from external events. Fifth is legal risk, the risk arising from unlawful transactions. This paper focuses only on financial risk, and more specifically on how this type of risk can be captured through the four most commonly used methods of estimating Value-at-Risk (VaR), based on three financial assets with different characteristics and at different confidence levels.
Historical Simulation Approach
Unlike parametric VaR models, the historical simulation (HS) model makes no specific assumption about the distribution of asset returns; it is a nonparametric approach. The VaR number produced by historical simulation is easy to understand, so it is readily accepted by management and the trading community. The method assumes that current positions will replay the record of history, and it is relatively easy to implement. In the simplest case, historical simulation applies the current portfolio weights to a time series of historical asset returns (Jorion, 1995).
There are several advantages to historical simulation. First, it is simple to implement when historical data on risk factors have already been collected in-house for daily marking to market. Second, it accounts for the fat tails that are present in the historical data. Third, it allows a free choice of horizon for measuring VaR. It is also intuitive: users can go back in time and explain the circumstances behind the VaR measure (Best, 1998).
On the other hand, the historical simulation approach has a number of drawbacks. Because the value of the portfolio changes over time, the percentage value changes no longer refer to the original portfolio value. One problem is that extreme percentiles are difficult to estimate precisely without a large sample of historical data. Another is that asset prices often exhibit trending behavior. One solution to the trend problem is to impose symmetry on the portfolio value distribution by taking the negatives of the profits and losses used in standard historical simulation, which doubles the data used in computing the percentiles and eliminates the trend (Holton, 1998).
The Variance-Covariance Approach
The variance-covariance approach is the simplest VaR method in terms of the calculation required. Global banks typically use it to aggregate data from a large number of trading activities. It is also widely used by banks with comparatively low levels of trading activity, and it was the first VaR model to be provided in off-the-shelf computer packages. The variance-covariance approach rests on the assumption that financial asset returns, and hence portfolio profits and losses, are normally distributed (Cassidy & Gizycki, 1997).
Define Rt to be the vector of market returns at time t and let Σt represent the variance-covariance matrix of those returns. A standard assumption of the variance-covariance model is that returns have zero mean, which matches standard market practice. Supporting this assumption, Jackson (1997) points out that the estimation error associated with poorly determined mean estimates may decrease the efficiency of the variance-covariance matrix estimate. Because we are not considering complex derivatives, the return on a portfolio of foreign-exchange positions can be expressed as a linear combination of exchange-rate returns, so the change in portfolio value is explained by its sensitivity to each risk factor.
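To make the mechanics concrete, here is a minimal Python sketch of this linear, zero-mean variance-covariance VaR. The return history and portfolio weights are placeholder inputs, not data from this study.

```python
# A minimal sketch of variance-covariance VaR for a linear portfolio,
# assuming zero-mean, normally distributed returns.
import numpy as np
from scipy.stats import norm

def varcov_var(return_history, weights, confidence=0.99):
    sigma = np.cov(return_history, rowvar=False)   # variance-covariance matrix
    port_sd = np.sqrt(weights @ sigma @ weights)   # sqrt(w' Sigma w)
    return norm.ppf(confidence) * port_sd
```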
The Fixed-weight Specification
The fixed-weight approach assumes that return variances and covariances are constant over the period. Hence, future variances and covariances are forecast to equal the sample variances and covariances calculated over a fixed-length data history.
If return variances and covariances are constant, the unbiased and efficient estimator of the population variance-covariance matrix uses all available data, with each observation weighted equally. A limiting case of the fixed-weight approach is the random-walk model, which restricts the past data period to just one observation (i.e. T = 1). This specification assumes that Σt follows a random walk, and it is motivated by much empirical work with asset returns suggesting that relatively old data should be ignored (Engel and Gizycki, 1998).
Multivariate GARCH
Bollerslev (1986) described the generalized autoregressive conditional heteroskedasticity (GARCH) models, which capture volatility clustering. These models allow both autoregressive and moving-average behavior in variances and covariances.
A drawback of multivariate GARCH models is that as the number of risk factors increases, the calculation rapidly becomes intractable, so it is necessary to impose restrictions before engaging in estimation.
Monte Carlo Simulation
The Monte Carlo simulation method is a parametric approach in which positions can be priced using full valuation, with random movements in the risk factors generated from estimated parametric distributions. The approach proceeds in two steps.
First, the risk manager specifies a parametric stochastic process for all risk factors. Second, different price paths are simulated for all the risk factors. At each horizon considered, the portfolio is marked to market using full valuation as in the historical simulation method, that is, V*k = V(S*i,k). The Monte Carlo method is therefore similar to the historical simulation approach, except that the hypothetical changes in price ΔSi for asset i are created by random draws from a prespecified stochastic process instead of being sampled from historical data (Jorion, 1995).
Monte Carlo methods introduce an explicit statistical model and apply mathematical techniques to generate a large number of possible portfolio-return outcomes. The approach can take into account events that could probably occur but were, in fact, never observed over the historical period; one of its main advantages is that it evaluates a richer set of events than is contained within past history. To implement the Monte Carlo method, a statistical model of the asset returns must be selected. We apply the Monte Carlo method with two statistical models: a simple normal distribution and a mixture of normal distributions.
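As an illustration of the two steps, the sketch below draws risk-factor returns from a prespecified normal process and reads the VaR off the simulated changes in portfolio value. The covariance matrix and weights are placeholders estimated elsewhere.

```python
# A minimal sketch of Monte Carlo VaR for a linear portfolio under a
# normal model.
import numpy as np

def monte_carlo_var(sigma, weights, confidence=0.99, n_draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    mean = np.zeros(len(weights))
    paths = rng.multivariate_normal(mean, sigma, size=n_draws)  # step 1: simulate factors
    pnl = paths @ weights                                       # step 2: revalue portfolio
    return -np.percentile(pnl, 100 * (1 - confidence))
```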
Monte Carlo Methods Using Normally-Distributed Asset Returns
The first implementation of the Monte Carlo approach applies the assumption that asset returns are normally distributed. The variance-covariance matrix is estimated using the fixed-weight variance-covariance approach, and the VaR estimate is given by the appropriate percentile of the resulting changes in portfolio value. Because this method uses the same distributional assumptions as the variance-covariance method, the results should be close to those obtained from the fixed-weight variance-covariance approach.
Zangari (1996) proposes a Monte Carlo approach that makes use of a mixture of normal distributions in order to reproduce the fat-tailed nature of asset returns. The assumption implies that each asset-return realization is drawn from one of two distributions: one with probability p and the other with probability (1 - p). The parameters of the mixture of normals are estimated under the restriction that both distributions have zero means.
Unfortunately, Hamilton (1991) showed that this likelihood function does not have a global maximum: when one of the observations is exactly zero, the likelihood is infinite. Although Hamilton has provided Bayesian solutions to this problem, our approach was to restart the estimation procedure with various starting values. Once the parameters have been estimated, the standard Monte Carlo model is used to obtain the VaR. In the mixed distribution, observations are simulated by drawing a proportion p of the observations from the first distribution and a proportion (1 - p) from the second distribution.
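A minimal sketch of the simulation step, assuming the mixture parameters p, sigma1 and sigma2 have already been estimated (they are placeholders here):

```python
# Sampling from a zero-mean mixture of two normal distributions.
import numpy as np

def mixture_draws(p, sigma1, sigma2, n_draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    from_first = rng.random(n_draws) < p          # pick a regime for each draw
    return np.where(from_first,
                    rng.normal(0.0, sigma1, n_draws),
                    rng.normal(0.0, sigma2, n_draws))
```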
The problem of non-linearity is often addressed by approximation, using a second-order Taylor series expansion. This approach brings two main problems. First, the Taylor series cannot capture all non-linearities well enough, especially for relatively large movements in the stock price, which are exactly what matter in a risk management setting. Second, the normality of portfolio returns is lost, which is what makes the delta model computationally efficient and easy to implement. Comparing three approximations with a full valuation model with respect to accuracy and computational time, Pritsker (1997) finds that in 25% of cases a Monte Carlo simulation using the second-order Taylor series underestimated the true VaR, by an average of 10%. Assuming no limit on computational time, a full valuation model considers all non-linear relationships; such a model implements the VaR computation through a Monte Carlo simulation, staying within the Black-Scholes framework of constant volatility for the stock price movement.
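For reference, the second-order Taylor approximation mentioned above amounts to the following delta-gamma P&L formula; delta and gamma here are placeholder sensitivities from a pricing model, not values from this study.

```python
# The delta-gamma approximation of an option position's P&L.
def delta_gamma_pnl(dS, delta, gamma):
    """Second-order Taylor approximation: dV ~ delta*dS + 0.5*gamma*dS^2."""
    return delta * dS + 0.5 * gamma * dS ** 2
```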
Methodology
Analytical Approach
This study takes an analytical approach, seeking to establish results by assuming an independent reality. Arbnor & Bjerke (1994) point out that the characteristic of this approach is its cyclic nature: it begins and ends with facts, and those facts can lead to the start of a new cycle. Applied to this study, this means selecting a good model to describe the objective reality, or testing whether a model is a good fit for describing that reality. In addition, the approach has a quantitative character and involves some complicated mathematical computation applied to the model.
Quantitative Approach
We apply the chosen approaches to a large amount of empirical data to estimate VaR, which means the results come from testing and analyzing a long history of data. This marks the study as taking a quantitative approach. According to Arbnor & Bjerke (1994), a quantitative approach is much clearer about the variables and covers a far greater amount of historical data than a qualitative approach. It also assumes that the theoretical concepts can be measured. A large body of empirical data is collected and tested to measure whether the approaches can estimate VaR precisely.
Deductive Approach
A deductive approach begins with a general concept, given rule or existing theory and then moves on to a more specific conclusion. Woolfolk (2001, p. 286) describes this approach as "drawing conclusions by applying rules or principles; logically moving from a general rule or principle to a specific solution".
In this study, we test the performance of four commonly used VaR approaches on three underlying assets with different characteristics, at different confidence levels. The purpose is to examine the models, not to create a new model for estimating VaR.
The final conclusion might strengthen the case for some approaches on some specific underlying assets, and weaken it for other approaches on other underlying assets at different confidence levels.
Reliability
All the empirical data used in this study can be checked in public sources, and there is a certain amount of previous research on the approaches used here to estimate VaR. Anyone can check whether the results are reliable by reproducing them and comparing them with those reported in this study; if they are not the same, the study is not reliable.
Validity
It is very important to show validity when justifying an approach or a model. If the results cannot tell the truth about reality, the results produced by the approach or model are not valid and are not meaningful. In other words, the degree of validity depends on how close we get to a true picture of the reality of a given situation.
To demonstrate validity, it is vital to know the relation between the theory and the data. If the data fit the theories in a continuous way, the theory or model employed for the study has strong validity; this is confirmed by the study of Holme & Solvang (1991). In this paper, different approaches are chosen to estimate VaR based on three different assets and empirical time series data at three confidence levels. Validity will be enhanced if the data fit the approaches or models consistently.
Estimation of VaR
In this study, four approaches are employed to estimate VaR for three different underlying assets. Ideally, the estimated VaR would fit the future value of returns, but in practice an approach might overestimate or underestimate VaR compared with the actual returns. Taking banking as an example: if VaR is overestimated, banks hold excessive capital to cover losses under the regulation of the Basel II accord, while if VaR is underestimated, capital might fail to cover unexpected losses. This is one reason why some American banks went bankrupt during the recent financial crisis.
The four approaches are the historical simulation approach, the moving average approach, the GARCH normal approach and the GARCH Student t approach. The underlying assets being analyzed are Brent oil, the S&P 500 and the United States three month Treasury bill.
When using the parametric approaches to estimate VaR, we suspect that the returns of the underlying assets may not fit our distributional assumptions, such as the normal distribution for the moving average approach and the GARCH normal approach. According to Jorion (2007), economic time series are rarely normally distributed, so these parametric approaches will be less efficient the further the underlying assets are from a normal distribution.
Historical Simulation Approach
Estimating VaR using the historical simulation approach does not require complicated mathematical calculations, but it does require a lot of historical data. As presented in Chapter 2, the right window size is critical: if the empirical window is too short, the VaR estimate varies greatly, while a longer window produces a more stable estimate but the older empirical data may have low relevance to future returns.
The first task in this approach is to select an empirical window length for forecasting future returns. We select a moving window of the previous 2000 observations, which is about eight calendar years. The window length was chosen given a total sample size of more than 5000 observations for the three underlying assets and the confidence levels of 95%, 99% and 99.9% used in this study. This window length should produce better performance at the higher confidence levels.
We use the PERCENTILE function in Excel to calculate the n-percent percentile of a time series. The value returned is usually not an exact value in the data set; Excel calculates the desired value between the two closest values by linear interpolation. The results are shown in a later chapter.
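For readers who prefer code to spreadsheet formulas, here is a minimal Python sketch of the rolling historical simulation just described; NumPy's percentile function interpolates linearly, like Excel's PERCENTILE. The simulated return series is a stand-in for the real data.

```python
# Rolling historical-simulation VaR over a 2000-observation window.
import numpy as np

def historical_var(returns, window=2000, confidence=0.99):
    """VaR on each day is the (1 - confidence) percentile of the
    previous `window` returns, reported as a positive loss figure."""
    var = []
    for t in range(window, len(returns)):
        var.append(-np.percentile(returns[t - window:t], 100 * (1 - confidence)))
    return np.array(var)

rng = np.random.default_rng(0)
fake_returns = rng.normal(0, 0.0138, 5000)  # ~1.38% daily vol, stand-in data
print(historical_var(fake_returns)[:5])
```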
Moving Average Approach
As with the historical simulation approach, the first task is to choose a window size. In this study we choose 45 days, which is nine calendar weeks, for calculating the standard deviation.
It is easy to use the STDEV function in Excel to calculate the standard deviation over a moving 45-day window, and then to apply the result to the parametric VaR formula to get the value of VaR. The results are shown in a later chapter.
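An equivalent sketch of the moving average calculation in Python, assuming normally distributed returns as the approach requires:

```python
# Rolling 45-day standard deviation times the normal quantile.
import numpy as np
from scipy.stats import norm

def moving_average_var(returns, window=45, confidence=0.99):
    z = norm.ppf(confidence)                       # normal critical value
    var = []
    for t in range(window, len(returns)):
        sigma = returns[t - window:t].std(ddof=1)  # like Excel's STDEV
        var.append(z * sigma)
    return np.array(var)
```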
GARCH Normal Approach
As with the moving average approach, we need to calculate the standard deviation before calculating the final VaR, but before doing that we have to estimate the parameters ω, α and β. The parameters are estimated by maximum likelihood estimation (MLE). This is a challenging job because the previous literature in this field only describes the MLE function but does not show how to implement it.
In this study, we estimate the parameters in EVIEWS. The question is then how to decide the moving window size. As the MLE function also assumes that the returns are normally distributed, the smaller the window size, the larger the risk that the estimates move away from the normal-distribution values. We first estimated the values with window sizes of 1000, 2000, 3000, 4000 and 5000 respectively, and found that the values for a window size of 3000 were closest to the values shown by Jorion (2007) for similar financial assets, so we take 3000 as the window size for estimating these three parameter values. The small difference between our estimated values and those advised by Jorion (2007) arises because we use a different time period and the underlying assets are not exactly the same, and we believe the results are reliable. The values and the EVIEWS output are shown in Appendix 1.
Once we have the values of the parameters ω, α and β, we input them into the GARCH(1,1) formula to calculate the volatility; the VaR is then estimated from the volatility accordingly, and the results are shown in the results and analysis chapter.
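A minimal sketch of the GARCH(1,1) recursion and the resulting normal VaR; the parameter values would come from the EVIEWS estimates in Appendix 1 and are placeholders here.

```python
# GARCH(1,1) volatility recursion and normal VaR.
import numpy as np
from scipy.stats import norm

def garch_normal_var(returns, omega, alpha, beta, confidence=0.99):
    """sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1];
    VaR[t] = z * sigma[t] under the normal assumption."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()                # start at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return norm.ppf(confidence) * np.sqrt(sigma2)
```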
GARCH Student t Approach
Under this approach, we first estimate the parameters ω, α and β; this is the same job as for the GARCH normal approach. We did it in EVIEWS and the results are shown in Appendix 1.
Once we have the parameter values, we use them to estimate the volatility and then calculate VaR with the critical value of the Student t distribution (Jorion, 2007). All these calculations are done in Excel and the VaR results are shown in a later chapter.
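The Student t step can be sketched as follows; the degrees of freedom nu is a placeholder, and the t quantile is rescaled to unit variance so it can multiply the GARCH volatility directly.

```python
# Student t critical value applied to a GARCH volatility series.
import numpy as np
from scipy.stats import t

def student_t_var(sigma, confidence=0.99, nu=6):
    # A standard t has variance nu/(nu-2); rescale the quantile to unit variance.
    t_crit = t.ppf(confidence, df=nu) * np.sqrt((nu - 2) / nu)
    return t_crit * np.asarray(sigma)
```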
Skewness
Skewness indicates how a distribution looks compared with the normal distribution. The normal distribution is symmetric about its mean, and its skewness is 0. As mentioned before, financial asset returns are not exactly normally distributed; they may have positive or negative skewness. The graphs below show negative and positive skew.
Negative skew means a longer left tail, with most of the distribution concentrated to the right of the mean; positive skew is the mirror image. Skewness matters because the moving average approach and the GARCH normal approach assume the underlying assets are normally distributed; if the underlying assets are heavily skewed, their VaR estimates will be less accurate. The two approaches may underestimate or overestimate VaR depending on the skew of the underlying asset's returns.
Kurtosis
A normal distribution is symmetric and has an excess kurtosis of 0. A high kurtosis indicates that the underlying asset's distribution contains more extreme values than the normal distribution. Positive excess kurtosis is called leptokurtic and negative excess kurtosis is called platykurtic. Under a normality assumption, VaR will tend to be underestimated when the excess kurtosis is positive.
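These moments are easy to check directly; a minimal sketch using SciPy, with fat-tailed simulated data standing in for the real series:

```python
# Skewness and excess kurtosis of a return series.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=5, size=5000)   # heavy-tailed stand-in

print("skewness:", skew(returns))            # 0 for a normal distribution
print("excess kurtosis:", kurtosis(returns)) # > 0 means fatter tails than normal
```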
The Source of Data
In this study, we use daily time series data for three financial assets, covering the period from 1st Jun 1989 to 29th May 2009 for all three. The first two thousand observations (from 1989 to 1997) are used as historical data for forecasting; the rest of the data are classified into two periods. Period 1 runs from 1997 to 2009 (more than 3000 observations) and represents a span including both normal times and the financial crisis, while period 2 runs from 2008 to 2009 (about 355 observations) and represents the financial crisis period. The division into two periods serves the purpose of this study.
The data we collect are the historical daily prices of the three assets: the S&P 500 index, Brent oil and the US three month Treasury bill. The data are easy to obtain from public sources: the S&P 500 from Yahoo Finance (http://finance.yahoo.com), Brent oil from the Energy Information Administration (http://www.eia.doe.gov) and the US three month Treasury bill from the Federal Reserve Bank of St. Louis (http://research.stlouisfed.org).
S&P 500
The S&P 500 is a value-weighted index consisting of 500 large-cap United States companies. It can be viewed as a well-diversified portfolio, so its volatility is not as high as that of Brent oil or the US three month Treasury bill. Figure 3.6a shows the daily returns of the S&P 500. From table 3.5 we see that the average daily volatility of the S&P 500 is 1.38% (21.74% annualized), indicating that its price changes are fairly stable. The skewness of the S&P 500 is -0.15 and the average daily ln price change is 0.94%, so the distribution of the S&P 500 matches the normal distribution reasonably well if kurtosis is not considered.
The kurtosis is 7.33, showing that the distribution is narrower than the normal distribution and has fatter tails. This kurtosis lies between those of Brent oil and the US three month Treasury bill; taking the skewness values into account as well, Brent oil is the asset that best matches the normal distribution, followed by the S&P 500 and then the US three month Treasury bill. This can be seen by comparing the histograms in figures 3.6b, 3.7b and 3.8c. We can therefore expect the S&P 500 to perform between Brent oil and the US three month Treasury bill under the moving average and GARCH normal approaches. Its high kurtosis makes this asset perform less well under parametric approaches that assume normally distributed returns.
Brent Oil
Brent oil prices are the second most volatile among the three underlying assets, as can be seen by comparison with the other two assets in table 3.5. The daily volatility of Brent oil is 2.71% (43.02% annualized), showing that it is much more volatile than the S&P 500.
The kurtosis of 4.46 indicates that the distribution is slightly narrower, with slightly fatter tails, than the normal distribution, while the skewness of -0.18 shows that the distribution is fairly symmetric. Together, the kurtosis and skewness indicate that Brent oil is the asset that best fits the normal distribution, as can be seen in figure 3.7b. Brent oil should therefore perform better under the parametric approaches than the other two assets. The average daily ln price change is 1.94%, showing that this is a highly volatile asset; it may perform less well under the nonparametric approach, which is not good at handling assets with high volatility.
US Three Month Treasury Bill
The US three month Treasury bill is the most volatile of the three assets employed in this study: its daily volatility is 8.74% and its average ln price change is 2.32%, while its skewness is -3.27 and its kurtosis a very high 360.84. These values show that the return distribution of this asset fits the normal distribution poorly, as can be seen in figure 3.8c.
From figure 3.8b, the US three month Treasury bill was not a highly volatile asset before 2007, but since 2007 its volatility has risen sharply, as can be seen in figure 3.8a. If the chosen period ended before 2007, its volatility would be lower than that of Brent oil, but such a period would not match the purpose of this study, which focuses on a span including the recent financial crisis. Given the above characteristics, this asset can be expected to perform the worst under both nonparametric and parametric approaches.
Autocorrelation
For time series data, it is important to check for autocorrelation, also called serial correlation. Autocorrelation means that the data correlate with themselves over time; it can be measured by a one-lag Durbin-Watson test in a regression. The existence of autocorrelation indicates that the employed approach fits the time series poorly, in that today's price cannot be described as a linear function of yesterday's price. Tsay (2002) states that when a time series is autocorrelated, factors other than historical prices affect today's price, and the results will be less effective if we use these approaches to forecast future prices.
The null hypothesis is rejected if the DW value lies outside the interval between DL and DU. Table 3.9 shows that all three assets fall within the interval, so the null hypothesis is not rejected and there is no evidence of autocorrelation in these three time series.
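A minimal sketch of this one-lag Durbin-Watson check, using statsmodels for the DW statistic; the exact regression specification is our illustration, not necessarily the study's.

```python
# Regress today's value on yesterday's and compute the DW statistic.
import numpy as np
from statsmodels.stats.stattools import durbin_watson

def one_lag_dw(series):
    y, x = series[1:], series[:-1]
    X = np.column_stack([np.ones(len(x)), x])   # intercept plus one lag
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return durbin_watson(residuals)             # values near 2 suggest no autocorrelation
```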
Backtesting
The best value for the number of exceptions equals the number of observations times one minus the selected confidence level, and the regions give the acceptable interval for the exceptions of VaR. The closer the number of exceptions is to the best value, the better the approach performs. If the exceptions are far above the region, the approach greatly underestimates future risk; if far below it, the approach greatly overestimates future risk.
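The exception count itself reduces to a one-line comparison; a minimal sketch, assuming the VaR forecasts are aligned day-by-day with the realized returns:

```python
# Count VaR exceptions against the "best" expected number.
import numpy as np

def count_exceptions(returns, var_forecasts, confidence=0.99):
    exceptions = int(np.sum(-returns > var_forecasts))   # loss exceeded VaR
    best = len(returns) * (1 - confidence)               # the best value
    return exceptions, best
```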
Data Results & Analysis
Backtesting Results of Christoffersen
The backtesting results of the Christoffersen test for the three underlying assets and four approaches are shown below. The results are analyzed and discussed by asset, approach, time period and confidence level; the summary results of the Christoffersen test for the four approaches are shown in Appendix 2. We chose two data periods over which to test VaR performance. Period 1 runs from Apr 1997 to May 2009 (more than 3000 observations) and represents a span including both normal economic times and the financial crisis. Period 2 represents the recent financial crisis, from 2008 to 2009 (355 observations), and was selected to see how VaR performs in it compared with period 1. One might ask whether period 2 makes sense with about 355 observations at a confidence level of 99.9%. It is true that 355 observations is not a good sample size for a 99.9% confidence level, but our purpose here is to see whether the approaches underestimate risk during the financial crisis, which is indicated by exception counts above the region; we are not focusing on whether the approaches overestimate risk at the 99.9% confidence level, because the number of observations is too small.
Historical Simulation Approach
For the S&P 500, this approach produces bad results in period 1 and terrible results in period 2; it cannot estimate the riskiness of this asset properly. At the 99% and 99.9% confidence levels, the number of exceptions is twice the region's maximum limit in period 1, and the figures for period 2 are even worse. This indicates that the approach estimates VaR poorly for the S&P 500 in both periods. Note that both periods include the recent financial crisis, which affects the results given that the approach is not good at predicting risk during extreme periods. For example, at the 99.9% confidence level, if the span excludes the financial crisis (period 1 minus period 2), the result is within the region, meaning the approach works at 99.9% confidence in normal times.
For Brent oil, the approach also produces bad results, though better than for the S&P 500; it still underestimates risk for this asset. The number of exceptions is slightly above the region at the 95% and 99% confidence levels and within the region at 99.9% in period 1. In period 2 it works poorly at the 95% and 99% confidence levels, while at 99.9% no clear conclusion can be drawn because the number of exceptions is within the region and the sample size is too small. If the span excludes period 2, the results are within the regions at the 99% and 99.9% confidence levels, meaning this approach can produce acceptable results for this asset during the normal period before the recent financial crisis at those confidence levels.
For the US three month Treasury bill, this approach performs the worst, meaning it is not appropriate for estimating the risk of this asset. The figures are unacceptable in both periods at all confidence levels. The reason is that this asset is the most volatile of the three, with a daily volatility of 8.74% and a very high kurtosis. Because the historical simulation approach weights all returns equally, it takes time to react to extreme fluctuations in returns. Another reason for the bad results may be the choice of window length: 2000 empirical returns may carry too much old information that is irrelevant for forecasting the returns of such a highly volatile asset, especially for estimating risk since 2007.
The results indicate that this approach performs very poorly at the 95% confidence level and poorly at the 99% confidence level for all three assets; for the US three month Treasury bill in particular, the figures show it greatly underestimates risk regardless of the period. At the 99.9% confidence level, the approach works to some extent for Brent oil, and it also works for the S&P 500 in the normal period (the span excluding period 2). Overall, the historical simulation approach performs very poorly in this test.
The approach assumes that the future is identical to the past, but with more and more uncertainty affecting the future, future volatility is evidently not identical to past volatility; that is why this approach produces very bad results for the three assets. Among the approaches in this study, the historical simulation approach performs the worst. This does not mean the approach is invalid for estimating VaR: it can still be used at higher confidence levels with suitable assets, such as Brent oil, but not with assets like the US three month Treasury bill that have very high volatility.
Moving Average Approach
This approach assumes the returns of the underlying assets follow a normal distribution, which is not realistic: the skewness and kurtosis figures in chapter three show that the returns of most financial assets, including those studied here, are not normal. Because the three assets are not truly normally distributed, the performance of this approach depends on how closely each asset's distribution resembles the normal distribution.
For the S&P 500, this approach does not perform well in either period 1 or period 2, and for all three confidence levels it produces its worst results on this asset. The reason may be the window size: a 45-day window fits Brent oil and the US three month Treasury bill better because they are more volatile than the S&P 500. A different window size might have provided better results.
For Brent oil, this approach produces very good results in period 2, and it works well at the 95% confidence level in period 1. This indicates that the approach can perform well at a low confidence level for assets like Brent oil, and that it can also work during extreme periods for this asset. The results are good because the characteristics of this asset's returns resemble the normal distribution.
For the US three month Treasury bill, this approach performs well only at the 95% confidence level, in both periods. This again indicates that the approach can produce better results at a low confidence level, even for an asset with very high skewness and kurtosis. At high confidence levels it performs badly regardless of the period. Another reason this approach cannot produce good results for this asset is volatility clustering: the daily return graph in chapter three reveals that the US three month Treasury bill exhibited severely clustered extreme values during the recent financial crisis, and the moving average approach does not take the clustering phenomenon into consideration.
Among the approaches in this study, the moving average approach performs well at low confidence levels and is more appropriate for assets like Brent oil whose return distribution resembles the normal distribution, though it performs poorly at higher confidence levels. Compared with the GARCH approaches, it is less efficient at estimating VaR.
GARCH normal Approach
The GARCH normal approach also works under the assumption of normality, so, as with the moving average approach, the results may be less efficient when an asset's distribution deviates from the normal distribution. Unlike the moving average approach, however, the GARCH model takes the clustering phenomenon into consideration, which should enable it to produce better results than the moving average model.
The two tables above show that this approach performs very well in period 1 at the 95% confidence level, with the number of exceptions more or less at the best target number for all three assets. At higher confidence levels it produces bad results in both period 1 and period 2, with the exception of Brent oil. The results again demonstrate that parametric approaches built on the normal assumption perform better when the distribution of the underlying asset's returns resembles the normal distribution.
Although the GARCH model can deal with clustering, the results of the GARCH normal approach are not greatly superior to those of the moving average approach. Goorbergh & Vlaar claim that volatility clustering is the most important characteristic when calculating VaR; however, in the Christoffersen tests of the moving average and GARCH normal approaches, it did not make a big difference. We agree that volatility clustering is an important characteristic when estimating VaR, but not the most important one: that is the distribution of the underlying assets. Because both the moving average approach and the GARCH normal approach assume normality, neither produces fully acceptable results when the underlying assets are not in fact close to normally distributed. The results of these two approaches support this statement.
Another factor in this approach is the estimation of the parameters by MLE. The values of ω, α and β affect the VaR estimates. In this study we take 3000 observations to estimate the parameters in EVIEWS, which might not precisely capture the true parameter values. Also, because the maximum likelihood function assumes a normal distribution, it might make this approach less efficient.
Across the four approaches, the GARCH normal approach produces better results than the historical simulation and moving average approaches in both period 1 and period 2 at the 95% and 99% confidence levels, but it performs poorly at the higher confidence level. Compared with the historical simulation and moving average approaches, the GARCH normal approach deals better with the volatility clustering phenomenon by using a more advanced way of estimating volatility, making the estimates more precise. But under the normality assumption its performance is not powerful: it still underestimates risk at the higher confidence level and during the extreme period. The GARCH Student t approach deals with this problem much better.
GARCH with a student t-distribution
Unlike the two parametric approaches above, this approach assumes a Student t distribution, which allows the underlying assets to have heavy tails. This is more realistic for financial assets and enables the GARCH model to produce better results. In addition, the approach takes the volatility clustering phenomenon into account.
It can be seen from the two tables above that the GARCH Student t approach produces better results for all the assets at all confidence levels in both periods. The number of exceptions is always below the maximum value of the region, implying that the approach does not underestimate future risk at any of the three confidence levels, for any underlying asset, in any period. This makes it the most powerful of the four approaches for estimating VaR. As with the GARCH normal approach, the estimation of the parameters ω, α and β affects the accuracy of the VaR estimates. Still, from the numbers of exceptions shown in the two tables, it can be observed that the approach captures the risk completely and may sometimes overestimate it.
In period 1, the number of exceptions at the 95% and 99% confidence levels falls below the minimum value of the region; this is rejected by the Christoffersen test because risk is largely overestimated, making the approach too conservative at these two confidence levels. At the 99.9% confidence level it performs quite well, with the number of exceptions more or less at the best expected value. The approach is therefore too conservative in period 1 at the 95% and 99% confidence levels and quite acceptable at the 99.9% confidence level.
In period 2, it performs better than in period 1, showing reasonable performance at the 95% and 99.9% confidence levels for all three assets and implying that it does quite well during the financial crisis. It is, however, too conservative at the 99% confidence level for Brent oil and the US three month Treasury bill during the extreme period.
The only difference between this approach and the GARCH normal approach is the assumed distribution of asset returns, yet it produces completely different results. This again demonstrates that the distribution is the characteristic that most affects VaR estimation. Because the three underlying assets are not in reality normally distributed, the moving average approach and the GARCH normal approach produce poor results. The three assets may not exactly follow a Student t distribution either: their tails are heavier than the normal distribution's but not as heavy as the Student t's, which is why the GARCH Student t approach overestimates risk at the 95% and 99% confidence levels in period 1.
Of the four approaches, this one produces the best results when estimating VaR. It does, however, overestimate risk at the lower confidence levels in period 1, meaning it is too conservative there; such a characteristic is not welcomed by firms, because they do not want to hold large capital reserves against risk that does not actually exist. On the other hand, it performs very well at the higher confidence level regardless of the period, especially during the financial crisis.
Validity
We want to stress once again the validity of the findings, because it is really important for any study. We maintain that the approaches we applied and the results we obtained are highly valid. We consider two aspects of validity: surface validity and external validity.
Regarding surface validity, our conclusions might conflict with previous studies, or previous studies may simply not have been done under the same conditions, because we use four approaches to estimate VaR for three underlying assets with different characteristics, at three different confidence levels, over time periods that run very close to the present (time period 1 is from 1st Jun 1989 to 29th May 2009, time period 2 from 1st Jan 2008 to 29th May 2009). In addition, the approaches span parametric and nonparametric techniques, with the parametric approaches assuming either the normal or the Student t distribution.
Previous studies show that the historical simulation approach is simple but can still produce relatively good results; it performed poorly in this study because of the time period and the assets chosen. Including the recent financial crisis and a highly volatile asset like the US three month Treasury bill affects the results of the historical simulation approach. In addition, some studies find that the moving average approach outperforms the GARCH normal approach, and others the reverse; in this study, the numbers of exceptions indicate that the GARCH normal approach performs better than the moving average approach. For the underlying assets, it is important to know their characteristics: for Brent oil, both the moving average approach and the GARCH normal approach do a good job in period 1 at the 95% confidence level, but for the US three month Treasury bill the results differ greatly between the parametric and nonparametric approaches.
As for internal validity, although there are some divergences between the results we expected and the actual results, the results are quite good in general. As expected, the historical simulation approach performed very poorly for the highly volatile US three month Treasury bill, Brent oil performed best of the three assets under the parametric approaches assuming normality, and the GARCH Student t approach produced results superior to the other approaches used in this study.

The general degree of validity of this study can be regarded as relatively high: even though some results differ slightly from what we expected, the overall results reflect the facts quite well. The nonparametric approach is not useful when the sample includes an extreme period, though it may perform well at high confidence levels in normal times. The parametric approach does not perform very well under the normality assumption, because the normal distribution is not realistic; under the Student t distribution it performs well, not only capturing risk completely but sometimes overestimating it.

Why Is Financial Reporting Important

Accounting, or accountancy, is a process for collecting, processing and communicating financial information. Accounting is an information system whose main purpose is to communicate financial information about economic events relating to a business organization to interested parties. Accounting information is used for decision making about possible future operations. It enables, or should enable, the efficient and effective utilization of scarce resources.
The objective of financial statements, therefore, is to satisfy the information requirements of decision makers in the acquisition and allocation of scarce resources. This in turn facilitates the operational efficiency and financial viability of business operations.
The end results of the accounting process are collectively referred to as financial statements. They include the Income Statement, the Balance Sheet and the Cash Flow Statement.
A Balance Sheet shows, in statement form, the relationship between Assets, Liabilities and Owner’s Equity at a point in time. The Balance Sheet is a representation of the Accounting Equation in action; the term “balance” refers to the equality of Assets and Equities.
The Income Statement is a financial report to the owner of the business which matches the revenue for an accounting period against the expenses of earning that income. Information is shown in the Income Statement in narrative form, with items grouped together to help with analysis and interpretation.
The Cash Flow Statement reports the performance of a business in terms of its cash movements, for instance, the cash receipts and cash payments of the business for a period.
Furthermore, the objective of the accounting information system is to satisfy the financial information requirements of the end users of accounting information. Some examples of users, with their purposes and the benefits they derive, are:

Managers
Managers make decisions relating to the allocation of scarce resources in order to meet corporate objectives. Financial information is an input into the decision-making process, enabling the achievement of objectives or goals with the limited resources available.

Owners/shareholders
Owners have the right to know all assets and profits after the repayment of the business's debts. Accounting systems are tasked with reporting financial information to owners about their ownership interest. Financial information enables them to make decisions such as expanding business operations by investing further funds, or reinvesting profit into expansion.

Short-term creditors
Lenders of finance are mainly concerned with the short-term liquidity of the business. They need accounting information about the business's ability to repay its debts.

Long-term creditors
Before a business is given long-term finance, it must be able to prove its long-term credit-worthiness by providing accounting information. Long-term creditors need to ensure the company is able to make interest payments.

Employees
Employees take an active interest in the financial position and profitability of the organization, to support claims for increased salaries.
Question 2: What makes accounting information useful?
The qualities that make financial information useful include comparability, understandability, consistency, disclosure of accounting policies, relevance, reliability, prudence, completeness, neutrality and faithful representation.
The IASB framework sets out the qualitative characteristics that make the information given in financial statements useful to all users. There are four main qualitative characteristics: understandability, relevance, reliability and comparability.
First, understandability means expressing accounting information with clarity, in such a way that it will be understandable to users who can generally be assumed to have a reasonable knowledge of business and economic activities. Information on complicated issues should not, however, be left out of financial statements merely on the argument that some users may find it hard to understand.
For accounting information to be useful, it must be relevant to a decision. Information is relevant when it influences the economic judgments of users by helping them evaluate past, present or future events, or by confirming or correcting their past evaluations. Relevant information plays two roles in helping users:
Predictive role
It helps users to look to the future.
Explaining unusual features of current performance helps users to understand future potential.
Confirmatory role
It shows users how the entity has or has not met their expectations.
In addition, the concept of materiality is intimately related to relevance, and deals with the size of an error in accounting information: the question is whether the error is big enough to affect the decision of someone relying on the information.
Accounting information is reliable when it is free from material error and can be depended upon by users to represent the economic situation or event it purports to represent. Information may be relevant but so unreliable that it could be misleading; conversely, information could be reliable but quite irrelevant. Reliability has five basic characteristics:
Prudence
It is the inclusion of a degree of caution in the exercise of the judgments needed in making estimates under conditions of uncertainty, while fair presentation remains essential to all financial statements.
Completeness
Good accounting information is complete.
That means it provides intended users with all the information necessary to fulfill their information needs and requirements.
The assumption is that there are no errors of omission.
Neutrality
Financial information must be neutral.
If financial information is not neutral, it will influence decisions or judgments in order to achieve a predetermined result or conclusion.
Faithful representation
It is important if accounting information is to be reliable.
It requires that the words as well as the figures in the financial statements match up with the actual events they represent.
Substance over form
If information is to meet the test of faithful representation, then the method of accounting must reflect the substance of the economic reality of the transaction and not just its legal form.
The qualitative characteristics of relevance and reliability are both associated with comparability. For accounting information, comparability allows a user to evaluate two or more corporations and look for similarities and differences. It also means users are able to compare the financial statements of a company over time to determine trends in its financial performance. The concept of comparability has two important components:
Consistency
It means that financial statements can be compared within a company from one accounting period to the next, or between different companies in the same period.
Disclosure of accounting policies
It means the users of financial statements must be informed of the accounting policies employed in the preparation of financial statements.
Question 3: Briefly identify the difference between an income statement and the balance sheet. Using relevant examples and a clearly illustrated format, explain what is accounted for in the balance sheet.
The income statement is a summary of the income and expenses for a period. Its preparation involves matching the income or revenue for a period against the costs or expenses for the period. The net result is the profit or loss for the period.
Besides that, the income statement reports how the owners’/shareholders’ equity increased or decreased as a result of business activities. The income statement displays the sources of net income, generally classified as revenue (value coming in from selling products) and expenses (value going out in earning revenue).

Classification in a Balance Sheet

Assets - Things of value which belong to the business and which will provide benefit in the future.
Current Assets – Assets which are cash or convertible into cash within twelve months, or which will provide a benefit to the business within twelve months, e.g. accounts receivable, inventory.
Non-current Assets – Assets which will provide benefit to the business beyond twelve months (i.e. more than one year remaining at the date of the balance sheet), for example land and buildings, machinery.
Liabilities – Amounts owing by the business which have to be repaid in the future.
Current Liabilities – Debts which have to be paid within twelve months of the date on the balance sheet, for example accounts payable, overdraft.
Non-current Liabilities – Amounts owing by the business which are payable more than 12 months after the reporting date, for example a long-term loan or mortgage.
Net Assets – Total Assets less Total Liabilities.
Net Assets equals Owner’s Equity in accordance with the Proprietorship Equation:

OE = A – L

Owner’s Equity (Capital) – The financial interest that the owner has in the business, representing any capital contributed by the owner and any profits retained in the business.
Capital = Opening Capital + Net Profit - Drawings
Format of a balance sheet
The basic format of a balance sheet is as follows:
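The original format illustration is not reproduced here; a minimal sketch with hypothetical figures, following the classifications above, might look like this:

Balance Sheet of M. Smith as at 30 June
Current Assets (accounts receivable, inventory)        20,000
Non-current Assets (land and buildings, machinery)     80,000
Total Assets                                          100,000
Current Liabilities (accounts payable, overdraft)      15,000
Non-current Liabilities (long-term loan, mortgage)     45,000
Total Liabilities                                      60,000
Net Assets                                             40,000
Owner’s Equity                                         40,000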
In conclusion, the difference between the income statement and the balance sheet lies in what each explains. An income statement covers an accounting period of a company: it describes how much value came into the organization during the period, how much went out as expenses, and what was left at the end of the period.
A balance sheet, on the other hand, is generally produced to show what an organization owns or owes at the end of the period covered by the income statement. The balance sheet shows what the organization has available to carry on production or make payments during the next period, the one covered by the next income statement.

Section B

Question: Your friend Mark is thinking of starting his own business, as he wants to be his own boss and have total control over his income, but he is not sure how to set prices for his products. He has read some books and come across terms like ‘mark-up’ and ‘profit margin’. He has sought your advice, and as an accounting student you are to explain to him how he can do appropriate pricing and make profits from his business venture. What factors would Mark need to consider to ensure the success of his business?
Mark-up and margin are measures businesses use to set and manage prices to maximize profitability. Mark-up and margin refer to the same underlying profit but calculate it against different bases: in general, mark-up is calculated on cost, whereas margin is calculated on the sales (selling) price.
Mark-up is the percentage added to the cost price to arrive at the selling price:
Mark-up % = (Gross Profit ÷ Cost) x 100
Margin is the percentage of the final selling price that is profit:
Margin % = (Gross Profit ÷ Sales) x 100
Business people usually apply mark-up when setting prices, while margin is more useful for assessing and improving the profitability of the products in a business's markets. In order to use the mark-up on inventory to predict future gross margins, the business owner must understand the distinction between mark-up and gross margin. The mark-up on inventory is frequently called the initial mark-up, because it is where the business owner begins; in order to sell the inventory, however, the owner may have to discount it, promote it or mark it down for clearance at the end of the season. Gross margin is the profit actually earned on the sale after any discounting or markdowns, so it is frequently called the maintained margin. The spread between the two is specific to the business, depending on the level of promotional and clearance activity.
For instance, suppose Mark wants to open a muffin store, and assume the cost of production is £2.00 per muffin.
To find a muffin's selling price using the mark-up method, we must know the cost per muffin. Total cost should take account of all the costs incurred in producing the muffins up to the point of sale.
The formula for determining a muffin's selling price using a preferred mark-up percentage is:
Selling Price = Total Cost x (1 + Mark-Up %)
Selling Price = £2.00 x (1 + 0.30)
Selling Price = £2.00 x (1.30)
Selling Price = £2.60
Consequently, if Mark wants a mark-up of 30% (a profit equal to 30% of total cost), the selling price must be set at £2.60. The mark-up percent expresses profit as a percentage of total cost.
On some occasions, the selling price may be set by comparing the cost of production with the market price. For instance, if the cost of production is £2.00 per muffin and the market appears to sustain a selling price of £2.80, the selling price may be set at around £2.60. These figures can then be used to establish the mark-up percent. In this situation, the formula for the mark-up % is:
Mark-up % = [(Selling Price - Total Cost) ÷ Total Cost] x 100
Mark-up %= [(£2.60 - £2.00) ÷ £2.00] x 100
Mark-up % = £0.60 ÷ £2.00 x 100
Mark-up %= 30%
The idea of mark-up pricing should not be confused with profit margins and gross margins. The profit margin is the monetary difference between the selling price and total cost; the profit margin in the earlier illustration is therefore (£2.60 - £2.00) = £0.60 per unit. The gross margin is the percentage of the selling price accounted for by the profit margin: it is calculated as the profit margin divided by the selling price. The formula for the gross margin percentage is:
Gross Margin % = [(Selling Price - Total Cost) ÷ Selling Price] x 100
Gross Margin % = [(£2.60 - £2.00) ÷ £2.60] x 100
Gross Margin % = £0.60 ÷ £2.60 x 100
Gross Margin % = 23%
If a preferred level of gross margin is known, the gross margin formula can be rearranged to calculate the selling price. With a preferred gross margin percent, the formula for the selling price is:
Selling Price = Total Cost ÷ (1 - Gross Margin)
Selling Price = £2.00 ÷ (1 – 0.23)
Selling Price = £2.00 ÷ 0.77
Selling Price = £2.60
It is clear that the gross margin of 23% differs from the mark-up of 30%, even though both examples used a selling price of £2.60 and a total cost of £2.00. Mark-up and gross margin are both frequently used in calculating and estimating selling prices. Nevertheless, they should not be used interchangeably, for they are distinct and calculated in different ways.
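A minimal Python sketch of the arithmetic above, using the muffin figures from the worked example:

```python
# Mark-up and margin pricing, with the £2.00-cost muffin example.
def price_from_markup(cost, markup):
    return cost * (1 + markup)       # mark-up is applied to cost

def price_from_margin(cost, margin):
    return cost / (1 - margin)       # margin is taken out of the price

def markup_pct(price, cost):
    return (price - cost) / cost * 100

def gross_margin_pct(price, cost):
    return (price - cost) / price * 100

print(price_from_markup(2.00, 0.30))        # 2.60
print(markup_pct(2.60, 2.00))               # 30.0
print(round(gross_margin_pct(2.60, 2.00)))  # 23
```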
Besides mark-up and margin, evaluating liquidity and profitability can also determine the performance of a business. Liquidity ratios and profitability ratios use the components of classified financial statements to show how well a firm has performed in terms of maintaining liquidity and achieving profitability.

Liquidity

Liquidity means having enough money on hand to pay bills when they fall due and to take care of unexpected needs for cash.
One measure of liquidity is calculated by dividing net cash flows from operating activities by average total assets.
Other methods of measuring liquidity include:
Current Ratio = Current Assets ÷ Current Liabilities
Acid Test = (Current Assets – Inventories) ÷ Current Liabilities

Profitability

Profitability means the capability to earn adequate income.
As an objective, profitability competes with liquidity for managerial attention, because the most liquid assets are usually not the most profit-producing resources.
Evaluating the profitability of a company requires comparing its present performance with its past performance.
Some ratios for calculating profitability are:
Profit Margin = Net Income ÷ Net Sales
Asset Turnover = Net Sales ÷ Average Total Assets
Return on Assets = Net Income ÷ Average Total Assets
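A minimal Python sketch of these ratio formulas, with hypothetical figures purely for illustration:

```python
# Liquidity and profitability ratios from classified financial statements.
def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

def acid_test(current_assets, inventories, current_liabilities):
    return (current_assets - inventories) / current_liabilities

def profit_margin(net_income, net_sales):
    return net_income / net_sales

def asset_turnover(net_sales, avg_total_assets):
    return net_sales / avg_total_assets

def return_on_assets(net_income, avg_total_assets):
    # Equivalently, profit margin x asset turnover.
    return net_income / avg_total_assets

print(current_ratio(20000, 15000))      # ~1.33
print(return_on_assets(8000, 100000))   # 0.08
```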