A test of the effectiveness of the downside risk framework over the mean-variance framework in optimal portfolio selection: Evidence from the Nairobi Securities Exchange (NSE).
Variance is commonly used as the risk measure in portfolio optimization to find the trade-off between risk and return: investors wish to minimize risk at a given level of return. However, the mean-variance model has been criticized for its limitations. It relies strictly on the assumptions that asset returns are normally distributed and that investors have quadratic utility functions, and it becomes inadequate when these assumptions are violated. Moreover, variance penalizes not only downside deviation but also upside deviation; this does not match investors' perception of risk, since upside deviation is desirable. Downside risk measures have therefore been proposed to overcome these deficiencies of variance as a risk measure. They have better theoretical properties than variance because they are not restricted to normally distributed returns or quadratic utility functions, and they focus on returns below a specified target return, which better matches investors' perception of risk. This study seeks to test the effectiveness of the downside risk framework over the mean-variance framework in optimal portfolio selection at the Nairobi Securities Exchange. Overall, the study found that the choice of risk measure has a significant effect on portfolio allocation. From the analysis, Conditional Value-at-Risk (CVaR), as a downside risk measure, outperformed variance.
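The contrast between the two frameworks can be illustrated with a small sketch: a minimum-variance portfolio solved as a quadratic program, and a minimum-CVaR portfolio solved via the Rockafellar-Uryasev linear-programming formulation. This is a hypothetical illustration only; it uses synthetic returns rather than NSE data, and the asset count, confidence level, and long-only constraint are assumptions, not the study's actual setup.

```python
import numpy as np
from scipy.optimize import linprog, minimize

rng = np.random.default_rng(0)
T, n = 500, 4                               # 500 observations, 4 assets (illustrative)
returns = rng.normal(0.001, 0.02, (T, n))   # synthetic daily returns, NOT NSE data

# --- Minimum-variance portfolio (long-only, fully invested) ---
cov = np.cov(returns, rowvar=False)
res_mv = minimize(
    lambda w: w @ cov @ w,                  # portfolio variance w' Sigma w
    x0=np.full(n, 1 / n),
    bounds=[(0, 1)] * n,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
)
w_mv = res_mv.x

# --- Minimum-CVaR portfolio via the Rockafellar-Uryasev LP ---
# min  alpha + 1/((1-beta)T) * sum(u_t)
# s.t. u_t >= -r_t . w - alpha,  u_t >= 0,  sum(w) = 1,  w >= 0
beta = 0.95                                 # assumed confidence level
# decision vector: [w (n), alpha (1), u (T)]
c = np.concatenate([np.zeros(n), [1.0], np.full(T, 1 / ((1 - beta) * T))])
A_ub = np.hstack([-returns, -np.ones((T, 1)), -np.eye(T)])  # -r.w - alpha - u <= 0
b_ub = np.zeros(T)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(T)]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * T
res_cvar = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w_cvar = res_cvar.x[:n]

print("min-variance weights:", np.round(w_mv, 3))
print("min-CVaR weights:    ", np.round(w_cvar, 3))
```

Under exactly normal returns the two portfolios tend to be close, which is the point of the critique: the frameworks diverge precisely when returns are skewed or fat-tailed, as the study argues for the NSE.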