**SHARPE, TREYNOR AND JENSEN'S RATIOS**

**SHARPE RATIO**

This ratio measures the return earned in excess of the risk-free rate (normally Treasury instruments) on a portfolio, relative to the portfolio's total risk as measured by the standard deviation of its returns over the measurement period. In other words: how much better did you do for the risk assumed?

S = (Return of Portfolio - Return of Risk-Free Investment) / Standard Deviation of Portfolio

Example: Let's assume that over a one-year period an index fund returned 11%, Treasury bills earned 6%, and the standard deviation of the index fund's returns was 20%.

Therefore S = (.11 - .06) / .20 = 0.25
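The Sharpe calculation above can be sketched in Python (the helper name is mine for illustration, not from any library):

```python
def sharpe_ratio(portfolio_return, risk_free_rate, std_dev):
    """Excess return per unit of total risk (standard deviation)."""
    return (portfolio_return - risk_free_rate) / std_dev

# Index fund returned 11%, T-bills 6%, standard deviation 20%
s = sharpe_ratio(0.11, 0.06, 0.20)
print(round(s, 2))  # 0.25
```

Note that both the returns and the standard deviation must be in the same units (here, decimals) or the ratio will be off by a factor of 100.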

The Sharpe ratio is an appropriate measure of performance for an overall portfolio, particularly when it is compared to another portfolio or to an index such as the S&P 500, a small-cap index, etc.

That said, it is not often provided by most rating services.

**TREYNOR RATIO**

This ratio is similar to the Sharpe ratio except that it uses beta instead of standard deviation. Also known as the reward-to-volatility ratio, it is the ratio of a fund's average excess return to the fund's beta. It measures the return earned in excess of what could have been earned on a riskless investment, per unit of market risk assumed.

T = (Return of Portfolio - Return of Risk-Free Investment) / Beta of Portfolio

The absolute risk-adjusted return is the Treynor ratio plus the risk-free rate.

Assume two portfolios:

|        | A   | B   |
|--------|-----|-----|
| Return | 12% | 14% |
| Beta   | .7  | 1.2 |

Risk-free rate = 9%

Ta = (.12 - .09) / .7 = .043

Risk-adjusted rate of return of Portfolio A = .043 + .09 = .133 = 13.3%

Tb = (.14 - .09) / 1.2 = .042

Risk-adjusted rate of return of Portfolio B = .042 + .09 = .132 = 13.2%
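The Treynor figures for the two portfolios can be checked with a short Python sketch (the function name is mine, chosen for illustration):

```python
def treynor_ratio(portfolio_return, risk_free_rate, beta):
    """Excess return per unit of market risk (beta)."""
    return (portfolio_return - risk_free_rate) / beta

RISK_FREE = 0.09

t_a = treynor_ratio(0.12, RISK_FREE, 0.7)   # Portfolio A
t_b = treynor_ratio(0.14, RISK_FREE, 1.2)   # Portfolio B

# Absolute risk-adjusted return = Treynor ratio + risk-free rate
print(round(t_a + RISK_FREE, 3))  # 0.133 -> 13.3%
print(round(t_b + RISK_FREE, 3))  # 0.132 -> 13.2%
```

Despite B's higher raw return (14% vs. 12%), A comes out slightly ahead once the extra market risk is accounted for.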

For many investors, without any analysis of risk, if you ask which is the better number (12% or 14%), almost universally they say 14%. (I did this with about 1,000 HP employees who held considerable sums of mutual funds in 401(k) plans.) However, when you point out the risk-adjusted rate of return, many adjust their thinking.

The example I used was for roughly 1990-1993, when Fidelity Magellan had earned about 18% and many bond funds had earned 13%. Which is better? In absolute numbers, 18% beats 13%. But if I then state that the bond funds had about **half** the market risk, now which is better? You don't even need the formula for that analysis. Yet that analysis is missing in almost all reviews by all brokers. For clarification: I do not suggest they put all the money into either one, just that they need to be aware of the implications. It is information, not advice per se. But if you give really good information, the advice is implied.

**JENSEN'S ALPHA**

This is the difference between a fund's actual return and the return that could have been earned on a benchmark portfolio with the same market risk, i.e., the same beta. It measures the ability of active management to increase returns above those that are purely a reward for bearing market risk. Caveats apply, however, since it produces meaningful results only when used to compare two portfolios with similar betas.

Assume two portfolios:

|        | A   | B   | Market |
|--------|-----|-----|--------|
| Return | 12% | 14% | 12%    |
| Beta   | .7  | 1.2 | 1.0    |

Risk-free rate = 9%

Expected return = Risk-Free Return + Beta of Portfolio x (Return of Market - Risk-Free Return)

Using Portfolio A, the expected return = .09 + .7 (.12 - .09) = .09 + .021 = .111

Alpha = Return of Portfolio - Expected Return = .12 - .111 = .009 = 0.9%
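The same CAPM-style expected return and alpha can be sketched in Python (the helper name is mine, for illustration only):

```python
def jensen_alpha(portfolio_return, risk_free_rate, beta, market_return):
    """Actual return minus the return expected for the portfolio's beta."""
    expected = risk_free_rate + beta * (market_return - risk_free_rate)
    return portfolio_return - expected

# Portfolio A: 12% return, beta .7; market 12%; risk-free rate 9%
alpha_a = jensen_alpha(0.12, 0.09, 0.7, 0.12)
print(round(alpha_a, 3))  # 0.009 -> 0.9%
```

A positive alpha means the fund beat the return its beta alone would predict; a negative alpha means it lagged it.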

As long as apples are compared to apples (in other words, computer sector fund A to computer sector fund B), I think it is a viable number. But taken out of context, it loses meaning. Alphas are found in many rating services but are not always computed the same way, so you cannot compare an alpha from one service to another. However, I have usually found a fund's relative position within a particular rating service to be viable. Short-term alphas are not valid; the minimum time frame is one year, and three years is preferable.