We had the orientation for new students at the Columbia FE program yesterday, and I listened to several of our past students talk about how useful they found the optimization course we teach them in their professional lives.

I understand the value of optimization in the world of known parameters, but I have to say that until now I’ve always been a bit confused about the use of optimization in finance, so listening to these students who were practitioners made me think again.

I used to think as follows: when you want to find the shortest path between the Upper West Side of Manhattan and Tuscaloosa, Vermont, constraining yourself to pass no fewer than six service stations, then optimization makes sense, because the number of service stations and the lengths of the roads are not going to change significantly between the time that you optimize and the time you drive.

Similarly, if you want to put together a portfolio of bonds whose coupons will come closest to paying your known future obligations at definite times, then that makes sense too, because coupons and their dates are known and so are your obligations.
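This kind of cash-flow matching is a plain linear program. Here is a minimal sketch with made-up numbers (two hypothetical bonds, two payment dates — none of these figures come from the post), using `scipy.optimize.linprog`: minimize the cost of the portfolio subject to its coupons and redemptions covering each known obligation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical, purely illustrative numbers.
prices = np.array([95.0, 98.0])           # cost per unit of each bond
cashflows = np.array([[100.0,   0.0],     # bond A: one-year zero
                      [  5.0, 105.0]]).T  # bond B: 5% coupon, two-year
# cashflows[t, i] = cash paid by bond i at date t
liabilities = np.array([50.0, 100.0])     # known obligations at t=1, t=2

# Minimize portfolio cost subject to cashflows >= liabilities at every date.
# linprog takes "<=" constraints, so negate both sides.
res = linprog(c=prices, A_ub=-cashflows, b_ub=-liabilities, bounds=(0, None))

print("holdings:", res.x)   # units of each bond to buy
print("cost:    ", res.fun) # total price of the dedicated portfolio
```

Because every coupon, date, and obligation is a known input, nothing in this problem shifts between solving it and living with the answer — which is exactly the point of the paragraph above.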

But what gives me difficulty is the idea of finding the portfolio of stocks that has the best expected return for a given variance, or the portfolio of hedge funds with maximum return subject to minimum variance and zero beta. These are EXPECTED returns and EXPECTED variances and EXPECTED betas, and optimizing over expected scenarios, which you know pretty damn well are going to be wrong, seems to me a qualitatively different matter from optimizing over known scenarios. Averaging over scenarios that are all a little wrong seems like it would make sense, since the errors might cancel. But picking the optimal scenario when you know it’s a little wrong seems less safe.
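The fragility being worried about here is easy to demonstrate. Below is a toy sketch (all numbers invented for the illustration) of the classic unconstrained mean-variance solution, where the weights are proportional to the inverse covariance matrix applied to the expected returns. Nudging one expected return by twenty basis points — well inside any realistic estimation error — moves the "optimal" weights by tens of percentage points.

```python
import numpy as np

def mv_weights(mu, cov):
    """Unconstrained mean-variance weights: w proportional to cov^-1 mu,
    rescaled to sum to one. A textbook formula, not a recommendation."""
    raw = np.linalg.solve(cov, mu)
    return raw / raw.sum()

# Three hypothetical assets with similar expected returns and high correlation,
# the regime in which mean-variance answers are most unstable.
mu = np.array([0.050, 0.052, 0.051])
vol = np.array([0.20, 0.20, 0.20])
corr = np.array([[1.0, 0.9, 0.9],
                 [0.9, 1.0, 0.9],
                 [0.9, 0.9, 1.0]])
cov = np.outer(vol, vol) * corr

w = mv_weights(mu, cov)

# Bump one expected return by 20 basis points and re-optimize.
mu_bumped = mu + np.array([0.002, 0.0, 0.0])
w_bumped = mv_weights(mu_bumped, cov)

print("weights:       ", np.round(w, 3))
print("bumped weights:", np.round(w_bumped, 3))
print("max weight shift:", round(np.abs(w_bumped - w).max(), 3))
```

With these particular made-up inputs the largest weight moves by roughly a quarter of the portfolio, even though the input that changed moved by a fraction of its own estimation uncertainty. Optimizers chase small differences in estimated inputs, which is one concrete way of stating the worry above.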

I suspect I’m wrong about this (and I certainly don’t understand it as well as I’d like to), so I’d like to understand why I’m wrong a little better.