⚠️ Disclaimer: Any suggestion or information described in this post does not constitute investment advice. The content reflects only the personal point of view of the writer and should not be considered as a recommendation for any financial decision.
Risk is not the enemy. Ignorance of it is.
I have spent a good portion of my professional career inside risk management departments, and one thing that strikes me every single time is how often people outside the field (and even some inside it) treat risk management as a bureaucratic checkbox. A necessary evil. Something that slows things down. This post is an attempt to push back on that idea, as clearly as I can.
Risk management, when done well, is not about preventing you from taking positions. It is about understanding what you own, what it can do to your capital in bad conditions, and how much of that you can truly afford to absorb. That distinction matters enormously in portfolio management, and it matters even more in the specific corner of the field I care about most: market risk.
What is market risk?
Let us start from first principles. When you hold a financial instrument (a stock, a bond, a derivative, a currency position), its value changes over time in response to movements in the broader market. Those movements can be driven by interest rate changes, equity index swings, foreign exchange fluctuations, commodity price shifts, or volatility expansions. The potential loss arising from any of those adverse price movements is what practitioners call market risk.
It sits under the wider umbrella of financial risk, which also includes credit risk (the risk that a counterparty defaults), operational risk (internal processes failing), and liquidity risk (the risk of not being able to exit a position without moving the market against yourself). Each category deserves its own serious treatment, and I plan to get there. But market risk is where I feel most at home, so it gets to go first.
What makes market risk particularly interesting in portfolio management is that it is not just about individual asset losses. It is about how those assets behave together. A portfolio of 50 stocks is not simply 50 separate risks added up. Correlations, concentration, and tail dependencies mean the aggregate behaviour can be far more dangerous than the sum of its parts suggests on a calm trading day.
The cost of not having a framework
History is unfortunately generous with examples of what happens when market risk is poorly understood or deliberately ignored.
The collapse of Long-Term Capital Management (LTCM) in 1998 is probably the most cited academic case. A fund run by Nobel laureates and decorated traders, with leverage ratios that in calm markets looked like arbitrage genius, and in stressed markets looked like a loaded weapon pointed inward. When correlations between asset classes broke down during the Russian default crisis, positions that were modelled as independent blew up simultaneously. The Federal Reserve had to orchestrate a private bailout to prevent contagion across the broader financial system.
A decade later, the 2008 global financial crisis showed the same dynamic at systemic scale. Mortgage-backed securities were rated as low-risk instruments because historical default correlations were estimated in a period of stable and rising house prices. When that assumption collapsed, so did the models, and with them trillions of dollars in portfolio value across institutions that believed they had adequate risk controls in place.
These are not anomalies. They are what happens when a framework for measuring and limiting market risk is either absent, miscalibrated, or (and this is the most dangerous case) present on paper but not actually used to constrain decisions.
For an individual investor or a smaller portfolio manager, the consequences are less systemic but no less painful. A portfolio without risk controls can turn a temporary drawdown into a permanent loss of capital, particularly when leverage is involved. And in today’s environment, leverage is accessible to almost everyone.
The metrics that actually matter
Risk management is only as useful as the tools it uses to quantify exposure. Here are the ones you will encounter most frequently in a market risk context, along with an honest assessment of what each one does (and does not) tell you.
Volatility (σ)
The most fundamental measure. Volatility quantifies how much an asset’s returns deviate from their average over a given period. It is typically expressed as an annualised standard deviation. High volatility means large price swings in both directions. In portfolio terms, the overall portfolio volatility is not just a weighted average of individual asset volatilities because correlation between assets plays a central role.
It is a fast and intuitive metric, but volatility alone is symmetric: it treats upside surprises and downside losses equally. That is not how investors experience returns, which is why it is rarely used in isolation in professional practice.
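The difference between a weighted average of volatilities and true portfolio volatility is easy to see numerically. The sketch below uses hypothetical figures (two assets at 20% annual volatility each, correlation 0.3) and the standard $\sqrt{w^\top \Sigma w}$ formula:

```python
import numpy as np

def annualised_vol(returns, periods_per_year=252):
    """Annualised standard deviation of periodic returns."""
    return np.std(returns, ddof=1) * np.sqrt(periods_per_year)

def portfolio_vol(weights, cov):
    """Portfolio volatility: sqrt(w' Sigma w), not a weighted average of vols."""
    w = np.asarray(weights)
    return float(np.sqrt(w @ cov @ w))

# Hypothetical inputs: two assets, 20% annual vol each, correlation 0.3
vols = np.array([0.20, 0.20])
corr = np.array([[1.0, 0.3], [0.3, 1.0]])
cov = np.outer(vols, vols) * corr
w = np.array([0.5, 0.5])

print(portfolio_vol(w, cov))  # ~0.161, below the 0.20 of either asset alone
```

The gap between roughly 16.1% and 20% is the diversification benefit, and it shrinks towards zero as the correlation approaches 1.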
Historical Volatility vs. Implied Volatility
Historical volatility (HV) is computed from past price movements. It reflects what actually happened over a backward-looking window (30 days, 60 days, etc.). It is backward-looking and purely descriptive.
Implied volatility (IV) is extracted from the market prices of options themselves. It represents what the market expects volatility to be going forward. If an option is traded at a certain price, you can reverse-engineer the volatility assumption baked into that price using models like Black-Scholes. It is forward-looking and expectation-based.
Why derivatives (especially options) use implied volatility:
Options are fundamentally bets on future volatility. A call option becomes more valuable if market participants expect large price swings ahead, regardless of what actually happened historically. Historical volatility is stale information by the time an option is priced. Implied volatility reflects current market consensus about future uncertainty, incorporating all available information and sentiment.
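The reverse-engineering step is just one-dimensional root finding, because the Black-Scholes price is monotonically increasing in volatility. A self-contained sketch with hypothetical option parameters (here a simple bisection rather than a production-grade solver):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert Black-Scholes by bisection: find the sigma matching the price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price a call with 25% vol, then recover that vol
p = bs_call(100, 100, 0.5, 0.02, 0.25)
print(round(implied_vol(p, 100, 100, 0.5, 0.02), 4))  # 0.25
```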
Value at Risk (VaR)
VaR is the most widely used risk metric in financial institutions and is likely the one you will encounter most often in regulatory and internal reporting. The concept is deceptively simple: given a confidence level (typically 95% or 99%) and a time horizon (one day, ten days), VaR answers the question: what loss should I not expect to exceed in X% of scenarios?
For example, a one-day 99% VaR of €500,000 means that on 99 out of 100 trading days, losses should not exceed that amount. The remaining 1% of days is where things get interesting.
There are three main approaches to computing VaR:
- Historical simulation: Use the actual historical distribution of returns to estimate future losses. Simple to understand, no distributional assumption required, but highly sensitive to the historical window chosen.
- Parametric (variance-covariance) VaR: Assume returns follow a normal distribution and compute the quantile analytically. Fast and elegant, but dangerous when return distributions are fat-tailed (as market returns almost always are).
- Monte Carlo simulation: Generate thousands of synthetic return scenarios using a model and compute the loss distribution from those. Flexible and powerful, but computationally heavy and model-dependent.
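The three approaches can be compared directly on the same return series. The sketch below uses simulated (hypothetical) daily returns; because the Monte Carlo step here also fits a normal model, all three numbers come out close, whereas on real, fat-tailed data they would typically diverge:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, 2000)  # hypothetical daily returns

def historical_var(returns, alpha=0.99):
    """Historical simulation: loss at the empirical (1 - alpha) quantile."""
    return -np.quantile(returns, 1 - alpha)

def parametric_var(returns, alpha=0.99):
    """Variance-covariance: normal assumption, analytic quantile."""
    mu, sigma = np.mean(returns), np.std(returns, ddof=1)
    z = NormalDist().inv_cdf(alpha)
    return -(mu - z * sigma)

def monte_carlo_var(returns, alpha=0.99, n_sims=100_000, seed=1):
    """Monte Carlo: draw synthetic scenarios from a fitted model (normal here)."""
    mu, sigma = np.mean(returns), np.std(returns, ddof=1)
    sims = np.random.default_rng(seed).normal(mu, sigma, n_sims)
    return -np.quantile(sims, 1 - alpha)

for f in (historical_var, parametric_var, monte_carlo_var):
    print(f.__name__, round(f(returns), 4))
```

Multiplying any of these figures by the portfolio's market value turns the return quantile into a monetary one-day VaR.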
VaR is useful precisely because it gives a number. A single, communicable figure that executives and regulators can reason about. But it has a well-known technical blind spot: it says nothing about what losses look like beyond the threshold. Two portfolios can share the same VaR while having dramatically different tail behaviours.
Expected Shortfall (CVaR)
Expected Shortfall (also known as Conditional VaR, or CVaR) addresses exactly the blind spot of VaR. Instead of asking for the loss threshold that is not breached in 99% of scenarios, it asks: given that we are already in the worst 1% of scenarios, what is the average loss we should expect?
This makes it a coherent risk measure in the technical sense: it respects the mathematical properties that a well-behaved risk metric should have (including sub-additivity, which guarantees that combining portfolios never increases the measured risk, so diversification is never penalised). The Basel III and Basel IV regulatory frameworks progressively shifted from VaR to Expected Shortfall as the primary capital requirement metric for market risk, specifically because of this property.
For a portfolio manager, CVaR is the more honest metric. It forces you to think about what happens in genuinely bad markets, not just the edge of bad markets.
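A direct way to see the difference is to compute both metrics on the same fat-tailed sample. The sketch below uses hypothetical Student-t returns; by construction, Expected Shortfall at a given confidence level is always at least as large as VaR at that level:

```python
import numpy as np

def expected_shortfall(returns, alpha=0.99):
    """Mean loss conditional on being at or beyond the alpha loss quantile."""
    losses = -np.asarray(returns)
    threshold = np.quantile(losses, alpha)  # this threshold is the VaR
    return losses[losses >= threshold].mean()

rng = np.random.default_rng(7)
returns = rng.standard_t(df=4, size=5000) * 0.01  # hypothetical fat-tailed returns

var_99 = np.quantile(-returns, 0.99)
es_99 = expected_shortfall(returns, 0.99)
print(round(var_99, 4), round(es_99, 4))  # ES exceeds VaR: the tail average hurts more
```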
Beta (β)
Beta measures how much a portfolio (or a single asset) moves relative to a benchmark, typically the overall market. A beta of 1 means the portfolio moves in lockstep with the market. A beta of 1.5 means it amplifies market movements by 50% in both directions. A beta of 0.5 implies half the market sensitivity.
Beta is a central concept in the Capital Asset Pricing Model (CAPM) and in factor-based investing more broadly. In risk management, it helps isolate the portion of portfolio risk that is systematic (driven by broad market exposure, which cannot be diversified away) versus idiosyncratic (specific to individual holdings, which can be reduced through diversification).
Managing beta deliberately is one of the most direct levers a portfolio manager has to control market risk exposure.
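Beta is simply the slope of a regression of asset returns on market returns, i.e. Cov(asset, market) / Var(market). A sketch with simulated data, where the hypothetical asset is constructed to have a true beta of 1.5:

```python
import numpy as np

def beta(asset_returns, market_returns):
    """OLS slope: Cov(asset, market) / Var(market)."""
    c = np.cov(asset_returns, market_returns, ddof=1)
    return c[0, 1] / c[1, 1]

rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.01, 1000)                 # hypothetical market returns
asset = 1.5 * market + rng.normal(0.0, 0.002, 1000)  # true beta 1.5 plus noise

print(beta(asset, market))  # close to the true 1.5
```

The idiosyncratic noise term is exactly the component that diversification can wash out; the 1.5 systematic loading is what remains no matter how many such assets you hold.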
Maximum Drawdown (MDD)
Maximum drawdown measures the largest peak-to-trough decline in a portfolio’s value over a given period. It is the closest market risk metric to what investors actually feel in their stomach during a crisis. A portfolio that has experienced a 40% drawdown needs a subsequent return of approximately 67% just to get back to the prior peak.
For strategies that may carry systematic biases (trend-following, momentum, leveraged strategies), maximum drawdown is an essential metric to combine with VaR and CVaR. It grounds the conversation in real trajectory rather than distributional abstractions.
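Maximum drawdown falls out of a running peak. The sketch below (on a hypothetical value series) also shows the recovery arithmetic: after a drawdown of d, the gain needed to reclaim the prior peak is 1/(1-d) - 1, which is why a 40% drawdown needs roughly 67% to recover:

```python
import numpy as np

def max_drawdown(values):
    """Largest peak-to-trough decline, returned as a negative fraction."""
    v = np.asarray(values, dtype=float)
    peaks = np.maximum.accumulate(v)   # running historical peak
    drawdowns = (v - peaks) / peaks
    return float(drawdowns.min())

series = [100, 120, 90, 95, 130, 80, 140]  # hypothetical portfolio values
mdd = max_drawdown(series)                 # peak 130 to trough 80: -50/130
recovery = 1 / (1 + mdd) - 1               # gain needed to reclaim the peak
print(mdd, recovery)

# Sanity check of the claim in the text: a 40% drawdown needs ~67% to recover
print(1 / (1 - 0.40) - 1)  # ~0.6667
```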
The Sharpe Ratio (and its risk-adjusted cousins)
Strictly speaking, the Sharpe ratio is a performance metric, but in portfolio management, it lives comfortably inside the risk management conversation. It measures the excess return earned per unit of volatility:
\[\text{Sharpe} = \frac{R_p - R_f}{\sigma_p}\]
where $R_p$ is the portfolio return, $R_f$ is the risk-free rate, and $\sigma_p$ is the portfolio’s standard deviation of returns. A higher Sharpe ratio means you are being better compensated for the risk you are taking.
Its cousins include the Sortino ratio (which uses downside deviation instead of total standard deviation, penalising only negative volatility) and the Calmar ratio (which uses maximum drawdown in the denominator). Each tells a slightly different story, and together they give a richer picture of risk-adjusted performance than any single number alone.
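All three ratios can be computed from a daily return series in a few lines. The sketch below uses one common set of conventions (annualisation by √252, downside deviation taken over all observations); conventions vary across data providers, so always compare like with like:

```python
import numpy as np

TRADING_DAYS = 252

def sharpe(returns, rf_annual=0.0):
    """Annualised excess return per unit of total volatility."""
    ex = np.asarray(returns) - rf_annual / TRADING_DAYS
    return ex.mean() / ex.std(ddof=1) * np.sqrt(TRADING_DAYS)

def sortino(returns, rf_annual=0.0):
    """Like Sharpe, but penalises only downside deviation."""
    ex = np.asarray(returns) - rf_annual / TRADING_DAYS
    downside = np.sqrt(np.mean(np.minimum(ex, 0.0) ** 2))
    return ex.mean() / downside * np.sqrt(TRADING_DAYS)

def calmar(returns):
    """Annualised return divided by the absolute maximum drawdown."""
    v = np.cumprod(1 + np.asarray(returns))
    peaks = np.maximum.accumulate(v)
    mdd = ((v - peaks) / peaks).min()
    annual_return = v[-1] ** (TRADING_DAYS / len(v)) - 1
    return annual_return / abs(mdd)

# Hypothetical daily returns: small positive drift, 1% daily volatility
rng = np.random.default_rng(3)
r = rng.normal(0.001, 0.01, 2000)
print(sharpe(r), sortino(r), calmar(r))
```

For a strategy with mostly upside volatility, the Sortino ratio will sit above the Sharpe ratio, which is exactly the asymmetry the previous paragraph describes.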
A word on liquidity risk
No discussion of market risk in portfolio management is complete without at least acknowledging liquidity risk, because the two are far more entangled than they appear in textbooks.
Liquidity risk refers to the risk of being unable to exit a position at, or near, the prevailing market price without causing (or absorbing) significant price impact. In quiet markets, it is easy to underestimate. In stressed markets, it becomes the dominant risk.
Two key dimensions define it in practice:
Market liquidity risk is the risk that an asset’s bid-ask spread widens dramatically, or that market depth (the available volume of buy and sell orders at various price levels) evaporates when you actually need to transact. A position that looks perfectly liquid on a normal trading day can become nearly untradeable in a crisis, precisely because every other market participant is trying to do the same thing at the same time.
Funding liquidity risk is the risk that you are unable to meet your obligations (margin calls, redemption requests, collateral requirements) even though your assets have intrinsic value. This is what ultimately killed LTCM: not that its positions were wrong, but that it could not survive long enough for the market to recognise that.
The standard tool for integrating market and liquidity risk is Liquidity-Adjusted VaR (LVaR), which modifies the standard VaR calculation to account for the additional loss that could arise from being forced to unwind a position over a stressed timeframe rather than instantaneously. It is a more conservative and more honest number, and unsurprisingly, it is the one regulators increasingly prefer.
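One common textbook formulation of the adjustment (the exogenous-spread approach of Bangia et al.) simply adds the expected cost of crossing a stressed bid-ask spread to the standard VaR figure. A sketch, with all inputs hypothetical:

```python
def liquidity_adjusted_var(var, position_value, spread, spread_vol=0.0, k=3.0):
    """LVaR sketch: VaR plus an exogenous liquidation cost.

    The cost is half the bid-ask spread (as a fraction of price), stressed by
    k standard deviations of the spread, applied to the position's value.
    """
    liquidation_cost = 0.5 * (spread + k * spread_vol) * position_value
    return var + liquidation_cost

# Hypothetical: EUR 500k one-day VaR on a EUR 10m position, 20bp average spread
print(liquidity_adjusted_var(500_000, 10_000_000, spread=0.002))  # VaR + EUR 10k
```

Widening the spread or adding spread volatility only ever pushes the number up, which is the point: the adjusted figure prices in the cost of exiting when everyone else is exiting too.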
Why this matters beyond the trading floor
It is tempting to think that all of this is relevant only to institutional asset managers, investment banks, or hedge funds with dedicated risk teams. But the logic applies at any scale.
If you manage a personal investment portfolio, understanding how correlated your positions are, what your implicit beta to the equity market is, and whether your liquidity assumptions hold under stress is not a professional luxury. It is the basic intellectual hygiene that separates a portfolio that weathers a crisis from one that forces painful decisions at the worst possible moment.
The frameworks described here do not require a Bloomberg terminal or a team of quants to use meaningfully. Many can be approximated with open-source tools, historical data, and a clear head. Future posts in this series will go deeper into the implementation side, including concrete examples in Python and references to the FRM curriculum for those who want a rigorous foundation.
Hope this was a useful starting point. If you have questions or want to explore any of these topics further, do not hesitate to reach out via LinkedIn. I am always happy to discuss.