Topic 6: Simulation Methods
Learning Objectives
After completing this topic, you should be able to:
- Explain the relationship between normal and lognormal distributions and why the lognormal distribution is used to model asset prices when using continuously compounded asset returns
- Describe Monte Carlo simulation and explain how it can be used in investment applications
- Describe the use of bootstrap resampling in conducting a simulation based on observed data in investment applications
Learning Objective 1: Normal vs Lognormal Distributions
Core Concept: Statistical Distributions and Asset Modeling
The relationship between normal and lognormal distributions is fundamental to financial modeling, particularly in understanding how asset prices and returns behave differently. This topic connects the distributional properties we studied earlier to practical modeling choices. exam-focus
Normal Distribution Properties
- Symmetrical: Bell-shaped curve with equal probability on both sides of the mean
- Range: Unbounded (-∞ to +∞)
- Applications: Models asset returns, especially continuously compounded returns
- Parameters: Mean (μ) and standard deviation (σ)
Lognormal Distribution Properties
- Asymmetrical: Right-skewed distribution
- Range: Bounded below by zero (0 to +∞)
- Applications: Models asset prices (which cannot be negative)
- Relationship: If X ~ N(μ, σ²), then e^X follows a lognormal distribution with parameters μ and σ²
Why Lognormal for Asset Prices? exam-focus
Economic Logic:
- Non-negativity: Asset prices cannot be negative
- Percentage changes: Price changes are better modeled as percentage rather than absolute changes
- Compounding: Continuous compounding naturally leads to lognormal distributions
Mathematical Foundation: If continuously compounded returns are normally distributed:
- Let r_t = ln(S_t/S_{t-1}) ~ N(μ, σ²)
- Then S_t/S_{t-1} = e^{r_t} ~ Lognormal
- Therefore, prices follow a lognormal distribution
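This chain can be checked numerically. The sketch below (illustrative parameters; assumes NumPy is available) draws normally distributed log-returns and confirms that the exponentiated values match the lognormal mean and median formulas:

```python
import numpy as np

# If log-returns r ~ N(mu, sigma^2), gross returns e^r are lognormal.
rng = np.random.default_rng(42)
mu, sigma = 0.05, 0.20                      # illustrative return parameters
r = rng.normal(mu, sigma, size=1_000_000)   # normal log-returns
gross = np.exp(r)                           # lognormal gross returns S_t / S_{t-1}

# Sample statistics should match the lognormal formulas
print(gross.mean(), np.exp(mu + sigma**2 / 2))   # mean: e^{mu + sigma^2/2}
print(np.median(gross), np.exp(mu))              # median: e^{mu}
print(gross.min() > 0)                           # lognormal values are strictly positive
```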
Formulas & Calculations
Transformation Between Normal and Lognormal
From Normal Returns to Lognormal Prices:
If r_t ~ N(μ, σ²), then:
S_t = S_0 × e^{(μ - σ²/2)t + σ√t × Z}
Where:
- S_t = price at time t
- S_0 = initial price
- μ = expected return
- σ = volatility
- Z ~ N(0,1) = standard normal random variable
- t = time period
Lognormal Distribution Parameters:
If S is lognormal with underlying normal parameters μ and σ² (that is, ln(S) ~ N(μ, σ²)), then:
- E[S] = e^{μ + σ²/2}
- Var[S] = e^{2μ + σ²}(e^{σ²} - 1)
- Median[S] = e^μ
Key Relationships:
ln(S_t) ~ N(ln(S_0) + (μ - σ²/2)t, σ²t)
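A quick simulation check of this relationship, with illustrative parameters (S₀ = 100, μ = 12%, σ = 20%, t = 2 years; NumPy assumed available):

```python
import numpy as np

# Simulate S_t and check that ln(S_t) has mean ln(S0) + (mu - sigma^2/2) t
# and variance sigma^2 t.
rng = np.random.default_rng(0)
S0, mu, sigma, t = 100.0, 0.12, 0.20, 2.0
Z = rng.standard_normal(1_000_000)
St = S0 * np.exp((mu - sigma**2 / 2) * t + sigma * np.sqrt(t) * Z)

log_St = np.log(St)
print(log_St.mean(), np.log(S0) + (mu - sigma**2 / 2) * t)  # both ~ 4.805
print(log_St.var(), sigma**2 * t)                           # both ~ 0.08
```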
Practical Examples
Example 1: Stock Price Evolution
Given:
- Initial stock price: $100
- Expected annual return: 12%
- Annual volatility: 20%
- Time horizon: 1 year
Calculation:
S_1 = 100 × e^{(0.12 - 0.20²/2)×1 + 0.20×√1×Z}
S_1 = 100 × e^{0.10 + 0.20×Z}
Where Z is a standard normal random variable
Price Distribution:
- Mean price: E[S_1] = 100 × e^{0.10 + 0.20²/2} = 100 × e^{0.12} = $112.75 (the σ²/2 term adds back onto the 0.10 drift in the exponent)
- The distribution is right-skewed with median < mean
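These figures can be reproduced with the standard library alone; the last line is an extra illustration (not in the example above) showing the probability that the price ends the year above $100:

```python
import math
from statistics import NormalDist

S0, mu, sigma, t = 100.0, 0.12, 0.20, 1.0
drift = (mu - sigma**2 / 2) * t                        # 0.10 in the exponent

mean_price = S0 * math.exp(drift + sigma**2 * t / 2)   # = 100 * e^0.12
median_price = S0 * math.exp(drift)                    # = 100 * e^0.10
prob_up = 1 - NormalDist().cdf(-drift / (sigma * math.sqrt(t)))

print(round(mean_price, 2))    # 112.75
print(round(median_price, 2))  # 110.52 -> median < mean, i.e. right-skewed
print(round(prob_up, 4))       # 0.6915
```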
Example 2: Option Pricing Foundation
The Black-Scholes model assumes:
S_T = S_0 × e^{(r - σ²/2)T + σ√T×Z}
This lognormal assumption enables:
- Analytical option pricing formulas
- Risk-neutral valuation
- Delta hedging strategies
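For reference, a minimal sketch of the analytical Black-Scholes call price that this lognormal assumption makes possible (standard library only; the contract parameters are illustrative):

```python
import math
from statistics import NormalDist

def black_scholes_call(S0, K, r, sigma, T):
    """European call price under the lognormal (Black-Scholes) assumption."""
    d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

# Illustrative contract: S0 = 100, K = 105, r = 5%, sigma = 20%, T = 0.25 years
print(round(black_scholes_call(100, 105, 0.05, 0.20, 0.25), 2))  # 2.48
```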
DeFi Application: Token Price Modeling
Impermanent Loss Simulations
Simulating token price paths using the lognormal model is the foundation for estimating impermanent loss distributions for Uniswap and other AMM liquidity providers. By running thousands of price paths, LPs can understand the full range of potential outcomes, not just the expected case. defi-application
Context: Liquidity providers in automated market makers (AMMs) face impermanent loss when token prices diverge.
Model Setup:
# Impermanent loss simulation (pseudo-code made runnable with NumPy)
import numpy as np

def simulate_impermanent_loss(initial_price_A, initial_price_B, mu_A, mu_B,
                              sigma_A, sigma_B, correlation, time_horizon,
                              n_simulations):
    results = []
    for _ in range(n_simulations):
        # Generate correlated standard normal draws
        Z1 = np.random.normal(0, 1)
        Z2 = correlation * Z1 + np.sqrt(1 - correlation**2) * np.random.normal(0, 1)
        # Lognormal price evolution for each token
        price_A_final = initial_price_A * np.exp(
            (mu_A - sigma_A**2 / 2) * time_horizon
            + sigma_A * np.sqrt(time_horizon) * Z1)
        price_B_final = initial_price_B * np.exp(
            (mu_B - sigma_B**2 / 2) * time_horizon
            + sigma_B * np.sqrt(time_horizon) * Z2)
        # Impermanent loss for a constant-product pool, in terms of the
        # change k in the A/B price ratio
        k = (price_A_final / price_B_final) / (initial_price_A / initial_price_B)
        il = 2 * np.sqrt(k) / (1 + k) - 1
        results.append(il)
    return results
Application: This simulation helps LPs understand potential losses under different market scenarios and correlation assumptions.
Learning Objective 2: Monte Carlo Simulation
Core Concept: Probabilistic Modeling Through Simulation
Monte Carlo simulation is a computational technique that uses random sampling to solve problems that might be deterministic in principle but too complex for analytical solutions. It extends the probability framework from discrete scenarios to continuous distributions with thousands of paths.
Key Principles
1. Random Sampling: Generate random variables according to specified probability distributions
2. Repetition: Perform many iterations to build statistical confidence
3. Aggregation: Combine results to estimate the quantity of interest
4. Convergence: As the number of simulations increases, estimates converge to true values
Monte Carlo Process Framework
Step 1: Problem Definition
- Define the quantity to be estimated (e.g., option value, portfolio VaR)
- Identify key random variables and their distributions
- Specify the mathematical relationship between inputs and outputs
Step 2: Model Specification
- Choose appropriate probability distributions
- Define correlation structures between variables
- Set boundary conditions and constraints
Step 3: Simulation Execution
- Generate random samples from specified distributions
- Calculate the outcome for each simulation path
- Store results for statistical analysis
Step 4: Result Analysis
- Calculate summary statistics (mean, standard deviation, percentiles)
- Assess convergence and simulation error
- Perform sensitivity analysis
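The four steps can be seen end to end on a toy problem, estimating π from uniform random points (a sketch, not a finance application; sample size is illustrative):

```python
import random

# Step 1: quantity of interest is pi = 4 * P(U1^2 + U2^2 <= 1), U ~ Uniform(0,1)
# Step 2: inputs are two independent U(0,1) draws per trial
# Step 3: run many trials; Step 4: aggregate and report
random.seed(1)
n = 1_000_000
hits = sum(1 for _ in range(n)
           if random.random()**2 + random.random()**2 <= 1.0)
pi_hat = 4 * hits / n
print(pi_hat)   # close to 3.1416; error shrinks like 1/sqrt(n)
```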
Formulas & Calculations
Basic Monte Carlo Estimator formula
θ̂_MC = (1/n) × Σ(i=1 to n) f(X_i)
Where:
- θ̂_MC = Monte Carlo estimate
- n = number of simulations
- f(X_i) = function value for simulation i
- X_i = random input vector for simulation i
Standard Error of Monte Carlo Estimate formula
SE(θ̂_MC) = σ̂/√n
Where:
- σ̂ = sample standard deviation of f(X_i)
- n = number of simulations
Confidence Intervals
95% CI: θ̂_MC ± 1.96 × SE(θ̂_MC)
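A minimal sketch putting the estimator, standard error, and confidence interval together, using the toy target E[e^Z] with Z ~ N(0,1), whose true value e^{1/2} ≈ 1.6487 is known analytically (standard library only):

```python
import math
import random
from statistics import mean, stdev

# Target: theta = E[e^Z], Z ~ N(0,1); true value is e^(1/2) ~ 1.6487
random.seed(0)
n = 100_000
samples = [math.exp(random.gauss(0, 1)) for _ in range(n)]

theta_hat = mean(samples)                 # Monte Carlo estimator
se = stdev(samples) / math.sqrt(n)        # standard error, sigma_hat / sqrt(n)
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)
print(theta_hat, se, ci)
```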
Variance Reduction Techniques
Antithetic Variates:
For each random variable Z, also use -Z
This reduces variance when f is monotonic
Control Variates:
Use a related quantity whose true value θ_control_true is known analytically
θ̂_CV = θ̂_MC + c(θ_control_true - θ_control_MC)
where c is chosen to minimize the variance of the combined estimator
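The antithetic-variates effect can be demonstrated on the toy target E[e^Z], which is monotonic in Z (a sketch; sample sizes are illustrative):

```python
import math
import random
from statistics import variance

# Compare plain pair-averaging with antithetic pairing for E[e^Z].
random.seed(0)
n_pairs = 50_000
plain, anti = [], []
for _ in range(n_pairs):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    plain.append((math.exp(z1) + math.exp(z2)) / 2)   # two independent draws
    z = random.gauss(0, 1)
    anti.append((math.exp(z) + math.exp(-z)) / 2)     # one draw and its mirror -z

# e^Z is monotonic in Z, so the antithetic estimator has lower variance
print(variance(plain) > variance(anti))  # True
```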
Practical Examples
Example 1: European Option Valuation
Setup:
- Stock price: $100
- Strike price: $105
- Risk-free rate: 5%
- Volatility: 20%
- Time to expiration: 0.25 years
Simulation Process:
import numpy as np

def monte_carlo_option_pricing(S0, K, r, sigma, T, n_simulations):
    payoffs = []
    for _ in range(n_simulations):
        # Generate random stock price at expiration (risk-neutral lognormal)
        Z = np.random.normal(0, 1)
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
        # Calculate option payoff
        payoffs.append(max(ST - K, 0))
    # Discount to present value; the standard error is discounted as well
    discount = np.exp(-r * T)
    option_value = discount * np.mean(payoffs)
    standard_error = discount * np.std(payoffs, ddof=1) / np.sqrt(n_simulations)
    return option_value, standard_error
Example 2: Portfolio Value-at-Risk (VaR)
Monte Carlo VaR extends the portfolio mathematics framework by simulating the full distribution of outcomes rather than relying on the normal approximation.
Multi-asset Portfolio VaR:
import numpy as np

def portfolio_var_simulation(weights, expected_returns, covariance_matrix,
                             initial_value, confidence_level, time_horizon,
                             n_simulations):
    weights = np.asarray(weights)
    portfolio_values = []
    for _ in range(n_simulations):
        # Generate correlated asset returns over the horizon
        random_returns = np.random.multivariate_normal(
            np.asarray(expected_returns) * time_horizon,
            np.asarray(covariance_matrix) * time_horizon)
        # Portfolio return and resulting portfolio value
        portfolio_return = weights @ random_returns
        portfolio_values.append(initial_value * (1 + portfolio_return))
    # VaR is the loss at the chosen tail percentile
    portfolio_values.sort()
    var_index = int((1 - confidence_level) * n_simulations)
    return initial_value - portfolio_values[var_index]
Example 3: Path-Dependent Derivatives
Asian Option Pricing:
import numpy as np

def asian_option_monte_carlo(S0, K, r, sigma, T, n_steps, n_simulations):
    dt = T / n_steps
    payoffs = []
    for _ in range(n_simulations):
        path_sum = 0.0
        S = S0
        # Generate the price path step by step
        for _ in range(n_steps):
            Z = np.random.normal(0, 1)
            S = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z)
            path_sum += S
        # Payoff depends on the average price along the path
        average_price = path_sum / n_steps
        payoffs.append(max(average_price - K, 0))
    return np.exp(-r * T) * np.mean(payoffs)
DeFi Application: Protocol Stress Testing
Automated Market Maker (AMM) Simulations
Liquidity Pool Dynamics:
def simulate_amm_performance(initial_reserves_A, initial_reserves_B, fee_rate, price_process_params, trading_intensity, time_horizon, n_simulations):
results = []
for sim in range(n_simulations):
reserves_A = initial_reserves_A
reserves_B = initial_reserves_B
total_fees = 0
# Simulate price movements and trading
for t in range(time_horizon):
# Generate price movement
price_change = generate_price_movement(price_process_params)
# Simulate arbitrage trading
if abs(price_change) > 0:
trade_size = calculate_arbitrage_trade(reserves_A, reserves_B, price_change, trading_intensity)
if trade_size > 0:
# Execute trade and collect fees
new_reserves_A, new_reserves_B, fee_collected = execute_trade(reserves_A, reserves_B, trade_size, fee_rate)
reserves_A, reserves_B = new_reserves_A, new_reserves_B
total_fees += fee_collected
# Calculate final performance metrics
impermanent_loss = calculate_impermanent_loss(initial_reserves_A, initial_reserves_B, reserves_A, reserves_B)
net_return = total_fees - impermanent_loss
results.append({
'impermanent_loss': impermanent_loss,
'fees_earned': total_fees,
'net_return': net_return
})
return results
Gas Price Forecasting
def simulate_gas_prices(base_fee_params, priority_fee_params, network_congestion_model, time_steps, n_simulations):
gas_price_paths = []
for sim in range(n_simulations):
base_fees = []
priority_fees = []
for t in range(time_steps):
# Simulate network congestion
congestion_level = generate_congestion(network_congestion_model, t)
# Model base fee dynamics (EIP-1559)
if t == 0:
base_fee = base_fee_params['initial']
else:
utilization = congestion_level
if utilization > 0.5:
base_fee = base_fees[-1] * (1 + base_fee_params['increase_factor'] * (utilization - 0.5) * 2)
else:
base_fee = base_fees[-1] * (1 - base_fee_params['decrease_factor'] * (0.5 - utilization) * 2)
# Model priority fee
priority_fee = lognormal(priority_fee_params['mu'], priority_fee_params['sigma'])
base_fees.append(base_fee)
priority_fees.append(priority_fee)
gas_price_paths.append({
'base_fees': base_fees,
'priority_fees': priority_fees,
'total_fees': [b + p for b, p in zip(base_fees, priority_fees)]
})
return gas_price_paths
Learning Objective 3: Bootstrap Resampling
Core Concept: Non-parametric Statistical Inference
Bootstrap resampling is a non-parametric statistical method that estimates the sampling distribution of a statistic by resampling with replacement from the original data. It connects to the sampling and inference concepts covered next and is particularly valuable when theoretical distributions are unknown or when sample sizes are small.
Bootstrap Methodology
Key Principles:
- Empirical Distribution: Use the sample as an approximation of the population
- Resampling with Replacement: Create new samples by drawing randomly from the original data
- Statistical Inference: Estimate confidence intervals and hypothesis tests without distributional assumptions
- Consistency: Bootstrap estimates converge to true population parameters as sample size increases
Types of Bootstrap Methods
1. Non-parametric Bootstrap:
- Resample directly from observed data
- No distributional assumptions
- Most common in financial applications
2. Parametric Bootstrap:
- Fit a parametric model to data
- Resample from the fitted distribution
- Useful when distributional form is known
3. Block Bootstrap:
- Preserve time series structure
- Resample blocks of consecutive observations
- Essential for dependent financial data
Formulas & Calculations
Basic Bootstrap Algorithm
1. Original sample: X = {x₁, x₂, ..., xₙ}
2. For b = 1 to B bootstrap replications:
a. Draw random sample X*ᵦ = {x*₁, x*₂, ..., x*ₙ} with replacement from X
b. Calculate statistic θ*ᵦ = f(X*ᵦ)
3. Bootstrap distribution: {θ*₁, θ*₂, ..., θ*_B}
Bootstrap Standard Error
SE_bootstrap(θ̂) = √[(1/(B-1)) × Σ(θ*ᵦ - θ̄*)²]
Where:
- θ̄* = (1/B) × Σθ*ᵦ (bootstrap mean)
- B = number of bootstrap replications
Bootstrap Confidence Intervals
Percentile Method:
95% CI: [θ*₍₀.₀₂₅₎, θ*₍₀.₉₇₅₎]
Where θ*₍ₚ₎ is the p-th percentile of bootstrap distribution
Bias-Corrected and Accelerated (BCa):
Adjusted percentiles: [θ*₍α₁₎, θ*₍α₂₎]
Where α₁ and α₂ are bias and skewness adjusted
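A minimal percentile-method sketch using only the standard library (the return sample is illustrative):

```python
import random
from statistics import mean

# Percentile bootstrap CI for the mean of a small illustrative return sample
random.seed(7)
data = [0.02, -0.01, 0.03, 0.015, -0.005, 0.04, 0.01, -0.02, 0.025, 0.005]
B = 5_000
boot_means = sorted(mean(random.choices(data, k=len(data))) for _ in range(B))
ci = (boot_means[int(0.025 * B)], boot_means[int(0.975 * B)])
print(round(mean(data), 3), ci)   # point estimate 0.011 with its 95% CI
```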
Block Bootstrap for Time Series
Block length: l
Number of blocks: k = ⌈n/l⌉
Resample k blocks with replacement
Concatenate to form bootstrap sample
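The steps above can be sketched as follows (this is the overlapping "moving block" variant; the block length and series are illustrative):

```python
import random

def block_bootstrap_sample(series, block_length):
    """Moving-block bootstrap: resample overlapping blocks with replacement."""
    n = len(series)
    k = -(-n // block_length)            # ceil(n / block_length) blocks
    sample = []
    for _ in range(k):
        start = random.randint(0, n - block_length)   # random block start
        sample.extend(series[start:start + block_length])
    return sample[:n]                    # trim to the original length

random.seed(3)
ts = list(range(20))                     # stand-in for a dependent return series
resampled = block_bootstrap_sample(ts, block_length=5)
print(len(resampled))  # 20: same length, local ordering preserved within blocks
```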
Practical Examples
Example 1: Sharpe Ratio Confidence Interval
Problem: Estimate confidence interval for portfolio Sharpe ratio using limited historical data.
import random
from statistics import mean, stdev

def bootstrap_sharpe_ratio(returns, risk_free_rate, n_bootstrap=1000):
    n = len(returns)
    original_sharpe = (mean(returns) - risk_free_rate) / stdev(returns)
    bootstrap_sharpes = []
    for _ in range(n_bootstrap):
        # Resample with replacement
        bootstrap_sample = random.choices(returns, k=n)
        # Calculate bootstrap Sharpe ratio
        bootstrap_sharpes.append(
            (mean(bootstrap_sample) - risk_free_rate) / stdev(bootstrap_sample))
    # Percentile confidence interval
    bootstrap_sharpes.sort()
    return {
        'original_sharpe': original_sharpe,
        'ci_lower': bootstrap_sharpes[int(0.025 * n_bootstrap)],
        'ci_upper': bootstrap_sharpes[int(0.975 * n_bootstrap)],
        'standard_error': stdev(bootstrap_sharpes)
    }
Example 2: VaR Model Validation
Backtesting VaR Models:
def bootstrap_var_backtest(returns, var_estimates, confidence_level, n_bootstrap=1000):
n = len(returns)
violations = [1 if returns[i] < -var_estimates[i] else 0 for i in range(n)]
actual_violation_rate = mean(violations)
expected_violation_rate = 1 - confidence_level
bootstrap_violation_rates = []
for b in range(n_bootstrap):
# Resample violation indicators
bootstrap_violations = [violations[random.randint(0, n-1)] for _ in range(n)]
bootstrap_rate = mean(bootstrap_violations)
bootstrap_violation_rates.append(bootstrap_rate)
# Test if actual rate is significantly different from expected
p_value = sum([1 for rate in bootstrap_violation_rates if abs(rate - expected_violation_rate) >= abs(actual_violation_rate - expected_violation_rate)]) / n_bootstrap
return {
'actual_violation_rate': actual_violation_rate,
'expected_violation_rate': expected_violation_rate,
'p_value': p_value,
'bootstrap_distribution': bootstrap_violation_rates
}
Example 3: Historical Simulation vs Parametric VaR
Comparing VaR Methods:
def compare_var_methods(returns, confidence_level, n_bootstrap=1000):
n = len(returns)
# Historical Simulation VaR
returns_sorted = sorted(returns)
historical_var = -returns_sorted[int((1 - confidence_level) * n)]
# Parametric VaR (normal assumption)
mu = mean(returns)
sigma = std(returns)
parametric_var = -(mu + norm.ppf(1 - confidence_level) * sigma)
# Bootstrap confidence intervals for Historical VaR
bootstrap_historical_vars = []
for b in range(n_bootstrap):
bootstrap_sample = [returns[random.randint(0, n-1)] for _ in range(n)]
bootstrap_sample.sort()
bootstrap_var = -bootstrap_sample[int((1 - confidence_level) * n)]
bootstrap_historical_vars.append(bootstrap_var)
# Confidence interval for Historical VaR
bootstrap_historical_vars.sort()
hist_var_ci_lower = bootstrap_historical_vars[int(0.025 * n_bootstrap)]
hist_var_ci_upper = bootstrap_historical_vars[int(0.975 * n_bootstrap)]
return {
'historical_var': historical_var,
'parametric_var': parametric_var,
'historical_var_ci': [hist_var_ci_lower, hist_var_ci_upper],
'bootstrap_distribution': bootstrap_historical_vars
}
DeFi Application: MEV and Liquidity Analysis
Maximum Extractable Value (MEV) Opportunity Analysis
Problem: Analyze the distribution of MEV opportunities using limited historical data from blockchain transactions.
def bootstrap_mev_analysis(mev_opportunities, gas_costs, n_bootstrap=1000):
"""
Analyze MEV opportunity distribution using bootstrap resampling
mev_opportunities: list of observed MEV profits
gas_costs: corresponding gas costs for each opportunity
"""
n = len(mev_opportunities)
net_profits = [mev - gas for mev, gas in zip(mev_opportunities, gas_costs)]
# Original statistics
original_mean_profit = mean(net_profits)
original_profit_rate = sum([1 for profit in net_profits if profit > 0]) / n
original_max_profit = max(net_profits)
bootstrap_results = {
'mean_profits': [],
'profit_rates': [],
'max_profits': [],
'percentile_90': [],
'sharpe_ratios': []
}
for b in range(n_bootstrap):
# Resample MEV opportunities
indices = [random.randint(0, n-1) for _ in range(n)]
bootstrap_profits = [net_profits[i] for i in indices]
# Calculate bootstrap statistics
bootstrap_results['mean_profits'].append(mean(bootstrap_profits))
bootstrap_results['profit_rates'].append(sum([1 for p in bootstrap_profits if p > 0]) / n)
bootstrap_results['max_profits'].append(max(bootstrap_profits))
bootstrap_results['percentile_90'].append(percentile(bootstrap_profits, 90))
# MEV Sharpe ratio (profit/volatility)
if std(bootstrap_profits) > 0:
bootstrap_results['sharpe_ratios'].append(mean(bootstrap_profits) / std(bootstrap_profits))
# Calculate confidence intervals
confidence_intervals = {}
for metric in bootstrap_results:
sorted_values = sorted(bootstrap_results[metric])
confidence_intervals[metric] = [
sorted_values[int(0.025 * n_bootstrap)],
sorted_values[int(0.975 * n_bootstrap)]
]
return {
'original_statistics': {
'mean_profit': original_mean_profit,
'profit_rate': original_profit_rate,
'max_profit': original_max_profit
},
'bootstrap_distributions': bootstrap_results,
'confidence_intervals': confidence_intervals
}
Liquidity Crisis Simulation
Block Bootstrap for Correlated Liquidity Events:
def bootstrap_liquidity_crisis(price_data, volume_data, liquidity_metrics, block_length=5, n_bootstrap=1000):
"""
Use block bootstrap to preserve temporal dependencies in liquidity analysis
"""
n = len(price_data)
n_blocks = n // block_length
crisis_indicators = []
for b in range(n_bootstrap):
# Block bootstrap to preserve time series structure
bootstrap_indices = []
for _ in range(n_blocks):
start_idx = random.randint(0, n - block_length)
block_indices = list(range(start_idx, start_idx + block_length))
bootstrap_indices.extend(block_indices)
# Trim to original length
bootstrap_indices = bootstrap_indices[:n]
# Create bootstrap time series
bootstrap_prices = [price_data[i] for i in bootstrap_indices]
bootstrap_volumes = [volume_data[i] for i in bootstrap_indices]
bootstrap_liquidity = [liquidity_metrics[i] for i in bootstrap_indices]
# Identify liquidity crisis periods
crisis_periods = identify_liquidity_crisis(bootstrap_prices, bootstrap_volumes, bootstrap_liquidity)
crisis_indicators.append({
'crisis_frequency': len(crisis_periods) / n,
'average_crisis_duration': mean([period['duration'] for period in crisis_periods]) if crisis_periods else 0,
'max_price_impact': max([period['max_impact'] for period in crisis_periods]) if crisis_periods else 0
})
return crisis_indicators
def identify_liquidity_crisis(prices, volumes, liquidity_metrics, threshold_percentile=95):
"""
Identify periods of liquidity crisis based on multiple indicators
"""
# Calculate price volatility
price_changes = [abs(prices[i] - prices[i-1]) / prices[i-1] for i in range(1, len(prices))]
# Calculate liquidity stress indicators
volume_threshold = percentile(volumes, 100 - threshold_percentile) # Low volume
liquidity_threshold = percentile(liquidity_metrics, threshold_percentile) # High impact
volatility_threshold = percentile(price_changes, threshold_percentile) # High volatility
crisis_periods = []
in_crisis = False
crisis_start = None
for i in range(len(volumes)):
# Crisis conditions: low volume AND (high liquidity impact OR high volatility)
crisis_condition = (volumes[i] < volume_threshold and
(liquidity_metrics[i] > liquidity_threshold or
(i > 0 and price_changes[i-1] > volatility_threshold)))
if crisis_condition and not in_crisis:
# Start of crisis
in_crisis = True
crisis_start = i
elif not crisis_condition and in_crisis:
# End of crisis
in_crisis = False
crisis_periods.append({
'start': crisis_start,
'end': i,
'duration': i - crisis_start,
'max_impact': max(liquidity_metrics[crisis_start:i+1])
})
return crisis_periods
Integration and Advanced Applications
Combining Simulation Methods
Hybrid Monte Carlo-Bootstrap Approach
def hybrid_simulation_var(historical_returns, portfolio_weights, confidence_level, monte_carlo_steps=1000, bootstrap_replications=500):
"""
Combine Monte Carlo simulation with bootstrap resampling for robust VaR estimation
"""
var_estimates = []
for b in range(bootstrap_replications):
# Bootstrap resample historical returns
n = len(historical_returns)
bootstrap_returns = [historical_returns[random.randint(0, n-1)] for _ in range(n)]
# Estimate parameters from bootstrap sample
mu = mean(bootstrap_returns)
sigma = std(bootstrap_returns)
# Monte Carlo simulation with bootstrap-estimated parameters
simulated_returns = []
for mc in range(monte_carlo_steps):
random_return = random.normal(mu, sigma)
portfolio_return = sum([w * random_return for w in portfolio_weights]) # Simplified single-asset case
simulated_returns.append(portfolio_return)
# Calculate VaR from simulated returns
simulated_returns.sort()
var = -simulated_returns[int((1 - confidence_level) * monte_carlo_steps)]
var_estimates.append(var)
# Analyze distribution of VaR estimates
final_var = mean(var_estimates)
var_confidence_interval = [
sorted(var_estimates)[int(0.025 * bootstrap_replications)],
sorted(var_estimates)[int(0.975 * bootstrap_replications)]
]
return {
'var_estimate': final_var,
'confidence_interval': var_confidence_interval,
'var_distribution': var_estimates
}
Model Validation and Sensitivity Analysis
Simulation Method Comparison Framework
def compare_simulation_methods(asset_returns, option_params, var_confidence_level):
"""
Comprehensive comparison of different simulation approaches
"""
results = {}
# 1. Historical Simulation (Bootstrap-based)
results['historical_simulation'] = bootstrap_historical_var(asset_returns, var_confidence_level)
# 2. Parametric Monte Carlo
mu = mean(asset_returns)
sigma = std(asset_returns)
results['parametric_monte_carlo'] = monte_carlo_var(mu, sigma, var_confidence_level)
# 3. Bootstrap Monte Carlo Hybrid
results['hybrid_method'] = hybrid_simulation_var(asset_returns, [1.0], var_confidence_level)
# 4. Option Pricing Comparison
if option_params:
results['option_pricing'] = {
'monte_carlo': monte_carlo_option_pricing(**option_params),
'black_scholes': black_scholes_analytical(**option_params)
}
return results
Summary and Key Takeaways
Critical Relationships
- Normal-Lognormal Connection: Continuously compounded returns (normal) → Asset prices (lognormal)
- Simulation Hierarchy: Historical simulation is a special case of bootstrap resampling, which in turn belongs to the broader family of Monte Carlo methods
- DeFi Applications: Traditional finance simulation + Blockchain-specific factors (gas, MEV, liquidity)
Best Practices
- Model Validation: Always compare simulation results with analytical solutions when available
- Convergence Testing: Monitor simulation convergence and use appropriate sample sizes
- Sensitivity Analysis: Test robustness across different parameter assumptions
- Time Dependencies: Use block bootstrap for time series data to preserve correlation structure
Exam Focus Points exam-focus
- Understand WHY lognormal distributions are used for asset prices
- Know the complete Monte Carlo simulation process
- Distinguish between parametric and non-parametric bootstrap methods
- Apply simulation methods to practical investment problems
- Recognize when each method is most appropriate
DeFi-Specific Considerations
- Gas Price Volatility: Models must account for network congestion dynamics
- MEV Extraction: Bootstrap methods help analyze limited MEV opportunity data
- Liquidity Crises: Block bootstrap preserves temporal dependencies in DeFi liquidity events
- Protocol Stress Testing: Monte Carlo simulations evaluate protocol resilience under extreme scenarios
This comprehensive approach to simulation methods provides the foundation for advanced quantitative analysis in both traditional finance and emerging DeFi applications.