Thesis defences

PhD Oral Exam - Zaniar Ahmadi, Mathematics

Three Essays on Stochastic Processes, Reinforcement Learning, and Financial Applications


Date & time
Monday, November 10, 2025
10 a.m. – 1 p.m.
Cost

This event is free

Organization

School of Graduate Studies

Contact

Dolly Grewal

Where

J.W. McConnell Building
1400 De Maisonneuve Blvd. W.
Room 921-04

Accessible location

Yes

When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge of the thesis subject as well as the student’s own contributions to the subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.

Once accepted, the candidate presents the thesis orally. This oral exam is open to the public.

Abstract

In the first study, adopting a perturbation approach, we derive expressions for the potential densities of refracted skew Brownian motion (skew Brownian motion with two-valued drift). As applications, we recover its transition density and study its long-time asymptotic behavior, and we compare our results with previous results on transition densities for skew Brownian motions. We also propose two approaches for generating quasi-random samples by approximating the cumulative distribution function and discuss their application to risk measurement.
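
To illustrate the general idea of quasi-random sampling via an approximated cumulative distribution function (not the specific construction developed in the thesis), the Python sketch below tabulates a density on a grid, forms an approximate CDF, and inverts it numerically on a scrambled Sobol sequence. The target density, grid, and sample size are illustrative assumptions.

# Minimal sketch: quasi-random sampling by approximating and inverting a CDF.
# The standard normal target and the grid below are illustrative assumptions.
import numpy as np
from scipy.stats import qmc, norm

def approx_inverse_cdf_sampler(pdf, grid, n_samples, seed=0):
    """Draw quasi-random samples from `pdf` by tabulating its CDF on `grid`
    and inverting it with linear interpolation."""
    density = pdf(grid)
    cdf = np.cumsum(density)
    cdf /= cdf[-1]                       # normalise so the CDF ends at 1
    u = qmc.Sobol(d=1, scramble=True, seed=seed).random(n_samples).ravel()
    return np.interp(u, cdf, grid)       # numerical inverse of the CDF

if __name__ == "__main__":
    grid = np.linspace(-5.0, 5.0, 2001)
    samples = approx_inverse_cdf_sampler(norm.pdf, grid, n_samples=1024)
    # Low-discrepancy samples like these can then feed risk measures.
    print("95% quantile estimate:", np.quantile(samples, 0.95))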

In the second study, we introduce a dynamic portfolio optimization model for an investor who is averse to both financial risk and the greenhouse gas (GHG) emissions footprint of the portfolio. The dynamic asset allocation is obtained by solving for the optimal mean-variance-emission trade-off with reinforcement learning (RL). As an application of the model, we consider an investor seeking the optimal allocation across the eleven sectors of the S&P 500, and we conduct the analysis for multiple investors with varying levels of risk and GHG aversion. Optimal RL portfolios are shown to substantially outperform equal-weighted, optimal static and myopic dynamic benchmarks by (i) identifying sectors that offer a favorable trade-off between financial and environmental performance in line with the investor's preferences, (ii) adjusting the allocation dynamically to reflect changes in market and emission conditions, and (iii) accounting for the interaction between rebalancing decisions over time. Moreover, the RL optimization model demonstrates consistent performance throughout the entire investment period, not just at the final time horizon, making it a robust strategy.
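
As a rough illustration of a mean-variance-emission trade-off (not the exact objective defended in the thesis), the Python sketch below computes a single-period reward that credits portfolio return while penalizing variance and GHG emission intensity. The penalty form, parameter names, and numerical values are assumptions made for illustration only.

# Minimal sketch: per-period mean-variance-emission reward for an RL allocator.
# Penalty form and parameters are illustrative assumptions.
import numpy as np

def mve_reward(weights, asset_returns, cov_matrix, emission_intensity,
               risk_aversion=5.0, emission_aversion=1.0):
    """Reward = portfolio return - risk penalty - emissions penalty."""
    port_return = weights @ asset_returns
    port_variance = weights @ cov_matrix @ weights
    port_emissions = weights @ emission_intensity   # footprint per dollar invested
    return port_return - risk_aversion * port_variance \
           - emission_aversion * port_emissions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sectors = 11                           # e.g. the S&P 500 sectors
    w = np.full(n_sectors, 1.0 / n_sectors)  # equal-weighted starting point
    r = rng.normal(0.005, 0.02, n_sectors)
    cov = np.diag(rng.uniform(0.0004, 0.002, n_sectors))
    ghg = rng.uniform(0.1, 1.0, n_sectors)
    print("one-period reward:", mve_reward(w, r, cov, ghg))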

Finally, in the last study, we investigate reinforcement learning (RL) as a framework for the dynamic hedging of swaptions, contrasting its performance with traditional sensitivity-based rho-hedging. We design agents under three distinct objective functions, mean squared error (MSE), downside risk, and Conditional Value-at-Risk (CVaR), to capture alternative risk preferences and evaluate how these objectives shape hedging styles. Relying on a three-factor arbitrage-free dynamic Nelson-Siegel model for our simulation experiments, we find that most of the hedging effectiveness is achieved with two swaps as hedging instruments. The underlying swap consistently anchors the hedge by replicating exposure to the level factor, while a second swap complements it by mitigating residual slope and curvature risks, or by serving as a targeted tail-risk hedge under the CVaR objective. The different objectives induce distinct behaviors: the MSE agent relies on level exposures, the downside-risk agent balances gain protection with selective risk-taking, and the CVaR agent prioritizes tail protection while opportunistically harvesting premia. Compared with rho-hedging, RL strategies avoid over-hedging, adapt dynamically across payoff states, reduce turnover when transaction costs are present, and remain robust under model misspecification. These results highlight RL’s potential to deliver more efficient and resilient swaption hedging strategies.
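
For readers unfamiliar with the three objectives named above, the Python sketch below evaluates textbook versions of MSE, downside risk, and CVaR on a vector of simulated hedging profit and loss. The exact loss definitions used in the thesis may differ; this is only a generic illustration.

# Minimal sketch: three standard hedging-risk objectives on simulated P&L.
# Definitions are textbook forms, assumed for illustration.
import numpy as np

def mse(pnl):
    """Mean squared hedging error."""
    return np.mean(pnl ** 2)

def downside_risk(pnl):
    """Penalise only negative hedging P&L (losses)."""
    return np.mean(np.minimum(pnl, 0.0) ** 2)

def cvar(pnl, alpha=0.95):
    """Conditional Value-at-Risk: average loss beyond the alpha-quantile."""
    losses = -pnl                              # losses are negative P&L
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pnl = rng.normal(0.0, 1.0, 10_000)         # simulated hedging P&L
    print(mse(pnl), downside_risk(pnl), cvar(pnl))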
