MSc, MA, & PhD defences


Upcoming defence

TBA
 


Past defences
 

Title: Heuristic Conjectures for Moments of Cubic L-Functions Over Function Fields
Speaker: Mr. Brian How (MSc)
Date: Friday, June 26, 2020
Time: 1:00 p.m.
Location: For more information regarding this online defence, please contact Dr. Chantal David
Abstract: Let Lq(s, χ) be the Dirichlet L-function associated to χ, a cubic Dirichlet character with conductor of degree d over the polynomial ring Fq[T]. Following similar work by Keating and Snaith on moments of the Riemann zeta-function, Conrey, Farmer, Keating, Rubinstein, and Snaith [CFKRS 2005] introduced a framework for proposing conjectural formulae for integral moments of general L-functions with the help of random matrix theory.

In this thesis we review the heuristic found in [CFKRS 2005] and apply their work in order to propose moments for Lq(s, χ), cubic L-functions over function fields. We find asymptotic formulae when q ≡ 1 (mod 3), the Kummer case, and when q ≡ 2 (mod 3), the non-Kummer case. Moreover, while the authors of [CFKRS 2005] provide only the framework for proposing (k, k)-moments of primitive L-functions, we extend their work following the work of David, Lalín, and Nam [DLN] to propose (k, l)-moments of cubic L-functions where k ≥ l ≥ 1. Furthermore, we provide explicit computations that elucidate the combinatorics of leading order moments and find a general form as well.
Title: On Properties of Ruled Surfaces and their Asymptotic Curves 
Speaker: Ms. Sokphally Ky (MSc)
Date: Thursday, June 11, 2020
Time: 4:00 p.m.
Location: For more information regarding this online defence, please contact Dr. Alina Stancu
Abstract: Ruled surfaces are widely used in mechanical industries, robotic design, and architecture in functional and fascinating constructions. Thus, ruled surfaces have drawn interest not only from mathematicians, but also from many scientists such as mechanical engineers, computer scientists, and architects. In this paper, we study ruled surfaces and their properties from the point of view of differential geometry, and we derive specific relations between certain ruled surfaces and particular curves lying on these surfaces. We investigate the main differential geometric properties of ruled surfaces such as their metrics, striction curves, Gauss curvature, mean curvature, and lastly geodesics. We then narrow our focus to two special ruled surfaces: the rectifying developable ruled surface and the principal normal ruled surface of a curve. Working on the properties of these two ruled surfaces, we see that certain space curves, such as cylindrical helices and Bertrand curves, as well as Darboux vector fields on these specific ruled surfaces, are important elements in certain characterizations of these two ruled surfaces. This latter part of the thesis centers around a paper by Izumiya and Takeuchi, for which we have considered our own proofs. Along the way, we also touch on the question of uniqueness of striction curves of doubly ruled surfaces.
Title: Seiberg-Witten Tau-Function on Hurwitz Spaces
Speaker: Ms. Meghan White (MSc)
Date: Thursday, January 23, 2020
Time: 4:00 p.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: We provide a proof of the form taken by the Seiberg-Witten tau-function on the Hurwitz space of N-fold ramified covers of CP1 by a compact Riemann surface of genus g, a result derived in [11] for a special class of monodromy data. To this end, we examine the Riemann-Hilbert problem with N × N quasi-permutation monodromies, whose corresponding isomonodromic tau-function contains the Seiberg-Witten tau-function as one of three factors.

We present the solution of the Riemann-Hilbert problem following [9]. Along the way, we give elementary proofs of variational formulas on Hurwitz spaces, including the Rauch formulas.
Title: Individual Claims Reserving: Using Machine Learning Methods
Speaker: Mr. Dong Qiu (MSc)
Date: Friday, December 6, 2019
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: To date, most methods for loss reserving are still applied to aggregate data arranged in a triangular form, such as the Chain-Ladder (CL) Method and the over-dispersed Poisson (ODP) Method. With the boom in machine learning methods and the significant increase in computing power, the loss of information resulting from the aggregation of individual claims data into accident and development year buckets is no longer justifiable. Machine learning methods like Neural Networks (NN) and Random Forests (RF) are then applied, and the results are compared with the traditional methods on both simulated data and real data (aggregated at the company level).
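As a toy illustration of the contrast drawn above (not taken from the thesis), the sketch below completes a miniature aggregate triangle with chain-ladder development factors, then fits a random forest to simulated individual claims; all data, features, and parameters are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy cumulative loss triangle: rows = accident years, cols = development years.
tri = np.array([[100., 150., 170.],
                [110., 165., np.nan],
                [120., np.nan, np.nan]])

# Chain-ladder: volume-weighted development factors fill the lower triangle.
for j in range(tri.shape[1] - 1):
    known = ~np.isnan(tri[:, j + 1])
    f = tri[known, j + 1].sum() / tri[known, j].sum()
    fill = np.isnan(tri[:, j + 1])
    tri[fill, j + 1] = tri[fill, j] * f
print(tri)  # completed triangle; the last column holds the ultimates

# Individual-claims alternative: predict each claim's ultimate from its own
# (here simulated) covariates instead of aggregating into a triangle first.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-ins for report lag, injury code, etc.
y = np.exp(1.0 + X[:, 0] + 0.5 * rng.normal(size=500))  # simulated ultimates
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(rf.predict(X[:3]))  # per-claim predictions
```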
Title: A Brief Review of Support Vector Machines and a Proposal for a New Kernel via Localization Heuristics
Speaker: Mr. Malik Balogoun (MA)
Date: Thursday, August 29, 2019
Time: 2:00 p.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract:

In this paper, we attempt to solve a particular binary classification problem in the framework of support vector machines. The case involves observations from two classes, uniformly distributed on a space in such a way that linear separation by a hyperplane is possible only in tiny cubes (or rectangles) of that space. The general approach to classification in the input space is then extended with the design of a new ad hoc kernel that is expected to perform better in the feature space than the most common kernels found in the literature. Theoretical discussions supporting the validity of the newly designed kernel and its convergence to the Bayes classifier, together with its application to a simulated dataset, constitute our core contribution to one way of approaching a classification problem.


In order to reach this goal and grasp the necessary mathematical tools and concepts in support vector machines, a literature review with some applications is provided in the first four sections of this document. The last and fifth section answers the question that motivates this research.
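The ad hoc kernel designed in the thesis is not reproduced here; as a hedged illustration of the localization idea, the sketch below multiplies an RBF kernel by a same-grid-cell indicator (a product of two valid kernels is again a valid kernel) and passes it to scikit-learn's SVC. The grid, the checkerboard labels, and all parameters are invented, and the kernel's grid is deliberately matched to the label pattern.

```python
import numpy as np
from sklearn.svm import SVC

def localized_rbf(X, Y, gamma=5.0, cells=4):
    # RBF kernel times an indicator that two points share a grid cell;
    # the indicator is an inner product of one-hot cell encodings, so the
    # product is positive semi-definite.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    cx = np.floor(X * cells).astype(int)
    cy = np.floor(Y * cells).astype(int)
    same = (cx[:, None, :] == cy[None, :, :]).all(-1)
    return np.exp(-gamma * sq) * same

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 2))
y = (np.floor(X * 4).sum(axis=1) % 2).astype(int)  # checkerboard labels
clf = SVC(kernel=localized_rbf).fit(X, y)
print(clf.score(X, y))  # near-perfect on this locally separable toy problem
```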

Title: Skewed Spatial Modeling for Arsenic Contamination in Bangladesh
Speaker: Mr. Qi Zhang (MA)
Date: Thursday, August 29, 2019
Time: 11:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Bangladesh has been facing a serious arsenic contamination problem for more than two decades. Drinking and irrigating with contaminated water put the health of more than 85 million people at risk. The project “Groundwater Studies for Arsenic Contamination in Bangladesh,” led by the British Geological Survey, was conducted from 1998 to 2001. A few studies have been carried out from different perspectives. The district of Comilla is considered to be the most severely affected region, with the highest number of arsenic-related deaths, according to Flanagan, Johnston, and Zheng (2012).

In this thesis, we examine the arsenic groundwater concentration in Comilla district.

We propose spatial models for making inference under a Bayesian framework. We demonstrate that models based on the gamma distribution with a spatial structure capture the characteristics of arsenic levels more appropriately than other models. We also perform spatial interpolation (kriging) to describe the arsenic levels across all of Comilla.
Title: Worst-Case Valuation of Equity-Linked Products Using Risk-Minimizing Strategies
Speaker: Mr. Emmanuel Osei-Mireku (MSc)
Date: Tuesday, August 27, 2019
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: The global market for life insurance products has been stable over the years. However, equity-linked products, which form about fifteen percent of the total life insurance market, have experienced a decline in premiums written. The impact of model risk when hedging these investment guarantees has been found to be significant.

We propose a framework to determine the worst-case value of an equity-linked product through partial hedging using quantile and conditional value-at-risk measures. The model integrates both the mortality and the financial risk associated with these products to estimate the value as well as the hedging strategy. We rely on robust optimization techniques for the worst-case hedging strategy. To demonstrate the versatility of the framework, we present numerical examples of point-to-point equity-indexed annuities under multinomial lattice dynamics.
Title: Decomposition of Risk in Life Insurance Based on the Martingale Representation Theorem
Speaker: Mr. Edwin Ng (MSc)
Date: Thursday, August 1, 2019
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Numerous methods have been proposed throughout the literature for decomposing liabilities into risk factors. Such analysis is of great importance because it explains the impact of each source of risk in relation to the total risk, and thus allows actuaries to have a certain degree of control over uncertainties. In an insurance context, these sources usually consist of the mortality risk, represented in this paper by the systematic and the unsystematic mortality risk, and of the investment risk. The objective of this thesis is to consider the Martingale Representation Theorem (MRT) introduced by Schilling et al. (2015) for such risk decomposition, because this method allows for a detailed analysis of the influence of each source of risk.

The dynamic models used in this thesis are the Lee-Carter model for the mortality rates and the arbitrage-free Nelson-Siegel (AFNS) model for the interest rates. These models are necessary for accuracy, as they improve the overall predictive performance. Once the risk decomposition has been achieved, we quantify the relative importance of each risk factor under different risk measures. The numerical results are based on annuity and insurance portfolios. It is found that for extended coverage periods, investment risk represents most of the risk, while for shorter terms the unsystematic mortality risk takes on larger importance. It is also found that the systematic mortality risk is almost negligible.
Title: Yield Curve Modelling:  A Comparison of Principal Components Analysis and the Discrete-Time Vasicek Model
Speaker: Ms. Irene Asare (MSc)
Date: Wednesday, July 31, 2019
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: The term structure of interest rates is relevant to economists as it reflects the information available to the market about the time value of money in the future. Affine term structure models, such as short rate models, have been used in interest rate modelling over the past years to determine the mechanisms driving the term structure. Machine learning approaches are explored in this thesis and compared to the traditional econometric approach, specifically the Vasicek model. Multifactor Vasicek models are considered, as the one-factor model is found inadequate to characterize the term structure of interest rates.

Since the short rates are not observable, the Kalman filter approach is used to estimate the parameters of the Vasicek model. This thesis utilizes Canadian zero-coupon bond price data in the implementation of both methods, and it is observed with both that increasing the number of factors to three increases the ability to capture the curvature of the yield curve. The first factor is identified as responsible for the level of the yield curve, the second factor for the slope, and the third factor for the curvature. This is consistent with results obtained from previous work on term structure models. The results from this work indicate that the machine learning technique, specifically the first three principal components of Principal Component Analysis (PCA), outperforms the Vasicek model in fitting the yield curve.
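As a minimal illustration of the PCA side of this comparison, with simulated yields standing in for the Canadian zero-coupon data, the sketch below recovers the level, slope, and curvature factors:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
mats = np.array([0.25, 0.5, 1, 2, 5, 10, 30])       # maturities in years
level = 0.03 + 0.002 * rng.normal(size=(500, 1))    # parallel shifts
slope = 0.01 * rng.normal(size=(500, 1)) * np.exp(-mats / 2)
curv = 0.005 * rng.normal(size=(500, 1)) * mats * np.exp(-mats / 2)
yields = level + slope + curv                       # 500 days x 7 maturities

pca = PCA(n_components=3).fit(yields)
print(pca.explained_variance_ratio_)  # three factors explain ~all variance
print(np.round(pca.components_, 2))   # level / slope / curvature loadings
```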
Title: Absolutely Continuous Invariant Measures for Piecewise Convex Maps of Interval with Infinite Number of Branches
Speaker: Mr. Md Hafizur Rahman (MSc)
Date: Tuesday, July 2, 2019
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: The main result of this Master's thesis is the generalization of the existence of absolutely continuous invariant measures (ACIMs) for piecewise convex maps of an interval from the case with a finite number of branches to the case with infinitely many branches. We give a similar result for piecewise concave maps as well. We also provide examples of piecewise convex maps without an ACIM.
Title: Application of Distributed Lag Models for Examining Associations Between the Built Environment and Obesity Risk in Children, QUALITY Study
Speaker: Ms. Anna Smyrnova (MSc)
Date: Thursday, July 4, 2019
Time: 3:00 p.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Features of the neighbourhood environment are associated with physical activity and nutrition habits in children and may be a key determinant for obesity risk. Studies commonly use a fixed, pre-specified buffer size for the spatial scale to construct environment measures and apply traditional methods of linear regression to calculate risk estimates. However, incorrect spatial scales can introduce biases. Whether the spatial scale changes depending on a person’s age and sex is largely unknown. Distributed lag models (DLM) were recently proposed as an alternative methodology to fixed, pre-specified buffers. The DLM coefficients follow a smooth association over distance, and a pre-specification of buffer size is not required. Therefore, the DLMs may provide a more accurate estimation of association strength, as well as the point in which the association disappears or is no longer clinically meaningful.

Using data from the QUALITY cohort (an ongoing longitudinal investigation of the natural history of obesity in Quebec youth, N = 630, mean age 9.6 years at baseline), we aimed to apply the DLM to determine whether the association between the residential neighbourhood built environment (BE) and obesity risk in children differed depending on age and sex. A second objective was to compare the DLM with a linear regression model that used pre-specified circular buffer sizes.
Title: Modeling and Measuring Insurance Risks for a Hierarchical Copula Model Considering IFRS 17 Framework
Speaker: Mr. Carlos Araiza Iturria (MSc)
Date: Wednesday, June 26, 2019
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: A stochastic approach to insurance risk modeling and measurement that is compliant with IFRS 17 is proposed. The compliance is achieved through the use of a rank-based hierarchical copula which accounts for the dependence between the various lines of business of the Canadian auto insurance industry. A model for the marginal IBNR losses of each line of business based on double generalized linear models is also developed. Development year and accident year effect factors along with an autoregressive feature for residuals enable modeling the dependence between the various entries of the IBNR loss triangles in a given line of business. Capital requirements calculations are then performed through simulation; numbers obtained with univariate and multivariate risk measures are compared. Moreover, a risk adjustment for non-financial risk required by IFRS 17 is also computed through a cost of capital approach.
Title: Critical L-values of Primitive Forms Twisted by Dirichlet Characters
Speaker: Mr. Jungbae Nam (PhD)
Date: Thursday, December 13, 2018
Time: 10:30 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Let f be a primitive form of level N and weight k > 1 with nebentypus ε, and let χ be a primitive Dirichlet character of conductor fχ with (N, fχ) = 1. Then we consider the twist of f by χ and its Dirichlet L-series, denoted by L(f, s, χ). These central L-values (or even their vanishings and nonvanishings) are believed (and partially proven) to encode important arithmetic invariants of algebraic objects over various fields, as in the class number formula for number fields.

In this thesis, we study the central L-values of f twisted by χ of prime order and present three nonvanishing theorems on some families of twists. More precisely, we first study f of weight k > 1 twisted by χ of prime order l, using Hecke operators and modular symbols, and present some numerical results on vanishings. Next, we study elliptic curves twisted by primitive cubic characters using a special family which is supported on primes. Lastly, we study elliptic curves twisted by primitive quadratic characters using Galois cohomology.
Title: Global Hedging with Options
Speaker: Ms. Behnoosh Zamanlooy (MSc)
Date: Tuesday, December 11, 2018
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: The classical global hedging approach presented in the literature (see Schweizer [1995]) involves using only the underlying asset to hedge a given contingent claim. The current thesis extends this approach by allowing a portfolio comprised of the underlying as well as other options written on that same underlying to serve as hedging instruments. Classical quadratic global hedging results, such as the dynamic programming solution approach, are adapted to this framework and used to solve the global hedging problem presented here. The performance of this methodology is then investigated and benchmarked against classical global hedging as well as the traditional delta and delta-gamma hedging approaches. Various numerical analyses of the hedging errors, turnover, and the shapes of quantities involved in the dynamic programming solution approach are performed. It is found that option-based global hedging, where options are used as hedging instruments, outperforms the other methodologies by yielding the lowest quadratic hedging error, as expected. Situations where option-based global hedging has the most significant advantage over the other hedging methodologies are identified and discussed.
Title: Comparison of Weight Growth Models in a Sample of Children from 6 to 15 Years
Speaker: Ms. Neha Wadhawan (MSc)
Date: Wednesday, September 19, 2018
Time: 3:00 p.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Human growth is a complex, natural developmental phenomenon comprised of prenatal (fetal) and postnatal (infancy, childhood, adolescence, and adulthood) growth. Weight is an eco-sensitive growth measurement that responds more rapidly to illness and loss of appetite than any other anthropometric measurement. Modelling postnatal growth in children's weight is of particular interest in order to identify those at greatest risk for serious health outcomes later in adult life, such as obesity, hypertension, cardiovascular disease, and diabetes. Traditionally, the most commonly used parametric growth models (Jenss-Bayley, Reed 1st order, and Reed 2nd order) have been recommended for children from birth to 6 years of age, but the literature on their performance in an older age range of children is limited. The adapted Jenss-Bayley model was developed to extend these models from birth to puberty. In contrast, the recently developed SITAR (SuperImposition by Translation And Rotation) model has no age range constraints and has been shown to be superior to the previous models (Jenss-Bayley and Reed 1st order) for modeling weight from birth to four years of age. No study has yet assessed the comparative performance of these models in an older age range of children. The present study aims to extend the previous work by comparing these models (Jenss-Bayley, Reed 1st order, Reed 2nd order, adapted Jenss-Bayley, and SITAR) for modelling longitudinal weight in an age range of children that starts from middle childhood and includes puberty (6 to 15 years), in the Quebec Longitudinal Study of Child Development (QLSCD) cohort (n = 2,120). Results demonstrate that the SITAR model outperformed the other four models but should be reassessed in additional studies with longer follow-up.
Title: Large Deviations for the Local Time of a Jump Diffusion with Two Sided Reflection
Speaker: Mr. Giovanni Zoroddu (MSc)
Date: Thursday, August 30, 2018
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Let X be a jump diffusion; then its reflection at the boundaries 0 and b > 0 forms the process V. The amount by which V must reflect to stay within its boundaries is added to a process called the local time. This thesis establishes a large deviation principle for the local time of a reflected jump diffusion. Upon generalizing the notion of the local time to an additive functional, we establish the desired result through a Markov process argument. By applying Ito's formula to a suitably chosen process M, and by proving that M is a martingale, we find its associated integro-differential equation. M can then be used to find the limiting behavior of the cumulant generating function, which allows the large deviation principle to be established by means of the Gärtner-Ellis theorem. These theoretical results are then illustrated with two specific examples. We first find analytical results for these examples and then test them in a Monte Carlo simulation study and by numerically solving the integro-differential equation.
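A minimal Monte Carlo sketch of this kind of simulation study, with invented toy dynamics: Euler steps for a jump diffusion, two-sided reflection on [0, b] by clipping, the clipped amount accumulated as local time, and the empirical scaled cumulant generating function evaluated at a few points.

```python
import numpy as np

rng = np.random.default_rng(0)
b, T, n, paths = 1.0, 10.0, 2_000, 2_000
dt = T / n
L = np.zeros(paths)        # accumulated local time per path
V = np.full(paths, 0.5)    # reflected process, started mid-interval
for _ in range(n):
    # Small-dt compound-Poisson approximation: at most ~one jump per step.
    jumps = rng.poisson(0.5 * dt, paths) * rng.normal(0, 0.2, paths)
    X = V - 0.1 * dt + 0.3 * np.sqrt(dt) * rng.normal(size=paths) + jumps
    Vnew = np.clip(X, 0.0, b)
    L += np.abs(X - Vnew)  # amount pushed back in = local-time increment
    V = Vnew

# Empirical scaled cumulant generating function u -> log E[exp(u L_T)] / T:
for u in (-1.0, 0.5, 1.0):
    print(u, np.log(np.mean(np.exp(u * L))) / T)
```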
Title:  New Kernels For Density and Regression Estimation via Randomized Histogram
Speaker: Ms. Ruhi Ruhi (MSc)
Date: Tuesday, August 28, 2018
Time: 2:00 p.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: In the early 2000s, Leo Breiman, the first person to notice the link between Random Forests (RF) and kernel methods, pointed out that a Random Forest grown using independent and identically distributed random variables in its tree construction is equivalent to a kernel acting on the true distribution. Later, Scornet defined Kernel-based Random Forest (KeRF) estimates and gave explicit expressions for the kernels based on the centered RF and the uniform RF. In this paper, we study the general expression for the connection function (kernel function) of an RF when splits/cuts are performed according to a uniform distribution, and also according to any general distribution. We also establish the consistency of the KeRF estimates in both cases, as well as their asymptotic normality.
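As a hedged illustration of the forest-kernel connection (the finite-forest weight view, related to but not identical with the KeRF normalization studied here), the sketch below checks that, with bootstrapping disabled, a forest prediction is exactly a weighted average of the training responses with weights given by leaf co-membership:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=200)

rf = RandomForestRegressor(n_estimators=100, bootstrap=False,
                           max_features=1, random_state=0).fit(X, y)
x0 = np.array([[0.3, 0.7]])
leaves_X, leaves_x0 = rf.apply(X), rf.apply(x0)  # leaf index per tree

# Connection weights: average over trees of 1{same leaf} / leaf size.
w = np.zeros(len(X))
for t in range(rf.n_estimators):
    same = leaves_X[:, t] == leaves_x0[0, t]
    w[same] += 1.0 / same.sum()
w /= rf.n_estimators

print(w @ y, rf.predict(x0)[0])  # the two numbers coincide
```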
Title: Registration and Display of Functional Data
Speaker: Mr. Mahdi Bahkshi (MSc)
Date: Tuesday, August 28, 2018
Time: 12:00 p.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Functional data refer to data in the form of functions or smooth curves that are assessed at a finite, but large, subset of some interval. In this thesis, we explore methods of functional data analysis, especially curve registration, in the context of climate change in a group of 16 cities of the United States. In the first step, spline functions were developed in order to convert the raw data into functional objects. Once the data are available in functional form, the mean function obtained from the unregistered curves fails to produce a satisfactory estimator; that is, the mean function does not resemble any of the observed curves. A significant problem with most functional data analyses is that of mis-aligned curves (Ramsay & Silverman, 2005). Curve registration is a method in functional data analysis that attempts to solve this problem. In the second step, we used curve registration based on landmark alignment and continuous monotone registration in order to construct a precise measurement of the average temperature. The results show the differences between unregistered and registered data, and a significant rise of the temperature in U.S. cities within the last few decades.
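As a minimal illustration of landmark registration, assuming a single landmark (each curve's peak) and invented Gaussian-bump curves, the sketch below aligns the peaks with a piecewise-linear monotone time warp; continuous monotone registration refines the same idea with smooth warping functions.

```python
import numpy as np

t = np.linspace(0, 1, 201)
shifts = [0.35, 0.5, 0.65]  # mis-aligned peak locations
curves = [np.exp(-60 * (t - s) ** 2) for s in shifts]

peaks = [t[np.argmax(c)] for c in curves]
target = np.mean(peaks)     # align every peak to the mean landmark

registered = []
for c, p in zip(curves, peaks):
    # Monotone warp h with h(0) = 0, h(target) = p, h(1) = 1; form x(h(t)).
    h = np.interp(t, [0, target, 1], [0, p, 1])
    registered.append(np.interp(h, t, c))

print(np.std(peaks))                                  # spread before
print(np.std([t[np.argmax(r)] for r in registered]))  # ~0 after
```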
Title: Support Vector Machines with Convex Combination of Kernels
Speaker: Ms. Farnoosh Rahimi (MSc)
Date: Tuesday, August 28, 2018
Time: 10:30 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Support Vector Machines (SVMs) are renowned for their excellent performance in solving data-mining problems such as classification, regression, and feature selection. In the field of statistical classification, SVMs classify data points into different groups by finding the hyperplane that maximizes the margin between the two classes. SVMs can also use kernel functions to map the data into a higher-dimensional space in case a hyperplane cannot be used to do the separation linearly. Using specific kernels allows us to model a particular feature space, and a suitable kernel can improve the SVM's ability to classify data accurately. We present a method to combine existing kernels in order to produce a new kernel which improves the accuracy of the classification and reduces the processing time. We discuss theoretical and computational issues of SVMs, implement our method on a simulated dataset to see how it works, and then apply it to some large real-world datasets.
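The specific combination studied in the thesis is not reproduced here; as a minimal sketch, any convex combination of valid kernels is again a valid kernel and can be passed directly to scikit-learn's SVC (in practice the weight a would be tuned, e.g. by cross-validation):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.datasets import make_classification

def combo_kernel(X, Y, a=0.6):
    # A convex combination of two positive semi-definite kernels.
    return (a * rbf_kernel(X, Y, gamma=0.5)
            + (1 - a) * polynomial_kernel(X, Y, degree=3))

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
clf = SVC(kernel=combo_kernel).fit(X, y)
print(clf.score(X, y))
```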
Title: Arbitrage-free Regularization, Geometric Learning, and Non-Euclidean Filtering in Finance
Speaker: Mr. Anastasis Kratsios (PhD)
Date: Monday, August 27, 2018
Time: 9:30 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract:

This thesis brings together elements of differential geometry, machine learning, and pathwise stochastic analysis to answer problems in mathematical finance. The overarching theme is the development of new stochastic machine learning algorithms which incorporate arbitrage-free and geometric features into their estimation procedures in order to give more accurate forecasts and preserve the geometric and financial structure in the data. This thesis is divided into three parts. The first part introduces the non-Euclidean upgrading (NEU) meta-algorithm which builds the universal reconfiguration and universal approximation properties into any objective learning algorithm. These properties state that a procedure can reproduce any dataset exactly and approximate any function to arbitrary precision, respectively. This is done through an unsupervised learning procedure which identifies a geometry optimizing the relationship between a dataset and the objective learning algorithm used to explain it. The effectiveness of this procedure is supported both theoretically and numerically. The numerical implementations find that NEU-ordinary least squares outperforms leading regularized regression algorithms and that NEU-PCA explains more variance with one NEU-principal component than PCA does with four classical principal components.

The second part of the thesis introduces a computationally efficient characterization of intrinsic conditional expectation for Cartan-Hadamard manifolds. This alternative characterization provides an explicit way of computing non-Euclidean conditional expectation by using geometric transformations of specific Euclidean conditional expectations. This reduces many non-convex intrinsic estimation problems to transformations of well-studied Euclidean conditional expectations. As a consequence, computationally tractable non-Euclidean filtering equations are derived and used to successfully forecast efficient portfolios by exploiting their geometry.

The third and final part of this thesis introduces a flexible modeling framework and a stochastic learning methodology for incorporating arbitrage-free features into many asset price models. The procedure works by minimally deforming the structure of a model until the objective measure acts as a martingale measure for that model. Reformulations of classical no-arbitrage results such as NFLVR, the minimal martingale measure, and the arbitrage-free Nelson-Siegel correction of the Nelson-Siegel model are all derived as solutions to specific arbitrage-free regularization problems. The flexibility and generality of this framework allows classical no-arbitrage pricing theory to be extended to models that admit arbitrage opportunities but are deformable into arbitrage-free models. Numerical implications are investigated in each of the three parts.

Title: The Shift from Classic to Modern Probability:  A Historical Study with Didactical and Epistemological Reflexions
Speaker: Mr. Vinicius Gontijo Lauar (MSc)
Date: Friday, August 24, 2018
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: In this thesis, we describe the historical shift from the classical to the modern definition of probability. We present the key ideas and insights in that process, from the first definition of Bernoulli to Kolmogorov's modern foundations, discussing some of the limitations of the old approach and the efforts of many mathematicians to achieve a satisfactory definition of probability. For our study, we have looked, as much as possible, at original sources and provided detailed proofs of some important results that the authors had written in an abbreviated style. We then use these historical results to investigate the conceptualization of probability proposed and fostered by undergraduate and graduate probability textbooks through their theoretical discourse and proposed exercises. Our findings show that, although textbooks give an axiomatic definition of probability, the main aspects of the modern approach are overshadowed by other contents. Undergraduate books may be stimulating the development of classical probability with many exercises using proportional reasoning, while graduate books concentrate their exercises on other mathematical contents, such as measure and set theory, without necessarily proposing a reflection on the modern conceptualization of probability.
Title: Understanding Inquiry, an Inquiry into Understanding: A Conception of Inquiry Based Learning in Mathematics
Speaker: Mr. Julian Frasinescu (MTM)
Date: Thursday, August 23, 2018
Time: 1:00 p.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: IBL (Inquiry Based Learning) is a group of educational approaches centered on the student and aiming at developing higher-level thinking, as well as an adequate set of Knowledge, Skills, and Attitudes (KSA). IBL is at the center of recent educational research and practice, and is expanding quickly outside of schools: in this research we propose such forms of instruction as Guided Self-Study, Guided Problem Solving, Inquiry Based Homeschooling, IB e-learning, and particularly a mixed (Inquiry-Expository) form of lecturing, named IBLecturing. The research comprises a thorough review of previous research in IBL; it clarifies what is and what is not Inquiry Based Learning, and the distinctions between its various forms: Inquiry Learning, Discovery Learning, Case Study, Problem Based Learning, Project Based Learning, Experiential Learning, etc. There is a continuum between Pure Inquiry and Pure Expository approaches, and the extreme forms are very infrequently encountered. A new cognitive taxonomy adapted to the needs of higher-level thinking and its promotion in the study of mathematics is also presented. This research comprises an illustration of the modeling by an expert (teacher, trainer, etc.) of the heuristics and of the cognitive and metacognitive strategies employed by mathematicians for solving problems and building proofs. A challenging problem has been administered to a group of gifted students from secondary school, in order to get more information about the possibility of implementing Guided Problem Solving. Various opportunities for further research are indicated, for example applying the recent advances of cognitive psychology on the role of Working Memory (WM) in higher-level thinking.
Title: Analysis on Infinite Trees and Their Boundaries
Speaker: Ms. Chana Pevzner (MSc)
Date: Monday, July 30, 2018
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: The aim of this thesis is to understand the results of Björn, Björn, Gill and Shanmugalingam [BBGS], who give an analogue of the famous Trace Theorem for Sobolev spaces on the infinite K-ary tree and its boundary.  In order to do so, we investigate the properties of a tree as a metric measure space, namely the doubling condition and Poincaré inequality, and study the boundary in terms of geodesic rays as well as random walks. We review the definitions of the appropriate Sobolev and Besov spaces and the proof of the Trace Theorem in [BBGS].
Title: Optimization of Random Forest Based Models Applying Genetic Algorithm
Speaker: Ms. Zahra Aback (MSc)
Date: Friday, July 20, 2018
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: In this century, access to large and complex datasets is much easier. These datasets are large in dimension and volume, and researchers are interested in methods that can handle these types of data and at the same time produce accurate results. Machine learning methods are particularly efficient for this type of data, where the emphasis is on data analysis and not on fitting a statistical model. A very popular method from this group is Random Forest, which has been applied in different areas of study to two types of problems: classification and regression. The former is more popular, while the latter can also produce acceptable results. Moreover, many efficient techniques for missing value imputation have been added to Random Forest over time. One of these methods, which can handle all types of variables, is MissForest. Several studies have applied different approaches to improve the performance of the classification type of Random Forest, but not many studies are available for the regression type. In the present study, we evaluated whether the performance of the regression type of Random Forest and of MissForest could be improved by applying a Genetic Algorithm as an optimization method. The experiments were conducted on five datasets to minimize the MSE of Random Forest and the imputation error of MissForest. The results showed the superiority of the proposed method over the classical Random Forest.
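As a hedged sketch of such an optimization loop (toy selection, crossover, and mutation operators; the thesis's actual GA settings are not reproduced), the code below tunes two Random Forest hyperparameters to minimize cross-validated MSE:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=300, n_features=10, noise=10, random_state=0)

def fitness(genes):  # genes = (max_depth, min_samples_leaf)
    rf = RandomForestRegressor(n_estimators=50, max_depth=int(genes[0]),
                               min_samples_leaf=int(genes[1]), random_state=0)
    return cross_val_score(rf, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

pop = rng.integers([2, 1], [20, 20], size=(10, 2))  # initial population
for gen in range(5):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-4:]]          # selection (elitism)
    # Uniform crossover: each child gene comes from a random parent.
    children = parents[rng.integers(0, 4, size=(6, 2)), [0, 1]]
    # Mutation: small random perturbation, kept within valid bounds.
    children = np.clip(children + rng.integers(-2, 3, children.shape), 1, 20)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best (max_depth, min_samples_leaf):", best)
```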
Title: Some Results for FO-definable Constraint Satisfaction Problems Described by Digraph Homomorphisms
Speaker: Mr. Patrick Moore (MSc)
Date: Thursday, May 17, 2018
Time: 10:30 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Constraint satisfaction problems, or CSPs, are a naturally occurring class of problems which involve assigning values to variables while respecting a set of constraints. When studying the computational and descriptive complexity of such problems, it is convenient to use the equivalent formulation, introduced by Feder and Vardi, of CSPs as homomorphism problems. In this context we ask whether there exists a homomorphism to some target structure. Using this view, many tools and ideas have been introduced in combinatorics, logic and algebra for studying the complexity of CSPs. In this thesis we concentrate on combinatorics and give characterization results based on digraph properties. Where previous studies focused on CSPs defined by a single digraph with lists, we extend our relational structures to consist of many binary relations which each individually describe a distinct digraph on the structure's universe. A majority of our results are obtained by using an algorithm introduced by Larose, Loten and Tardif which determines whether a structure defines a CSP whose homomorphism problem can be expressed in first-order logic. Using this tool, we begin by completely classifying which of these structures are FO-definable when each of the relations defines a transitive tournament. We then generalize a characterization theorem, first given by Lemaître, to include structures containing any finite number of digraph relations and lists. We conclude with examples of obstructions and properties that can determine whether a particular relational structure has a CSP which is FO-definable, and how to construct such structures.
Title: On Estimators of a Spectral Density Function
Speaker: Ms. Chengyin Wang (MSc)
Date: Monday, May 14, 2018
Time: 11:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: This paper presents two main approaches to estimating the spectral density of a stationary time series, both based on the classical estimator, the periodogram, and both related to non-parametric density estimation. One is the kernel spectral density estimator, while the other is the Bernstein polynomial spectral density estimator. We then introduce methods to determine the optimal parameters of each estimator for the spectral density of a stationary zero-mean process. Finally, the paper conducts simulation experiments to examine the finite-sample properties of the proposed spectral density estimators and associated tests.
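As a minimal illustration of the kernel-smoothing route (a Daniell-type moving average applied to the raw periodogram of a simulated AR(1) series; the Bernstein polynomial estimator is not sketched here):

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
n, phi = 2048, 0.6
x = np.zeros(n)
for t in range(1, n):  # AR(1): x_t = phi * x_{t-1} + e_t
    x[t] = phi * x[t - 1] + rng.normal()

freqs, pgram = periodogram(x, scaling="density")  # one-sided, fs = 1
m = 15                                            # smoothing bandwidth
smooth = np.convolve(pgram, np.ones(m) / m, mode="same")

# True one-sided AR(1) density (scipy doubles the two-sided density):
true = 2.0 / (1 - 2 * phi * np.cos(2 * np.pi * freqs) + phi ** 2)
print(np.round(smooth[10:15], 2), np.round(true[10:15], 2))
```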
Title: Transformation Based on Circular Density Estimators​
Speaker: Ms. Yuhan Cao (MSc)
Date: Monday, May 14, 2018
Time: 9:30 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Circular density estimation has been discussed for a long time. In this thesis, we introduce two transformation methods to estimate a circular density. One is derived from the kernel density estimator and the other from the Bernstein polynomial estimator (see Chaubey (2017)). Both kernel density estimation (see Silverman (1986)) and Bernstein polynomial estimation (see Babu and Chaubey (2002)) are appropriate for estimating linear data, and since we can transform linear data to circular data and vice versa, we transform the linear estimators into circular ones and ask which estimator leads to a better transformation. We conduct a simulation study to compare their estimation abilities on three distributions. The present results show that kernel density estimation has a stronger ability to alleviate boundary problems than Bernstein polynomial density estimation; however, the two perform very similarly when estimating the central part of a distribution. In general, the kernel density estimator leads to a slightly better transformation, and further research may be needed.
Title: ACIMs for Non-Autonomous Discrete Time Dynamical Systems: A Generalization of Straube’s Theorem
Speaker: Mr. Chris Keefe (MSc)
Date: Thursday, March 22, 2018
Time: 11:30 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: This Master’s thesis provides sufficient conditions under which a Non-Autonomous Dynamical System has an absolutely continuous invariant measure. The main results of this work are an extension of the Krylov-Bogoliubov theorem and Straube’s theorem, both of which provide existence conditions for invariant measures of single transformation dynamical systems, to a uniformly convergent sequence of transformations of a compact metric space, which we define to be a non-autonomous dynamical system.
Title: On the Upper Bound of Petty’s Conjecture in 3 Dimensions
Speaker: Ms. Emilie Cyrenne (MSc)
Date: Thursday, March 8, 2018
Time: 11:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Among the various important aspects within the theory of convex geometry is the field of affine isoperimetric inequalities. Our focus is on validating the upper bound of the Petty conjecture relating the volume of a convex body and that of its associated projection body. We begin our study by providing some background properties pertaining to convexity as seen through the lens of Minkowski theory. We then show that the Petty conjecture holds true in a certain class of 3-dimensional non-affine deformations of simplices. More precisely, we prove that any simplex in ℝ3 attains the upper bound in comparison to any deformation of a simplex by a Minkowski sum with a unitary line segment. As part of our theoretical analysis, we make use of mixed volumes and Maclaurin series expansions in order to simplify the targeted functionals. Finally, we provide an example validating what is known in the literature as the reverse and direct Petty projection inequality. In all cases, Mathematica is used extensively as our means of visualizing the plots of our selected convex bodies and corresponding projection bodies.
Title: How Do Students Know They Are Right and How Does One Research It?
Speaker: Ms. Natalia Vasilyeva (MTM)
Date: Monday, January 15, 2018
Time: 9:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract:

Although standards of rigor in mathematics are subject to debate among philosophers, mathematicians and educators, proof remains fundamental to mathematics and distinguishes mathematics from other sciences. There is no doubt that the ability to appreciate, understand and construct proofs is necessary for students at all levels, in particular for students in advanced undergraduate and graduate mathematics courses. However, studies show that learning and teaching proof may be problematic and students experience difficulties in mathematical reasoning and proving.

This thesis is influenced by Lakatos’ (1976) view of mathematics as a ‘quasi-empirical’ science and the role of experimentation in mathematicians’ practice. The purpose of this thesis was to gain insight into undergraduate students’ ways of validating the results of their mathematical thinking. How do they know that they are right? While working on my research, I also faced methodological difficulties. In the thesis, I included my earliest experiences as a novice researcher in mathematics education and described the process of choosing, testing and adapting a theoretical framework for analyzing a set of MAST 217 (Introduction to Mathematical Thinking) students’ solutions of a problem involving investigation. The adjusted CPiMI (Cognitive Processes in Mathematical Investigation, Yeo, 2017) model allowed me to analyze students’ solutions and draw conclusions about the ways they solve the problem and justify their results. I also placed the results of this study in the context of previous research.

Title: Modeling Nested Copulas with GLMM Marginals for Longitudinal Data
Speaker: Ms. Roba Bairakdar (MSc)
Date: Tuesday, December 19, 2017
Time: 2:30 p.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: A flexible approach for modeling longitudinal data is proposed. The model consists of nested bivariate copulas with Generalized Linear Mixed Model (GLMM) marginals, which are tested and validated by means of likelihood ratio tests and compared via their AICc and BIC values. The copulas are joined together through a vine structure. Rank-based methods are used for the estimation of the copula parameters, and appropriate model validation methods are used, such as the Cramér-von Mises goodness-of-fit test. This model allows flexibility in the choice of the marginal distributions, provided by the GLMM family. Additionally, a wide variety of copula families can be fitted to the tree structure, allowing different nested dependence structures. The methodology is tested through an application to real data in a biostatistics study.
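As a minimal illustration of rank-based estimation for a single copula parameter (a Gaussian copula recovered by inverting the empirical Kendall's tau; the nested vine structure of the thesis is beyond this sketch):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)
u = np.exp(z[:, 0])  # arbitrary monotone marginals:
v = z[:, 1] ** 3     # ranks, hence tau, are unaffected by them

tau, _ = kendalltau(u, v)
rho_hat = np.sin(np.pi * tau / 2)  # Gaussian copula: tau = (2/pi) arcsin(rho)
print(rho_hat)                     # close to the true 0.6
```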
Title: The Longitudinal Effect of Structural Brain Measurements on Cognitive Abilities
Speaker: Ms. Fatemeh Hosseininasabnaja (MSc)
Date: Monday, December 11, 2017
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Loss of brain tissue and of cognitive abilities are natural processes of aging, and they are related to each other. These changes in cognition and brain structure differ between the cognitively normal elderly and those with Alzheimer’s disease (AD). Despite great developments in the longitudinal study of decline in brain volume and cognitive abilities, previous studies are limited by their small number of data-collection waves and inadequate adjustment for important factors (such as genetic factors). These limitations diminish the power to detect changes in brain tissue and cognitive abilities over a longer period of time. In this study, we first aimed to explore the longitudinal association between cognitive abilities and global and regional structural brain variables among individuals with normal cognitive status, mild cognitive impairment (MCI), and AD using mixed effects models. Secondly, we investigated the effect of education on the relationship between cognition and brain structure. Lastly, we utilized latent class growth analysis in order to study the change in cognition between different MCI sub-classes based on their functional abilities. The data in this study were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), with 6 time points over three years (n = 686). The results showed that cognitive abilities decreased over time across different groups, and that the rate of decline in cognition depended on whole brain volume. Importantly, the effect of brain volume on the rate of decline in cognitive abilities was greater among MCI subjects who progressed to AD (pMCI) and participants with AD. Ventricle enlargement in the pMCI group also showed a significant influence on the rate of cognitive decline. Lastly, based on an assessment of functional abilities at baseline, this study demonstrated an efficient methodology to identify MCI subjects who are most at risk for progression of cognitive impairment.
Title: Sieve Methods and their Applications in Probabilistic Galois Theory
Speaker:  Mr. Salik Bahar (MA)
Date: Monday, September 25, 2017
Time: 1:00 p.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract:

David Hilbert [Hil92] showed that for an irreducible polynomial F(X, T) ∈ ℚ(T)[X] there are infinitely many rational numbers t for which F(X, t) is irreducible in ℚ[X]. In 1936, van der Waerden [vdW34] gave a quantitative form of this assertion. Consider the set of degree-n monic polynomials with integer coefficients restricted to a box |ai| ≤ B. Van der Waerden showed that a polynomial drawn at random from this set has Galois group Sn with probability going to 1 as B tends to infinity.

In the first part of the thesis, we introduce the Large Sieve Method and apply it to solve problems of Probabilistic Galois Theory over the rational numbers. We estimate En(B), the number of polynomials of degree n and height at most B whose Galois group is a proper subgroup of the symmetric group Sn. Van der Waerden conjectured that En(B) ≪ B^(n−1). P. X. Gallagher [Gal73] utilized an extension of the Large Sieve Method to obtain the estimate En(B) = O(B^(n−1/2) log^(1−γ) B), where γ ∼ (2πn)^(−1/2).

In the second part of the thesis, we state and prove a quantitative form of Hilbert’s Irreducibility Theorem by using an extension of Gallagher’s Larger Sieve method over integral points. David Zywina [Zyw10] showed that, by combining the Large and Larger Sieve Methods, one can obtain a sharper estimate of En(B).

Title: Two-Sample Test for Time Series
Speaker:  Ms. Abeer Alzahrani (MSc)
Date: Monday, September 11, 2017
Time: 10:00 a.m.
Location: LB 921-04 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: In this thesis, we consider the two-sample problem for time series. Given two time series x1,...,xn and y1,...,ym, we would like to test whether they follow the same time series model. First, we develop a unified procedure for this testing problem. The procedure consists of three steps: testing stationarity, comparing correlation structures, and comparing residual distributions. Then, we apply the established procedure to analyze real data. We also propose a modification of a nonparametric two-sample test, which can be applied to high-dimensional data with equal means and variances.
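As a hedged sketch of the three-step procedure on simulated series (the thesis's actual tests may differ; an ADF stationarity test, an empirical ACF comparison, and a Kolmogorov-Smirnov test on AR residuals stand in here):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, acf
from statsmodels.tsa.ar_model import AutoReg
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def ar1(phi, n):  # simulate an AR(1) path
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

x, y = ar1(0.5, 500), ar1(0.5, 400)

print("ADF p-values:", adfuller(x)[1], adfuller(y)[1])            # step 1
print("ACF lags 1-3:", acf(x, nlags=3)[1:], acf(y, nlags=3)[1:])  # step 2
rx = AutoReg(x, lags=1).fit().resid                               # step 3
ry = AutoReg(y, lags=1).fit().resid
print("KS test on residuals:", ks_2samp(rx, ry).pvalue)
```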
Title: The Bare Necessities for Doing Undergraduate Multivariable Calculus
Speaker: Ms. Hadas Brandes (MSc)
Date: Monday, September 11, 2017
Time: 2:00 p.m.
Location: LB 646 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Students in two mathematics streams at Concordia University start their programs on similar footing in terms of pre-requisite courses; their paths soon split in the two directions set by the Pure and Applied Mathematics (MATH) courses and the Major in Mathematics and Statistics (MAST) courses. In particular, likely during their first year of studies, the students set out to take a two-term arrangement of Multivariable Calculus in the form of MAST 218 – 219 and MATH 264 – 265, respectively. There is an ongoing discussion about the distinction between the MAST and MATH courses, and how it is justified. This thesis seeks to address the matter by identifying the mathematics that is essential for students to learn in order to succeed in each of these courses. We apply the Anthropological Theory of the Didactic (ATD) in order to model the knowledge to be taught and to be learned in MAST 218 and MATH 264, as decreed by the curricular documents and course assessments. The ATD describes units of mathematical knowledge in terms of a practical block (tasks to be done and techniques to accomplish them) and a theoretical block that frames and justifies the practical block. We use these notions to model the knowledge to be taught and learned in each course and reflect on the implications of the inclusion and exclusion of certain units of knowledge in the minimal core of what students need to learn. Based on these models, we infer that the learning of Multivariable Calculus in both courses follows in a tradition observed in single-variable calculus courses, whereby students develop compartmentalized units of knowledge. That is, we find that it is necessary for students in MAST 218 and MATH 264 to specialize in techniques that apply to certain routine tasks, and to this end, it suffices to learn bits and pieces of theoretical knowledge that are not unified in a mathematically-informed way. We briefly consider potential implications of such learning in the wider context of the MATH and MAST programs.
Title: Optimal Measure Transformations and Optimal Trading
Speaker: Mr. Renjie Wang (PhD Oral Examination)
Date: Monday, August 28, 2017
Time: 9:30 a.m.
Location: LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract:

We first associate the bond price with an optimal measure transformation problem which is closely related to a decoupled nonlinear forward-backward stochastic differential equation (FBSDE).1 In the default-free case, we prove the equivalence of the optimal measure transformation problem and an optimal stochastic control problem of Gombani and Runggaldier (Math. Financ. 23(4):659-686, 2013) for the bond price in the framework of quadratic term structure models. The measure which solves the optimal measure transformation problem is the forward measure. These connections explain why the forward measure transformation employed in the FBSDE approach of Hyndman (Math. Financ. Econ. 2(2):107-128, 2009) is effective. We obtain explicit solutions to FBSDEs with jumps in affine and quadratic term structure models, which extend Hyndman (Math. Financ. Econ. 2(2):107-128, 2009). From the optimal measure transformation problem for defaultable bonds, we derive FBSDEs with random terminal condition, to which we give a partially explicit solution. The futures price and the forward price of a risky asset are also considered in the framework of optimal measure transformation problems.

In the second part we consider trading against a hedge fund or large trader that must liquidate a large position in a risky asset if the market price of the asset crosses a certain threshold.2 Liquidation occurs in a disorderly manner and negatively impacts the market price of the asset. We consider the perspective of small investors whose trades do not induce market impact and who possess different levels of information about the liquidation trigger mechanism and the market impact. We classify these market participants into three types: fully informed, partially informed, and uninformed investors. We consider their portfolio optimization problems and compare the optimal trading and wealth processes for the three classes of investors, both theoretically and through numerical illustrations. Finally, we study the portfolio optimization problems with risk constraints and compare with the results obtained without risk constraints.

1. Based on the paper with Cody Hyndman.
2. Based on the paper with Caroline Hillairet, Cody Hyndman and Ying Jiao.

Title: On Some Refinements of the Embedding of Critical Sobolev Spaces into BMO, and a Study of Stability for Parabolic Equations with Time Delay
Speaker: Mr. Almaz Butaev (PhD Oral Examination)
Date: Monday, August 28, 2017
Time: 1:30 p.m.
Location: LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract:

Van Schaftingen [76] showed that the inequalities of Bourgain and Brezis [11], [12] give rise to new function spaces that refine the classical embedding W1,n(Rn) ⊂ BMO(Rn). It was suggested by Van Schaftingen [76] that similar results should hold in the setting of bounded domains Ω ⊂ Rn for the bmor(Ω) and bmoz(Ω) classes.

The first part of this thesis contains the proofs of these conjectures as well as the development of a non-homogeneous theory of Van Schaftingen spaces on Rn. Based on the results in the non-homogeneous setting, we are able to show that the refined embeddings can also be established for bmo spaces on Riemannian manifolds with bounded geometry, introduced by Taylor [68].

The stability of parabolic equations with time delay plays an important role in the study of non-linear reaction-diffusion equations with time delay. While the stability regions for such equations without convection on bounded time intervals were described by Travis and Webb [70], the problem remained unaddressed for equations with convection. The need to determine exact regions of stability for such equations appeared in the context of the work of Mei on the Nicholson equation with delay [50].

In the second part of this thesis, we study parabolic equations with and without convection on R. It is shown that the presence of convection terms can change the regions of stability. The implications for the stability problems for non-linear equations are also discussed.

Title: What Algebra Do Calculus Students Need to Know?
Speaker: Ms. Sabrina Giovanniello (MTM)
Date: Thursday, August 24, 2017
Time: 2:00 p.m.
Location: LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: Students taking a Calculus course for the first time at Concordia University are mature students returning to school after an extended period of time away from formal education, or students lacking the prerequisites to enter into a science, technology, engineering, or mathematics (STEM) related field. Thus, an introductory Calculus course is the gateway for many STEM programs, inhibiting students’ academic progression if not passed. Calculus tends to be construed as a very difficult subject. This impression may be due to the fact that this course is taught in a condensed form, with limited class time, new knowledge (concept, type of problem, technique or method) introduced every week, and little practice time. Calculus requires higher order thinking in mathematics, compared to what students have previously encountered, as well as many algebraic techniques.  As will be shown in this thesis, algebra plays an important role in solving problems that usually make up the final examination in this course.  Through detailed theoretical analysis of problems in one typical final examination, and solutions produced by 63 students, we have identified the prerequisite algebraic knowledge for the course and the specific difficulties, misconceptions and false rules experienced and developed by students lacking this knowledge. We have also shown how the results of our analyses can be used in the construction of a “placement test” for the course – an instrument that could serve the goal of lessening the failure rate in the course, and attrition in STEM programs, by avoiding having underprepared students.
Title: Isoperimetric-Type Inequalities for G-Chordal Star-Shaped Sets in ℝ𝒏
Speaker: Ms. Zahraa Abbas (MSc)
Date: Wednesday, August 9, 2017
Time: 2:00 p.m.
Location: LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: This paper generalizes certain existing isoperimetric-type inequalities from ℝ2 to higher dimensions. These inequalities provide lower bounds for the n-dimensional volume and, respectively, surface area of certain star-shaped bodies in ℝ𝑛 and characterize the equality cases. More specifically, we work with g-chordal star-shaped bodies, a natural generalization of equichordal compact sets. A compact set in ℝ𝑛 is said to be equichordal if there exists a point in the interior of the set such that all chords passing through this point have the same length. To justify the significance of our results, we provide several means of constructing g-chordal star-shaped bodies.
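
In terms of the radial function ρ of the set with respect to the distinguished interior point, the equichordal condition reads ρ(u) + ρ(−u) = c for all u ∈ 𝑆𝑛−1 and some constant c, since the chord in direction u has length ρ(u) + ρ(−u); a natural reading of the g-chordal generalization (our gloss, not a quotation from the thesis) replaces the constant by a prescribed function g on 𝑆𝑛−1.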

The method used to prove the above inequalities is further employed in finding new lower bounds for the dual quermassintegrals of g-chordal star-shaped sets in ℝ𝑛 and, more generally, lower bounds for the dual mixed volumes involving these star bodies. Finally, some of the previous results will be generalized to 𝐿𝑛-stars, star-shaped sets whose radial functions are n-th power integrable over the unit sphere 𝑆𝑛−1.
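
For orientation, the dual quermassintegrals of a star body K with radial function ρK are, in Lutwak's standard notation (not quoted from the thesis),

W̃i(K) = (1/n) ∫_{𝑆𝑛−1} ρK(u)^(n−i) dS(u),

with W̃0(K) recovering the n-dimensional volume of K.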
Title: Computing the Average Root Number of a One-Parameter Family of Elliptic Curves Defined Over Q
Speaker: Mr. Iakovos (Jake) Chinis (MSc)
Date: Monday, August 7, 2017
Time: 10:00 a.m.
Location: LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract:

It is well known that the root number of any elliptic curve defined over Q can be written as an infinite product of local root numbers wp, over all places p of Q, with wp = ±1 for all p and wp = 1 for all but finitely many p. By considering a one-parameter family of elliptic curves defined over Q, we might ask whether there is any bias in the distribution (or parity) of the root numbers at each specialization.
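
In symbols (standard notation, not quoted from the thesis), the statement is

W(E) = ∏p wp(E),  wp(E) = ±1,  wp(E) = 1 for all but finitely many p,

with the product taken over all places p of Q, including the archimedean place.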

From the work of Helfgott in his Ph.D. thesis, we know (at least conjecturally) that the average root number of an elliptic curve defined over Q(T) is zero as soon as there is a place of multiplicative reduction over Q(T) other than -deg. In this thesis, we are concerned with elliptic curves defined over Q(T) with no place of multiplicative reduction over Q(T), except possibly at -deg. More precisely, we use the work of Helfgott to compute the average root number of an explicit family of elliptic curves defined over Q and show that this family is "parity-biased" infinitely often.
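
One standard way to make the average precise for the family E_t obtained by specializing the parameter T to integers t (our formulation, not quoted from the thesis) is

Av(W) = lim_{N→∞} (1/(2N+1)) ∑_{|t|≤N} W(E_t).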

Title: Multivariate Robust Vector-Valued Range Value-at-Risk
Speaker: Ms. Lu Cao (MA)
Date: Tuesday, July 25, 2017
Time: 1:30 p.m.
Location: LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract: The dependence between random variables has to be accounted for when modeling risk measures in a multivariate setting. In this thesis, we propose a bivariate extension of the robust risk measure Range Value-at-Risk (RVaR), based on the bivariate lower and upper orthant Value-at-Risk (VaR) and Tail Value-at-Risk (TVaR) introduced by Cossette et al. (2013, 2015). They are shown to possess properties similar to those of the bivariate TVaR, such as translation invariance, positive homogeneity and monotonicity. Examples with different copulas are provided. We also present consistent empirical estimators of the bivariate RVaR along with a simulation study. The robustness of the estimators of the bivariate VaR, TVaR and RVaR is discussed with the help of their sensitivity functions. We conclude that the bivariate VaR and RVaR are robust statistics.
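
For context, the univariate risk measure being extended can be written (standard definition, not quoted from the thesis) as

RVaR_{α,β}(X) = (1/(β − α)) ∫_α^β VaR_u(X) du,  0 < α < β ≤ 1,

which interpolates between VaR (as β → α) and TVaR (at β = 1); truncating the tail at level β is what makes the measure robust.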
Title: Decomposing Liabilities in Annuity Portfolios using Martingale Representation Theorem Decomposition
Speaker: Mr. Chengrong Xie (MSc)
Date: Monday, July 24, 2017
Time: 1:30 p.m.
Location: LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract:

A life annuity is a series of payments made at fixed intervals while the annuitant is alive. It has long been a major part of actuarial science and plays an important role in life insurance operations. In order to explore the interaction of various risks in an annuity portfolio, we decompose the liabilities by using the so-called Martingale Representation Theorem (MRT) decomposition. The MRT decomposition satisfies all six meaningful properties proposed by Schilling et al. (2015).
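
The decomposition rests on the classical representation result: for a square-integrable claim H measurable at the horizon T, with the filtration generated by a d-dimensional Brownian motion W (a standard statement, not quoted from the thesis),

H = E[H] + ∑_{k=1}^{d} ∫_0^T ξk(s) dWk(s)

for a unique predictable integrand ξ; attributing each stochastic-integral term to the risk factor driving Wk gives an additive split of the liability.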

Before presenting some numerical examples to illustrate its applicability, several stochastic mortality models are compared and the Renshaw-Haberman (RH) model is chosen as our projection model. We then compare two one-factor short-rate models and estimate the parameters of the CIR model to construct the stochastic interest rate setting. Finally, we allocate risk capitals to the risk factors obtained from the MRT decomposition according to the Euler principle and analyze them as the age of the cohort and the deferred term change.
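
For reference, the CIR short-rate dynamics referred to are

dr(t) = κ(θ − r(t)) dt + σ √r(t) dW(t),

with mean-reversion speed κ, long-run level θ and volatility σ; the rate remains positive when 2κθ ≥ σ² (the Feller condition).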

Title: Analytical Structure of Stationary Flows of an Ideal Incompressible Fluid
Speaker: Mr. Alexander Danielski (MSc)
Date: Friday, April 28, 2017
Time: 2:00 p.m.
Location: LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve Blvd. W.)
Abstract:

The Euler equations describing the flow of an incompressible, inviscid fluid of uniform density were first published by Euler in 1757. One of the outstanding achievements of mathematical fluid dynamics was the discovery that the particle trajectories of such flows are real analytic curves, despite the limited regularity of the initial flow (Serfati, Shnirelman, Frisch & Zheligovsky, Kappeler, Inci, Nadirashvili, Constantin, Vicol, and others). Hence, the flow lines of stationary solutions to the Euler equations are real analytic curves. In this work we consider a two-dimensional stationary flow in a periodic strip.
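
The equations in question are, in standard form,

∂u/∂t + (u·∇)u + ∇p = 0,  ∇·u = 0,

for the velocity field u and the pressure p.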

Our goal is to incorporate the analytic structure of the flow lines into the solution of the problem. The equation for the stream function is transformed to new variables better suited to the subsequent analysis. New classes of functions are introduced to take into account the partial analytic structure of solutions. This makes it possible to regard the problem as an analytic operator equation in a complex Banach space. The Implicit Function Theorem for complex Banach spaces is applied to establish the existence of unique solutions to the problem and the analytic dependence of these solutions on the parameters. Our approach avoids working in Fréchet spaces and using the Nash-Hamilton Implicit Function Theorem employed by previous authors (Sverak & Choffrut), and provides stronger results.
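
For a two-dimensional stationary flow, the standard reduction (not quoted from the thesis) writes the velocity as u = (∂ψ/∂y, −∂ψ/∂x) for a stream function ψ; stationarity forces the vorticity −Δψ to be constant along streamlines, so that, locally, where ∇ψ ≠ 0,

Δψ = F(ψ)

for some function F, and the flow lines are the level curves of ψ.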
