When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge of the thesis subject as well as the student’s own contributions to the subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.
Once accepted, the candidate presents the thesis orally. This oral exam is open to the public.
Network Function Virtualization (NFV) is a network architecture that decouples network functions from dedicated hardware, implementing them as software modules known as Virtual Network Functions (VNFs) that run in virtual machines or containers. A Network Service (NS) consists of a chain of VNFs known as a VNF Forwarding Graph (VNF-FG). NFV increases deployment flexibility and agility within operator networks and significantly reduces operating and capital expenditures.

Deploying an NS requires solving the NFV resource allocation (NFV-RA) problem, which comprises three stages: (i) VNF-FG composition, (ii) VNF-FG embedding, and (iii) VNF scheduling. Resource allocation in NFV requires efficient algorithms both to determine on which physical node each VNF is embedded and to migrate VNFs from one node to another. A major challenge in NFV is maintaining a reasonable VNF embedding as the network changes: because the embedding stage may itself be dynamic, it adds a further dimension of complexity in keeping track of where a given VNF is running. In other words, VNF migration decides where, when, and how to transfer VNFs from a source node to a destination node in response to variations in service requests. The VNF migration problem generally refers to the process of moving VNFs from one node to another to meet specific requirements such as cost reduction, energy saving, or recovery from failures.

However, VNF migration faces several challenges. The first arises from the mobility of end-users and fog nodes, combined with the limited coverage of fog nodes, which leads to service discontinuity and increased application delay. The second arises when stringent latency requirements between VNFs make them tightly coupled, preventing each VNF from being migrated individually and resulting in poor performance.
The third challenge arises when network resources are limited. An overloaded node can strongly constrain the choice of the best VNF decomposition option among all possible ones, potentially degrading the Quality of Service (QoS). VNF migration offers great potential for addressing these challenges; the open question, however, is which VNFs should be migrated, and where and when, so that performance improves.
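The where/when/which decision can be illustrated by a minimal greedy sketch: pick the single migration that most improves a combined delay-and-cost objective, subject to target-node capacity. All names here (`score`, `capacity`, `demand`) are illustrative assumptions, not the thesis's actual model:

```python
def select_migration(placement, capacity, demand, score):
    """Pick the (vnf, node) migration with the largest objective gain.

    Illustrative sketch only: `score(vnf, node)` stands in for a combined
    delay + cost objective; `capacity` and `demand` model node resources
    as single scalar values; `placement` maps each VNF to its current node.
    """
    best = None  # (gain, vnf, target_node)
    for vnf, src in placement.items():
        for node, free in capacity.items():
            if node == src or demand[vnf] > free:
                continue  # skip the current node and infeasible targets
            gain = score(vnf, src) - score(vnf, node)
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, vnf, node)
    return best  # None means no beneficial migration exists
```

Repeating this selection until it returns `None` yields a simple local-search baseline; the thesis's contributions go beyond such myopic single-move decisions.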
In this PhD thesis, we address the challenges in the VNF migration problem outlined above. First, we introduce a reinforcement learning-based optimization framework for application component migration in NFV cloud-fog environments where both fog nodes and end-users are mobile; the main objective is to efficiently migrate the VNFs of a request so that the total delay and cost are minimized. Second, we introduce a cost-efficient solution to the problem of cluster migration of VNFs for VNF-FG embedding that takes into account the latency requirements between VNFs and reuses already deployed VNFs; the objective is to migrate clusters of VNFs so that the total embedding cost, including resource, instantiation, reuse, and transmission costs, is minimized. Lastly, considering VNF migration in the context of VNF decomposition, we investigate how VNF migration and VNF decomposition can be mutually beneficial. We design a joint VNF decomposition and migration approach that minimizes the embedding cost of network services and promotes VNF reusability, and we propose two efficient heuristics for identifying the best decomposition options and for migrating previously deployed VNFs across the network.
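The first contribution's reinforcement-learning framing can be sketched, in a highly simplified form, as a tabular Q-learning agent whose states are network conditions and whose actions are migration decisions. The environment, reward shape, and all names below are illustrative assumptions, not the dissertation's actual algorithm; in the thesis setting the reward would encode the negative of delay plus migration cost:

```python
import random
from collections import defaultdict

def q_learning(env_step, states, actions, episodes=200,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning sketch for migration decisions.

    `env_step(state, action)` must return (next_state, reward).
    Illustrative only: real cloud-fog state spaces are far too large
    for a table and call for function approximation.
    """
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(20):  # bounded episode length
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s_next, r = env_step(s, a)
            # standard Q-learning update toward the bootstrapped target
            target = r + gamma * max(Q[(s_next, act)] for act in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q
```

For instance, in a toy environment where migrating away from an overloaded node yields positive reward and staying yields negative reward, the learned Q-values come to prefer migration in the overloaded state.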