Approximate Computing (AC) has been widely advocated for energy efficiency in error-tolerant applications, as it offers the opportunity to trade off output quality for reduced power consumption and execution time. Approximate accelerators, which consist of a large number of functional units, have been proposed for error-resilient applications to speed up regularly executing code regions while satisfying defined quality constraints. However, with a static approximate design, the average output quality constraint may be satisfied while the quality of individual outputs varies significantly with dynamically changing inputs. Quality assurance is therefore an essential and nontrivial problem. State-of-the-art approaches in approximate computing address this problem by precisely re-evaluating quality-violating accelerator invocations. However, such methods can significantly diminish or even cancel the benefits of approximation, especially when the rate of input data variation is high and approximation errors considerably exceed a user-given threshold, i.e., the target output quality (TOQ).
As a general solution to this problem, this thesis proposes a novel methodology to enhance the quality of approximation through two approaches: 1) design adaptation, which predicts the most suitable settings of the approximate design for the given inputs; and/or 2) error compensation, which predicts the error magnitude and uses it to adjust the output results. The proposed method predicts the design settings or the error magnitude based on the applied input data and user preferences, without sacrificing the gains of approximation. We mainly consider approximate accelerators built from approximate functional units such as approximate multipliers, and we design a library of approximate accelerators with 20 different settings of 8- and 16-bit approximate multipliers.
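The thesis does not specify the internal structure of the multipliers in the library. As a rough illustration only, one common way to build a configurable approximate multiplier is operand truncation, where the number of discarded low-order bits acts as the "setting"; the function name and parameters below are invented for this sketch.

```python
def approx_multiply(a: int, b: int, trunc_bits: int = 4) -> int:
    """Truncation-based approximate multiply: zero out the low
    `trunc_bits` bits of each operand before multiplying.
    trunc_bits = 0 reproduces the exact product; larger values
    trade accuracy for a cheaper hardware multiplier."""
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) * (b & mask)

# A "library" of settings, analogous to the 20 accelerator variants:
# each entry fixes a different accuracy/cost trade-off.
ACCEL_LIBRARY = [lambda a, b, t=t: approx_multiply(a, b, t) for t in range(8)]
```

With trunc_bits = 0 the result is exact, and each additional truncated bit roughly halves the multiplier's partial-product work at the cost of larger worst-case error.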
For adaptive approximate computing, we use machine learning (ML) algorithms to build an efficient and lightweight design selector that adapts the approximate accelerators to meet a user-defined quality constraint. Compared with contemporary techniques, our approach is a fine-grained, input-dependent approximation approach with no missed approximation opportunities and no rollback recovery overhead. The proposed method applies to any approximate accelerator with error-tolerant components, and it is flexible in accommodating various error metrics. We fully automate the proposed ML-based quality-assurance methodology for both software and hardware implementations. Analysis results for image processing and audio applications show that the TOQ can be satisfied with an accuracy ranging from 80% to 85.7%. The hardware implementation targets an adaptive approximate accelerator on a Field-Programmable Gate Array (FPGA) under constraints on size, cost, and power consumption, relying on dynamic partial reconfiguration to help satisfy these requirements.
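To make the selector idea concrete, the sketch below stands in for the trained ML model with a hand-rolled decision rule: it maps a simple input feature and the TOQ to an index into the 20-setting library. The feature, thresholds, and quality metric (PSNR in dB) are all assumptions for illustration, not the thesis's trained model.

```python
ACCEL_SETTINGS = list(range(20))  # indices into the 20-setting accelerator library
                                  # (0 = most accurate, 19 = most aggressive)

def select_setting(mean_magnitude: float, toq_psnr_db: float) -> int:
    """Lightweight design selector: pick the most aggressive approximation
    setting predicted to still satisfy the target output quality (TOQ).
    `mean_magnitude` is a cheap statistic of the current input block."""
    # Larger operands tolerate more truncation before quality drops,
    # so aggressiveness scales with the input magnitude.
    aggressiveness = min(int(mean_magnitude // 16), len(ACCEL_SETTINGS) - 1)
    # A tighter TOQ caps the selector at a more conservative setting.
    if toq_psnr_db >= 40.0:
        aggressiveness = min(aggressiveness, 4)
    elif toq_psnr_db >= 30.0:
        aggressiveness = min(aggressiveness, 10)
    return ACCEL_SETTINGS[aggressiveness]
```

In the actual methodology this rule would be replaced by a trained classifier, and on the FPGA the chosen setting would be loaded via dynamic partial reconfiguration rather than a Python dispatch.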
To ensure the quality of results for a single approximate design rather than a library, we build a decision tree-based model for error compensation. The proposed model predicts the magnitude of the approximation error based on the design inputs, then enhances the accuracy of the approximate result by adding this error magnitude to it. The proposed methodology enhances the quality of image processing applications with negligible overhead.
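A minimal sketch of the compensation idea, under the same truncation-multiplier assumption as before: a tiny hand-written decision tree (standing in for the trained model, with invented split thresholds and leaf values) estimates the error magnitude from the discarded low bits of the operands and adds it back to the approximate result.

```python
def approx_mul(a: int, b: int, t: int = 4) -> int:
    """Truncation-based approximate multiply (illustrative design)."""
    mask = ~((1 << t) - 1)
    return (a & mask) * (b & mask)

def compensated_mul(a: int, b: int, t: int = 4) -> int:
    """Add a decision-tree error estimate to the approximate product."""
    lo_a = a & ((1 << t) - 1)
    lo_b = b & ((1 << t) - 1)
    # Two-level decision tree with constant leaves: each leaf holds a
    # representative error magnitude that would be learned offline
    # (the values here are invented for illustration).
    if lo_a < 8:
        est = 0 if lo_b < 8 else 1500
    else:
        est = 2000 if lo_b < 8 else 4000
    return approx_mul(a, b, t) + est
```

Because the tree only compares a few low-order bits and adds a constant, the compensation step costs far less than re-executing the multiplication exactly, which matches the abstract's claim of negligible overhead.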