Thesis defences

Detection of Counterfeit Coins Using Multimodal GPT-4 and Vision Transformer


Date & time
Friday, September 20, 2024
11 a.m. – 1 p.m.
Speaker(s)

Dina Omidvar Tehrani

Cost

This event is free

Organization

Department of Computer Science and Software Engineering

Contact

Dr. Ching Suen

Where

ER Building
2155 Guy St.
Room ER-1222

Wheelchair accessible

Yes

Abstract

The proliferation of counterfeit coins poses a substantial threat to the integrity of monetary systems and the stability of financial markets. Advanced counterfeiting techniques allow these fraudulent coins to closely mimic genuine ones, complicating the detection process and necessitating robust methods capable of discerning minute differences between genuine and fake coins. This thesis addresses the problem of counterfeit coin detection by introducing a comprehensive dataset comprising high-resolution images of both Danish and Chinese coins, categorized into genuine and counterfeit sets across multiple years.

To tackle the detection task, we employ two advanced approaches: a Vision Transformer (ViT) model and a multimodal GPT-4 model. The ViT model leverages its self-attention mechanisms to capture intricate patterns and details within the coin images, while the GPT-4 model integrates both visual and textual data, utilizing various prompting techniques to enhance its performance. Our results show that the ViT model outperforms previous methods and the state of the art in accuracy and robustness, achieving a remarkable 99.31% accuracy. The GPT-4 model, although primarily designed for natural language processing, demonstrates promising capabilities in counterfeit detection, particularly when advanced prompting strategies such as Chain-of-Thought and Generated Knowledge are employed.
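The abstract does not give the model's internals, but the ViT mechanism it describes — splitting a coin image into patches and letting self-attention relate them before a two-class (genuine/counterfeit) head — can be illustrated with a minimal NumPy sketch. All sizes, weights, and names here are hypothetical stand-ins for learned parameters, not the thesis's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 224x224 grayscale coin image, 16x16 patches, 64-d embeddings.
IMG, PATCH, DIM = 224, 16, 64
N_PATCHES = (IMG // PATCH) ** 2  # 196 patch tokens

def patchify(img):
    """Split an (IMG, IMG) image into N_PATCHES flattened PATCH x PATCH patches."""
    p = img.reshape(IMG // PATCH, PATCH, IMG // PATCH, PATCH)
    return p.transpose(0, 2, 1, 3).reshape(N_PATCHES, PATCH * PATCH)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Randomly initialized projections stand in for trained weights.
W_embed = rng.normal(0.0, 0.02, (PATCH * PATCH, DIM))
W_q, W_k, W_v = (rng.normal(0.0, 0.02, (DIM, DIM)) for _ in range(3))
W_cls = rng.normal(0.0, 0.02, (DIM, 2))  # two classes: genuine / counterfeit

def vit_logits(img):
    tokens = patchify(img) @ W_embed           # (196, 64) patch embeddings
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    attn = softmax(q @ k.T / np.sqrt(DIM))     # self-attention over all patch pairs
    pooled = (attn @ v).mean(axis=0)           # mean-pool the attended tokens
    return pooled @ W_cls                      # logits for the two classes

coin_image = rng.random((IMG, IMG))
print(vit_logits(coin_image).shape)  # (2,)
```

A real ViT adds positional embeddings, multiple heads and layers, and a learned class token; this sketch only shows why self-attention suits the task — every patch can weigh its similarity to every other patch, so subtle local differences between genuine and fake coins influence the whole representation.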

This research not only advances the current state of the art in counterfeit coin detection but also highlights the potential of few-shot learning and transfer learning for achieving high accuracy with limited training data.

