Optimizing Deepfake Image Classification with Transfer Learning: Insights from Models Inception V3 and Inception V4.
In recent years, facial forgery has become widespread and no longer requires any technical skill. With the development of Generative Adversarial Networks (GANs) and diffusion models (DMs), the quality of generated deepfakes has increased substantially. Although manipulating audio and video data is now easy, such manipulated content creates serious problems in the digital world, often leading to identity theft, misinformation, and various forms of cybercrime. The technology behind manipulating audio and video data with these advanced methods, known as deepfake technology, has opened new frontiers in the current digital world. While this advancement offers positive potential, it also poses significant risks to the security and integrity of data and information. Because of their malicious use for political, economic, and social reputation goals, deepfakes have become notorious. In this paper, deepfake image classification is performed on a state-of-the-art dataset using transfer learning and ensemble models. Image classification is first performed with the Inception V3 model at batch sizes of 16, 32, and 64; Inception V4 is then employed, which improves performance significantly. [ABSTRACT FROM AUTHOR]
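The abstract gives no implementation details, so the following is only a minimal sketch of the kind of transfer-learning setup it describes, assuming a TensorFlow/Keras pipeline and a hypothetical deepfake_dataset/ directory of real and fake images; the learning rate, dropout, and epoch count are illustrative, not values reported by the authors.

# Minimal transfer-learning sketch for binary real-vs-fake classification
# with Inception V3 (assumed Keras pipeline; paths and hyperparameters are
# illustrative, not taken from the paper).
import tensorflow as tf

IMG_SIZE = (299, 299)   # Inception V3's native input resolution
BATCH_SIZE = 32         # the paper also evaluates batch sizes 16 and 64

# Hypothetical dataset directory with one subfolder per class (real/fake).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "deepfake_dataset/train",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="binary",
)

# Load ImageNet-pretrained Inception V3 without its classification head
# and freeze it so only the new head is trained (transfer learning).
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,)
)
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # real vs. fake
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=10)

Inception V4 is not bundled with tf.keras.applications, so the second stage described in the abstract would typically rely on a third-party implementation (for example, the inception_v4 model available in the timm PyTorch library); the ensemble mentioned in the abstract could then combine the predictions of both networks.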