Show simple item record

dc.contributor.author Jayakumar, K
dc.contributor.author Skandhakumar, N
dc.contributor.editor Sumathipala, KASN
dc.contributor.editor Ganegoda, GU
dc.contributor.editor Piyathilake, ITS
dc.contributor.editor Manawadu, IN
dc.date.accessioned 2023-09-05T07:47:59Z
dc.date.available 2023-09-05T07:47:59Z
dc.date.issued 2022-12
dc.identifier.citation ***** en_US
dc.identifier.uri http://dl.lib.uom.lk/handle/123/21371
dc.description.abstract “Deepfakes” have seen a dramatic rise in recent times and are becoming increasingly realistic and indistinguishable with the advancement of deepfake generation techniques. Promising strides have been made in deepfake detection even though it is a relatively new research domain. The majority of current deepfake detection solutions only classify a video as a deepfake without providing any explanation behind the prediction. However, these works fail in situations where transparency behind a tool’s decision is crucial, especially in a court of law, where digital forensic investigators may be called to testify, with evidence, whether a video is a deepfake, or where the justifications behind tool decisions play a key role in the jury’s verdict. Explainable AI (XAI) has the power to make deepfake detection more meaningful, as it can effectively help explain why a detection tool classified a video as a deepfake by highlighting forged super-pixels of the video frames. This paper proposes the use of the “Anchors” XAI method, a model-agnostic, high-precision explainer, to build a prediction-explainer model that can visually explain the predictions of a deepfake detector model built on top of the EfficientNet architecture. Evaluation results show that Anchors fares better than LIME in producing visually explainable and easily interpretable explanations, achieving an anchor affinity score of 70.23%. The deepfake detector model yields an accuracy of 91.92%. en_US
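The core idea the abstract describes — an anchor as a minimal set of super-pixel regions whose presence alone keeps a classifier's prediction stable with high precision — can be sketched as a greedy search. This is a toy illustration only: `toy_detector`, the region indices, and the greedy loop are assumptions for the sketch, not the paper's EfficientNet model or the actual Anchors algorithm (which uses a more efficient bandit-based search over perturbation samples).

```python
import random

def toy_detector(regions_present):
    # Hypothetical stand-in for a deepfake detector: it flags "deepfake"
    # whenever the (assumed) forged super-pixel regions 2 and 5 are visible.
    return "deepfake" if {2, 5} <= regions_present else "real"

def anchor_precision(candidate, all_regions, predict, target,
                     n_samples=200, seed=0):
    """Estimate P(prediction == target) when the candidate regions are kept
    fixed and every other region is randomly perturbed (here: dropped)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        others = {r for r in all_regions - candidate if rng.random() < 0.5}
        if predict(candidate | others) == target:
            hits += 1
    return hits / n_samples

def find_anchor(all_regions, predict, target, threshold=0.95):
    """Greedily grow an anchor until its estimated precision clears the
    threshold, i.e. fixing these regions almost always preserves the label."""
    anchor = set()
    while anchor_precision(anchor, all_regions, predict, target) < threshold:
        # Add the single region that most improves the precision estimate.
        best = max(all_regions - anchor,
                   key=lambda r: anchor_precision(anchor | {r}, all_regions,
                                                  predict, target))
        anchor.add(best)
    return anchor

regions = set(range(8))
print(sorted(find_anchor(regions, toy_detector, "deepfake")))
# The anchor is exactly the forged regions — the super-pixels a visual
# explanation would highlight on the video frame.
```

In the real tool, the regions would come from a super-pixel segmentation of a video frame and the predictor would be the EfficientNet-based detector; the highlighted anchor regions are what make the prediction visually interpretable.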
dc.language.iso en en_US
dc.publisher Information Technology Research Unit, Faculty of Information Technology, University of Moratuwa. en_US
dc.relation.uri https://icitr.uom.lk/past-abstracts en_US
dc.subject Deepfake detection en_US
dc.subject XAI en_US
dc.subject Computer vision en_US
dc.subject Deep neural networks en_US
dc.subject Anchors en_US
dc.subject Digital media forensics en_US
dc.title A visually interpretable forensic deepfake detection tool using anchors en_US
dc.type Conference-Abstract en_US
dc.identifier.faculty IT en_US
dc.identifier.department Information Technology Research Unit, Faculty of Information Technology, University of Moratuwa. en_US
dc.identifier.year 2022 en_US
dc.identifier.conference 7th International Conference in Information Technology Research 2022 en_US
dc.identifier.place Moratuwa, Sri Lanka en_US
dc.identifier.pgnos p. 47 en_US
dc.identifier.proceeding Proceedings of the 7th International Conference in Information Technology Research 2022 en_US
dc.identifier.email krishnakripa.j@iit.ac.lk en_US


This item appears in the following Collection(s)

  • ICITR - 2022 [27]
    International Conference on Information Technology Research (ICITR)
