dc.contributor.advisor    O'Sullivan, Declan    en
dc.contributor.author    Bhardwaj, Peru    en
dc.date.accessioned    2022-09-16T10:50:34Z
dc.date.available    2022-09-16T10:50:34Z
dc.date.issued    2022    en
dc.date.submitted    2022    en
dc.identifier.citation    Bhardwaj, Peru, Adversarial Robustness of Representation Learning for Knowledge Graphs, Trinity College Dublin. School of Computer Science & Statistics, 2022    en
dc.identifier.other    Y    en
dc.identifier.uri    http://hdl.handle.net/2262/101176
dc.description    APPROVED    en
dc.description.abstract    Knowledge graphs represent factual knowledge about the world as relationships between concepts and are critical for intelligent decision making in enterprise applications. New knowledge is inferred from the existing facts in the knowledge graphs by encoding the concepts and relations into low-dimensional feature vector representations. The most effective representations for this task, called Knowledge Graph Embeddings (KGE), are learned through neural network architectures. Due to their impressive predictive performance, they are increasingly used in high-impact domains like healthcare, finance and education. However, are the black-box KGE models adversarially robust for use in domains with high stakes? This thesis argues that state-of-the-art KGE models are vulnerable to data poisoning attacks, that is, their predictive performance can be degraded by systematically crafted perturbations to the training knowledge graph. To support this argument, two novel data poisoning attacks are proposed that craft input deletions or additions at training time to subvert the learned model's performance at inference time. These attacks target the task of predicting the missing facts in knowledge graphs using Knowledge Graph Embeddings. To degrade the model performance through adversarial deletions, the use of model-agnostic instance attribution methods is proposed. These methods are used to identify the training instances that are most influential to the KGE model's predictions on target instances. The influential triples are used as adversarial deletions. To poison the KGE models through adversarial additions, their inductive abilities are exploited. The inductive abilities of KGE models are captured through relationship patterns like symmetry, inversion and composition in the knowledge graph. Specifically, to degrade the model's prediction confidence on target facts, this thesis proposes to improve the model's prediction confidence on a set of decoy facts. Thus, adversarial additions that improve the model's prediction confidence on decoy facts through different relation inference patterns are crafted. Evaluation of the proposed adversarial attacks shows that they outperform state-of-the-art baselines against four KGE models for two publicly available datasets. Among the proposed methods, simpler attacks are competitive with or outperform the computationally expensive ones. The thesis contributions not only highlight and provide an opportunity to fix the security vulnerabilities of KGE models, but also help to understand the black-box predictive behaviour of these models.    en
dc.publisher    Trinity College Dublin. School of Computer Science & Statistics. Discipline of Computer Science    en
dc.rights    Y    en
dc.subject    Machine Learning    en
dc.subject    Artificial Intelligence    en
dc.subject    Knowledge Graphs    en
dc.subject    Adversarial Machine Learning    en
dc.subject    Explainable AI    en
dc.subject    Natural Language Processing    en
dc.subject    Knowledge Representation and Reasoning    en
dc.title    Adversarial Robustness of Representation Learning for Knowledge Graphs    en
dc.type    Thesis    en
dc.type.supercollection    thesis_dissertations    en
dc.type.supercollection    refereed_publications    en
dc.type.qualificationlevel    Doctoral    en
dc.identifier.peoplefinderurl    https://tcdlocalportal.tcd.ie/pls/EnterApex/f?p=800:71:0::::P71_USERNAME:BHARDWAP    en
dc.identifier.rssinternalid    245651    en
dc.rights.ecaccessrights    openAccess
dc.contributor.sponsor    Accenture Labs    en
dc.contributor.sponsor    ADAPT SFI Centre for Digital Content Technology    en
dc.contributor.sponsor    Science Foundation Ireland (SFI)    en
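
To make the two attack families in the abstract concrete, the following minimal Python sketch illustrates, on a toy DistMult-style model, (a) an adversarial deletion chosen by an instance-attribution similarity proxy and (b) an adversarial addition that exploits the symmetry inference pattern. This is a hypothetical illustration, not the thesis code: the DistMult scoring form, the dot-product similarity used as an influence proxy, and all entity, relation, and triple names are assumptions chosen only to mirror the ideas the abstract describes.

# Hypothetical sketch, not the thesis implementation.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy embedding tables; in a real KGE system these are learned from the graph.
entities = {e: rng.normal(size=DIM) for e in ["alice", "bob", "aspirin", "ibuprofen"]}
relations = {r: rng.normal(size=DIM) for r in ["treated_with", "interacts_with"]}

def score(h, r, t):
    # DistMult plausibility score: sum_k e_h[k] * w_r[k] * e_t[k]
    return float(np.sum(entities[h] * relations[r] * entities[t]))

def feature(h, r, t):
    # Elementwise-product feature vector of a triple; dot products of these
    # vectors act as a cheap instance-similarity proxy for "influence".
    return entities[h] * relations[r] * entities[t]

# Target fact whose prediction the attacker wants to degrade.
target = ("alice", "treated_with", "aspirin")

# (a) Adversarial deletion via instance attribution: rank training triples by
# feature similarity to the target and delete the most influential one.
train = [
    ("alice", "treated_with", "ibuprofen"),
    ("bob", "treated_with", "aspirin"),
    ("aspirin", "interacts_with", "ibuprofen"),
]
deletion = max(train, key=lambda tr: float(feature(*tr) @ feature(*target)))
print("adversarial deletion:", deletion)

# (b) Adversarial addition via the symmetry pattern: if the model has learned
# a relation r to behave symmetrically, adding (decoy_tail, r, head) raises
# the model's confidence in the decoy fact (head, r, decoy_tail), so the decoy
# can outrank the true tail when the model answers the query (head, r, ?).
decoy_tail = "ibuprofen"
addition = (decoy_tail, "treated_with", "alice")
print("adversarial addition:", addition)
print("decoy score before retraining:", score("alice", "treated_with", decoy_tail))

Consistent with the abstract's observation that the simpler attacks are competitive with the computationally expensive ones, the dot-product similarity above stands in for costlier attribution methods such as influence functions.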

