diff --git a/data/xml/2023.findings.xml b/data/xml/2023.findings.xml
index 3e91e17843..f9983cb45a 100644
--- a/data/xml/2023.findings.xml
+++ b/data/xml/2023.findings.xml
@@ -10085,7 +10085,7 @@
ZhuofengWuUniversity of Michigan
WeiPingNvidia
ChaoweiXiaoArizona State University
- V.G.VinodVydiswaranUniversity of Michigan
+ V. G. VinodVydiswaranUniversity of Michigan
8818-8833
Textual backdoor attack, as a novel attack model, has been shown to be effective in adding a backdoor to the model during training. Defending against such backdoor attacks has become urgent and important. In this paper, we propose AttDef, an efficient attribution-based pipeline to defend against two insertion-based poisoning attacks, BadNL and InSent. Specifically, we regard the tokens with larger attribution scores as potential triggers since larger attribution words contribute more to the false prediction results and therefore are more likely to be poison triggers. Additionally, we further utilize an external pre-trained language model to distinguish whether input is poisoned or not. We show that our proposed method can generalize sufficiently well in two common attack scenarios (poisoning training data and testing data), which consistently improves previous methods. For instance, AttDef can successfully mitigate both attacks with an average accuracy of 79.97% (56.59% up) and 48.34% (3.99% up) under pre-training and post-training attack defense respectively, achieving the new state-of-the-art performance on prediction recovery over four benchmark datasets.
2023.findings-acl.561
diff --git a/data/xml/2024.caldpseudo.xml b/data/xml/2024.caldpseudo.xml
index 111ce2f1e0..4d25d0815c 100644
--- a/data/xml/2024.caldpseudo.xml
+++ b/data/xml/2024.caldpseudo.xml
@@ -25,7 +25,7 @@
Handling Name Errors of a BERT-Based De-Identification System: Insights from Stratified Sampling and Markov-based Pseudonymization
DaltonSimancekDepartment of Learning Health Sciences, University of Michigan
- VG VinodVydiswaranSchool of Information, University of Michigan
+ V. G. VinodVydiswaranSchool of Information, University of Michigan
1-7
Missed recognition of named entities while de-identifying clinical narratives poses a critical challenge in protecting patient-sensitive health information. Mitigating name recognition errors is essential to minimize risk of patient re-identification. In this paper, we emphasize the need for stratified sampling and enhanced contextual considerations concerning Name Tokens using a fine-tuned Longformer BERT model for clinical text de-identification. We introduce a Hidden in Plain Sight (HIPS) Markov-based replacement technique for names to mask name recognition misses, revealing a significant reduction in name leakage rates. Our experimental results underscore the impact on addressing name recognition challenges in BERT-based de-identification systems for heightened privacy protection in electronic health records.
2024.caldpseudo-1.1
diff --git a/data/xml/2024.findings.xml b/data/xml/2024.findings.xml
index 65cefd28f6..ea5c962a04 100644
--- a/data/xml/2024.findings.xml
+++ b/data/xml/2024.findings.xml
@@ -21421,7 +21421,7 @@
Richard HeBaiApple
AonanZhangApple
JiataoGuApple (MLR)
- V.G.VinodVydiswaranUniversity of Michigan - Ann Arbor
+ V. G. VinodVydiswaranUniversity of Michigan - Ann Arbor
NavdeepJaitlyApple
YizheZhangApple
2572-2585
diff --git a/data/xml/2024.naacl.xml b/data/xml/2024.naacl.xml
index c26bd0920e..225bf6ddd8 100644
--- a/data/xml/2024.naacl.xml
+++ b/data/xml/2024.naacl.xml
@@ -2274,7 +2274,7 @@
JiazhaoLiUniversity of Michigan - Ann Arbor
YijinYang
ZhuofengWuUniversity of Michigan - Ann Arbor
- V.G.VinodVydiswaranUniversity of Michigan - Ann Arbor
+ V. G. VinodVydiswaranUniversity of Michigan - Ann Arbor
ChaoweiXiaoUniversity of Wisconsin - Madison and NVIDIA
2985-3004
Textual backdoor attacks, characterized by subtle manipulations of input triggers and training dataset labels, pose significant threats to security-sensitive applications. The rise of advanced generative models, such as GPT-4, with their capacity for human-like rewriting, makes these attacks increasingly challenging to detect. In this study, we conduct an in-depth examination of black-box generative models as tools for backdoor attacks, thereby emphasizing the need for effective defense strategies. We propose BGMAttack, a novel framework that harnesses advanced generative models to execute stealthier backdoor attacks on text classifiers. Unlike prior approaches constrained by subpar generation quality, BGMAttack renders backdoor triggers more elusive to human cognition and advanced machine detection. A rigorous evaluation of attack effectiveness over four sentiment classification tasks, complemented by four human cognition stealthiness tests, reveals BGMAttack’s superior performance, achieving a state-of-the-art attack success rate of 97.35% on average while maintaining superior stealth compared to conventional methods. The dataset and code are available: https://github.com/JiazhaoLi/BGMAttack.
diff --git a/data/xml/I17.xml b/data/xml/I17.xml
index ca9207081e..abf69b3fad 100644
--- a/data/xml/I17.xml
+++ b/data/xml/I17.xml
@@ -425,7 +425,7 @@
Identifying Usage Expression Sentences in Consumer Product Reviews
ShibamouliLahiri
- V.G.VinodVydiswaran
+ V. G. VinodVydiswaran
RadaMihalcea
394–403
I17-1040
diff --git a/data/xml/N10.xml b/data/xml/N10.xml
index 83b182e10b..faa894422b 100644
--- a/data/xml/N10.xml
+++ b/data/xml/N10.xml
@@ -1590,7 +1590,7 @@
Textual Entailment
MarkSammons
IdanSzpektor
- V.G.VinodVydiswaran
+ V. G. VinodVydiswaran
21–24
N10-4008
sammons-etal-2010-textual
diff --git a/data/yaml/name_variants.yaml b/data/yaml/name_variants.yaml
index 55b9332e91..c17a50a088 100644
--- a/data/yaml/name_variants.yaml
+++ b/data/yaml/name_variants.yaml
@@ -1545,6 +1545,12 @@
- canonical: {first: Jeong-Won, last: Cha}
variants:
- {first: Jeongwon, last: Cha}
+- canonical: {first: V. G. Vinod, last: Vydiswaran}
+ orcid: 0000-0002-3122-1936
+ comment: University of Illinois at Urbana-Champaign
+ variants:
+ - {first: V.G.Vinod, last: Vydiswaran}
+ - {first: VG Vinod, last: Vydiswaran}
- canonical: {first: Seungho, last: Cha}
id: seungho-cha
- canonical: {first: Joyce, last: Chai}
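
For context, here is a minimal sketch of how a canonical/variants entry like the one added above folds variant spellings into a single canonical form, which is why the XML edits in this diff should all converge on the canonical spelling. This is not the Anthology's actual ingestion code: the inlined YAML string and the `build_lookup` helper are illustrative assumptions, and PyYAML is assumed to be installed.

```python
# Minimal sketch only -- NOT the Anthology's actual ingestion code. It inlines a
# trimmed copy of the entry added above to show how variant spellings resolve to
# the canonical form. Requires PyYAML (`pip install pyyaml`).
import yaml

VARIANTS_YAML = """
- canonical: {first: V. G. Vinod, last: Vydiswaran}
  variants:
    - {first: V.G.Vinod, last: Vydiswaran}
    - {first: VG Vinod, last: Vydiswaran}
"""

def build_lookup(entries):
    """Map every listed (first, last) spelling, canonical included, to its canonical form."""
    lookup = {}
    for entry in entries:
        canonical = (entry["canonical"]["first"], entry["canonical"]["last"])
        lookup[canonical] = canonical
        for variant in entry.get("variants", []):
            lookup[(variant["first"], variant["last"])] = canonical
    return lookup

lookup = build_lookup(yaml.safe_load(VARIANTS_YAML))
print(lookup[("V.G.Vinod", "Vydiswaran")])  # ('V. G. Vinod', 'Vydiswaran')
print(lookup[("VG Vinod", "Vydiswaran")])   # ('V. G. Vinod', 'Vydiswaran')
```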