artifact.csv
Latest commit 9971d74 · May 15, 2023
117 lines (117 loc) · 13.6 KB
Paper Title,Has artifact?,Code Runs,Year,Has Code?
Increasing Adversarial Uncertainty to Scale Private Similarity Testing,0,1,2022,1
Your Microphone Array Retains Your Identity: A Robust Voice Liveness Detection System for Smart Speakers,0,-1,2022,0
OVRseen: Auditing Network Traffic and Privacy Policies in Oculus VR,1,0,2022,1
Lumos: Identifying and Localizing Diverse Hidden IoT Devices in an Unfamiliar Environment,0,0,2022,1
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models,0,0,2022,5
PrivGuard: Privacy Regulation Compliance Made Easier,1,1,2022,1
DeepDi: Learning a Relational Graph Convolutional Network Model on Instructions for Fast and Accurate Disassembly,1,1,2022,1
Understanding and Improving Usability of Data Dashboards for Simplified Privacy Control of Voice Assistant Data,0,-1,2022,0
Rendering Contention Channel Made Practical in Web Browsers,0,0,2022,1
"“OK, Siri” or “Hey, Google”: Evaluating Voiceprint Distinctiveness via Content-based PROLE Score",0,-1,2022,4
Online Website Fingerprinting: Evaluating Website Fingerprinting Attacks on Tor in the Real World,0,-1,2022,0
On the Security Risks of AutoML,1,1,2022,1
Towards More Robust Keyword Spotting for Voice Assistants,0,-1,2022,0
Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel,0,-1,2022,0
Augmenting Decompiler Output with Learned Variable Names and Types,0,-1,2022,0
Inference Attacks Against Graph Neural Networks,0,-1,2022,2
Synthetic Data – Anonymisation Groundhog Day,1,1,2022,1
WebGraph: Capturing Advertising and Tracking Information Flows for Robust Blocking,1,1,2022,1
Adversarial Detection Avoidance Attacks: Evaluating the robustness of perceptual hashing-based client-side scanning,1,0,2022,1
Automating Cookie Consent and GDPR Violation Detection,1,1,2022,1
Dos and Don’ts of Machine Learning in Computer Security,0,-1,2022,0
Hiding in Plain Sight? On the Efficacy of Power Side Channel-Based Control Flow Monitoring,0,0,2022,1
SGXLock: Towards Efficiently Establishing Mutual Distrust Between Host Application and Enclave for SGX,0,-1,2022,8
Expected Exploitability: Predicting the Development of Functional Vulnerability Exploits,1,0,2022,1
Secure Poisson Regression,0,-1,2022,0
Watching the Watchers: Practical Video Identification Attack in LTE Networks,0,-1,2022,0
Automated Side Channel Analysis of Media Software with Manifold Learning,1,1,2022,1
FOAP: Fine-Grained Open-World Android App Fingerprinting,0,0,2022,1
SkillDetective: Automated Policy-Violation Detection of Voice Assistant Applications in the Wild,1,0,2022,1
Hand Me Your PIN! Inferring ATM PINs of Users Typing with a Covered Hand,0,0,2022,1
Label Inference Attacks Against Vertical Federated Learning,0,-1,2022,0
Simc: ML Inference Secure Against Malicious Clients at Semi-Honest Cost,1,0,2022,1
Lend Me Your Ear: Passive Remote Physical Side Channels on PCs,0,-1,2022,0
99% False Positives: A Qualitative Study of SOC Analysts’ Perspectives on Security Alarms,0,-1,2022,0
Cheetah: Lean and Fast Secure Two-Party Deep Neural Network Inference,1,1,2022,3
Inferring Phishing Intention via Webpage Appearance and Dynamics: A Deep Vision Based Approach,0,1,2022,1
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier,1,1,2022,1
Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis,0,-1,2022,0
Khaleesi: Breaker of Advertising and Tracking Request Chains,1,0,2022,1
DeepPhish: Understanding User Trust Towards Artificially Generated Profiles in Online Social Networks,1,0,2022,1
Seeing the Forest for the Trees: Understanding Security Hazards in the 3GPP Ecosystem through Intelligent Analysis on Change Requests,0,-1,2022,0
Leaky Forms: A Study of Email and Password Exfiltration Before Form Submission,0,0,2022,1
Security Analysis of Camera-LiDAR Fusion Against Black-Box Attacks on Autonomous Vehicles,0,-1,2022,0
Automated Detection of Automated Traffic,0,-1,2022,0
Transferring Adversarial Robustness Through Robust Representation Matching,1,0,2022,1
Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era,0,-1,2022,0
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning,0,-1,2022,0
Membership Inference Attacks and Defenses in Neural Network Pruning,1,1,2022,1
Efficient Differentially Private Secure Aggregation for Federated Learning via Hardness of Learning with Errors,0,-1,2022,0
Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction,0,0,2022,1
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models,1,1,2022,1
How Machine Learning Is Solving the Binary Function Similarity Problem,0,0,2022,1
FLAME: Taming Backdoors in Federated Learning,0,-1,2022,0
Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks,0,1,2022,1
AutoDA: Automated Decision-based Iterative Adversarial Attacks,1,0,2022,1
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks,0,-1,2022,0
Teacher Model Fingerprinting Attacks Against Transfer Learning,0,0,2022,1
Hidden Trigger Backdoor Attack on NLP Models via Linguistic Style Manipulation,0,-1,2022,0
Piranha: A GPU Platform for Secure Computation,1,0,2022,1
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning,0,-1,2022,0
DnD: A Cross-Architecture Deep Neural Network Decompiler,0,0,2022,2
How to Peel a Million: Validating and Expanding Bitcoin Clusters,0,-1,2022,0
SAVIOR: Securing Autonomous Vehicles with Robust Physical Invariants,0,0,2020,1
SmartVerif: Push the Limit of Automation Capability of Verifying Security Protocols by Dynamic Strategies,0,0,2020,1
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models,0,3,2020,3
FuzzGuard: Filtering out Unreachable Inputs in Directed Grey-box Fuzzing through Deep Learning,0,0,2020,1
On Training Robust PDF Malware Classifiers,0,0,2020,1
Montage: A Neural Network Language Model-Guided JavaScript Engine Fuzzer,1,0,2020,1
PriSEC: A Privacy Settings Enforcement Controller,0,-1,2021,0
Evil Under the Sun: Understanding and Discovering Attacks on Ethereum Decentralized Applications,0,-1,2021,8
Devil’s Whisper: A General Approach for Physical Adversarial Attacks against Commercial Black-box Speech Recognition Devices,0,0,2020,1
Fantastic Four: Honest-Majority Four-Party Secure Computation With Malicious Security,0,3,2021,0
Android SmartTVs Vulnerability Discovery via Log-Guided Fuzzing,0,-1,2021,8
Reducing Bias in Modeling Real-world Password Strength via Deep Learning and Dynamic Dictionaries,0,1,2021,1
Reducing Test Cases with Attention Mechanism of Neural Networks,0,-1,2021,0
T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification,0,0,2021,1
CADE: Detecting and Explaining Concept Drift Samples for Security Applications,1,1,2021,1
SIGL: Securing Software Installations Through Deep Graph Learning,0,-1,2021,0
SyzVegas: Beating Kernel Fuzzing Odds with Reinforcement Learning,0,0,2021,1
ATLAS: A Sequence-based Learning Approach for Attack Investigation,0,0,2021,1
ELISE: A Storage Efficient Logging System Powered by Redundancy Reduction and Representation Learning,0,-1,2021,1
DeepReflect: Discovering Malicious Functionality through Binary Reconstruction,0,0,2021,1
Scalable Detection of Promotional Website Defacements in Black Hat SEO Campaigns,0,-1,2021,0
Compromised or Attacker-Owned: A Large Scale Classification and Study of Hosting Domains of Malicious URLs,0,1,2021,1
Phishpedia: A Hybrid Deep Learning Based Approach to Visually Identify Phishing Webpages,0,1,2021,1
Deep Entity Classification: Abusive Account Detection for Online Social Networks,0,-1,2021,0
SiamHAN: IPv6 Address Correlation Attacks on TLS Encrypted Traffic via Siamese Heterogeneous Graph Attention Network,0,1,2021,3
Mystique: Efficient Conversions for Zero-Knowledge Proofs with Applications to Machine Learning,0,-1,2021,0
HAWatcher: Semantics-Aware Anomaly Detection for Appified Smart Homes,0,-1,2021,0
Automatic Extraction of Secrets from the Transistor Jungle using Laser-Assisted Side-Channel Attacks,0,-1,2021,0
Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack,0,1,2021,1
Lord of the Ring(s): Side Channel Attacks on the CPU On-Chip Ring Interconnect Are Practical,1,-1,2021,0
Cerebro: A Platform for Multi-Party Cryptographic Collaborative Learning,1,0,2021,1
Charger-Surfing: Exploiting a Power Line Side-Channel for Smartphone Information Leakage,0,-1,2021,0
Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations,0,0,2021,1
Fuzzy Labeled Private Set Intersection with Applications to Private Real-Time Biometric Search,0,-1,2021,0
Systematic Evaluation of Privacy Risks of Machine Learning Models,0,0,2021,1
SmarTest: Effectively Hunting Vulnerable Transaction Sequences in Smart Contracts through Language Model-Guided Symbolic Execution,1,0,2021,1
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers,0,1,2021,1
Entangled Watermarks as a Defense against Model Extraction,0,1,2021,1
Blind Backdoors in Deep Learning Models,0,0,2021,1
Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA,0,3,2021,1
Graph Backdoor,0,-1,2021,0
Adversarial Policy Training against Deep Reinforcement Learning,1,0,2021,1
Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection,0,-1,2021,0
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations,0,1,2021,1
Dompteur: Taming Audio Adversarial Examples,0,2,2021,1
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion,0,-1,2021,0
Cost-Aware Robust Tree Ensembles for Security Applications,0,1,2021,1
WaveGuard: Understanding and Mitigating Audio Adversarial Examples,0,3,2021,1
Poisoning the Unlabeled Dataset of Semi-Supervised Learning,0,-1,2021,0
PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking,0,3,2021,1
Muse: Secure Inference Resilient to Malicious Clients,1,2,2021,3
Double-Cross Attacks: Subverting Active Learning Systems,0,-1,2021,0
Finding Bugs Using Your Own Code: Detecting Functionally-similar yet Inconsistent Code,0,0,2021,1
GForce: GPU-Friendly Oblivious and Rapid Neural Network Inference,0,1,2021,3