Commit 983e2ae

* update 2024-03-02 06:15:56

actions-user committed Mar 1, 2024
1 parent 8977ef7 commit 983e2ae
Showing 2 changed files with 25 additions and 1 deletion.
24 changes: 24 additions & 0 deletions arXiv_db/Malware/2024.md
@@ -454,3 +454,27 @@

</details>

<details>

<summary>2024-02-28 19:28:01 - On the Leakage of Fuzzy Matchers</summary>

- *Axel Durbet, Kevin Thiry-Atighehchi, Dorine Chagnon, Paul-Marie Grollemund*

- `2307.13717v4` - [abs](http://arxiv.org/abs/2307.13717v4) - [pdf](http://arxiv.org/pdf/2307.13717v4)

> In a biometric authentication or identification system, the matcher compares a stored and a fresh template to determine whether there is a match. This assessment is based on both a similarity score and a predefined threshold. For better compliance with privacy legislation, the matcher can be built upon a threshold-based obfuscated distance (i.e., Fuzzy Matcher). Beyond the binary output ("yes" or "no"), most algorithms perform more precise computations, e.g., the value of the distance. Such precise information is prone to leakage even when not returned by the matcher. This can occur due to a malware infection or the use of a weakly privacy-preserving matcher, exemplified by side channel attacks or partially obfuscated designs. This paper provides an analysis of information leakage during distance evaluation, with an emphasis on threshold-based obfuscated distance. We provide a catalog of information leakage scenarios with their impacts on data privacy. Each scenario gives rise to unique attacks with impacts quantified in terms of computational costs, thereby providing a better understanding of the security level.

</details>
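The entry above describes a matcher that compares a stored and a fresh template and returns a decision based on a similarity score and a predefined threshold, and argues that the precise distance is sensitive even when it is never returned. The following minimal sketch is illustrative only and is not taken from the paper; it assumes bit-string templates compared with Hamming distance, and contrasts a matcher that exposes only the binary outcome with one whose exact internal distance is the kind of value that could leak.

```python
# Illustrative sketch only (not from the paper): a toy threshold-based matcher
# over bit-string templates using Hamming distance.

def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length templates."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def fuzzy_matcher(stored: bytes, fresh: bytes, threshold: int) -> bool:
    """Threshold-based obfuscated comparison: only the binary outcome
    ("yes"/"no") is exposed; the distance itself is never surfaced."""
    return hamming_distance(stored, fresh) <= threshold

def leaky_matcher(stored: bytes, fresh: bytes, threshold: int) -> tuple[bool, int]:
    """Computes the exact distance internally and exposes it alongside the
    decision -- the precise intermediate value that can leak (e.g., through
    malware infection or side channels) even if it is never meant to be
    returned."""
    d = hamming_distance(stored, fresh)
    return d <= threshold, d

if __name__ == "__main__":
    stored, fresh = b"\x0f\x0f", b"\x0f\x0e"
    print(fuzzy_matcher(stored, fresh, threshold=2))  # True (1 differing bit)
    print(leaky_matcher(stored, fresh, threshold=2))  # (True, 1) -- distance exposed
```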

<details>

<summary>2024-02-29 10:38:56 - How to Train your Antivirus: RL-based Hardening through the Problem-Space</summary>

- *Jacopo Cortellazzi, Ilias Tsingenopoulos, Branislav Bošanský, Simone Aonzo, Davy Preuveneers, Wouter Joosen, Fabio Pierazzi, Lorenzo Cavallaro*

- `2402.19027v1` - [abs](http://arxiv.org/abs/2402.19027v1) - [pdf](http://arxiv.org/pdf/2402.19027v1)

> ML-based malware detection on dynamic analysis reports is vulnerable to both evasion and spurious correlations. In this work, we investigate a specific ML architecture employed in the pipeline of a widely-known commercial antivirus company, with the goal of hardening it against adversarial malware. Adversarial training, the sole defensive technique that can confer empirical robustness, is not applicable out of the box in this domain, for the principal reason that gradient-based perturbations rarely map back to feasible problem-space programs. We introduce a novel Reinforcement Learning approach for constructing adversarial examples, a constituent part of adversarially training a model against evasion. Our approach comes with multiple advantages. It performs modifications that are feasible in the problem-space, and only those; thus it circumvents the inverse mapping problem. It also makes it possible to provide theoretical guarantees on the robustness of the model against a particular set of adversarial capabilities. Our empirical exploration validates our theoretical insights, where we can consistently reach a 0% Attack Success Rate after a few adversarial retraining iterations.

</details>
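The entry above builds adversarial examples for adversarial retraining while restricting itself to modifications that are feasible in the problem space. The sketch below is not the paper's method: it substitutes a random-search loop for the RL agent and uses a hypothetical feature-set detector and made-up action names, purely to illustrate the constraint that only realizable, additive behaviour modifications are applied until the detector is evaded.

```python
# Heavily simplified sketch (not the paper's implementation): a random-search
# stand-in for the RL agent that applies only problem-space-feasible,
# additive modifications to a sample's behaviour report until a toy
# detector's score drops below its decision threshold.
import random

# Hypothetical feasible actions: each adds benign-looking behaviour to the
# report (features are only added, never removed, so the result stays
# realizable as a real program, sidestepping the inverse-mapping problem).
FEASIBLE_ACTIONS = ["add_benign_api_call", "add_registry_read", "add_sleep_call"]

def toy_detector(features: set[str]) -> float:
    """Stand-in classifier: fraction of known-malicious indicators present."""
    malicious = {"inject_remote_thread", "disable_defender", "encrypt_files"}
    return len(features & malicious) / max(len(features), 1)

def evade(features: set[str], threshold: float = 0.5, budget: int = 20) -> set[str]:
    """One episode: apply feasible modifications until the sample is scored
    below the detection threshold or the budget runs out; the result would
    feed the adversarial retraining set."""
    current = set(features)
    for _ in range(budget):
        if toy_detector(current) < threshold:
            break  # adversarial example found
        current.add(random.choice(FEASIBLE_ACTIONS) + f"_{len(current)}")
    return current

if __name__ == "__main__":
    sample = {"inject_remote_thread", "encrypt_files"}
    adversarial = evade(sample)
    print(toy_detector(sample), "->", toy_detector(adversarial))
```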
