Commit

* update 2024-10-30 06:20:44
actions-user committed Oct 29, 2024
1 parent b17ec8c commit 482ce23
Showing 2 changed files with 25 additions and 1 deletion.
24 changes: 24 additions & 0 deletions arXiv_db/Malware/2024.md
@@ -3222,3 +3222,27 @@

</details>

<details>

<summary>2024-10-25 18:22:04 - Nebula: Self-Attention for Dynamic Malware Analysis</summary>

- *Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Fabio Roli*

- `2310.10664v2` - [abs](http://arxiv.org/abs/2310.10664v2) - [pdf](http://arxiv.org/pdf/2310.10664v2)

> Dynamic analysis enables detecting Windows malware by executing programs in a controlled environment and logging their actions. Previous work has proposed training machine learning models, e.g., convolutional and long short-term memory networks, on homogeneous input features like runtime APIs to either detect or classify malware, neglecting other relevant information coming from heterogeneous data like network and file operations. To overcome these issues, we introduce Nebula, a versatile, self-attention Transformer-based neural architecture that generalizes across different behavioral representations and formats, combining diverse information from dynamic log reports. Nebula is composed of several components needed to tokenize, filter, normalize, and encode data to feed the Transformer architecture. We first perform a comprehensive ablation study to evaluate their impact on the performance of the whole system, highlighting which components can be used as-is and which must be enriched with specific domain knowledge. We perform extensive experiments on both malware detection and classification tasks, using three datasets acquired from different dynamic analysis platforms, and show that, on average, Nebula outperforms state-of-the-art models at low false positive rates, with a peak improvement of 12%. Moreover, we showcase how self-supervised pre-training matches the performance of fully supervised models with only 20% of the training data, and we inspect the output of Nebula through explainable AI techniques, pinpointing how attention focuses on specific tokens correlated with the malicious activities of malware families. To foster reproducibility, we open-source our findings and models at https://github.com/dtrizna/nebula.
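
The tokenize-embed-attend pipeline the abstract describes can be sketched minimally in NumPy. This is an illustrative toy, not the authors' implementation: the vocabulary, event names, embedding size, and the parameter-free single-head attention are all assumptions for demonstration.

```python
import numpy as np

def tokenize(events, vocab):
    """Map raw dynamic-log events (API names, file/network ops) to
    integer ids; out-of-vocabulary events fall back to id 0 (<unk>)."""
    return [vocab.get(e, 0) for e in events]

def self_attention(X):
    """Scaled dot-product self-attention with queries = keys = values = X
    (no learned projections, single head, illustration only)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X

# Toy vocabulary and a short behavioral report; the last event is OOV.
vocab = {"<unk>": 0, "CreateFileW": 1, "connect": 2, "RegSetValueW": 3}
events = ["CreateFileW", "connect", "VirtualAllocEx"]
ids = tokenize(events, vocab)

rng = np.random.default_rng(0)
emb = rng.normal(size=(len(vocab), 8))   # toy embedding table
X = emb[ids]                             # (seq_len, 8) token embeddings
pooled = self_attention(X).mean(axis=0)  # fixed-size report vector
```

In the full system, `pooled` would feed a classification head; the point here is only that heterogeneous events share one token space before attention mixes them.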

</details>

<details>

<summary>2024-10-26 22:27:21 - Classification under strategic adversary manipulation using pessimistic bilevel optimisation</summary>

- *David Benfield, Stefano Coniglio, Martin Kunc, Phan Tu Vuong, Alain Zemkoho*

- `2410.20284v1` - [abs](http://arxiv.org/abs/2410.20284v1) - [pdf](http://arxiv.org/pdf/2410.20284v1)

> Adversarial machine learning concerns situations in which learners face attacks from active adversaries. Such scenarios arise in applications such as spam email filtering, malware detection, and fake-image generation, where security methods must be actively updated to keep up with the ever-improving generation of malicious data. We model these interactions between the learner and the adversary as a game and formulate the problem as a pessimistic bilevel optimisation problem, with the learner taking the role of the leader. The adversary, modelled as a stochastic data generator, takes the role of the follower, generating data in response to the classifier. While existing models rely on the assumption that the adversary will choose the least costly solution, leading to a convex lower-level problem with a unique solution, we present a novel model and solution method which do not make such assumptions. We compare these to the existing approach and observe significant improvements in performance, suggesting that relaxing these assumptions leads to a more realistic model.
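
The leader-follower structure in the abstract corresponds to the generic pessimistic bilevel form below; the symbols ($w$ for the learner's parameters, $z$ for the adversary's data, $F$ and $f$ for the two objectives) are illustrative, not the paper's notation.

```latex
\min_{w} \; \max_{z \in S(w)} \; F(w, z)
\qquad \text{where} \qquad
S(w) = \operatorname*{arg\,min}_{z} \; f(w, z)
```

The pessimistic element is the inner maximum: the learner hedges against the worst response in the follower's entire solution set $S(w)$, instead of assuming the adversary breaks ties in the learner's favour, which is what makes the non-unique (non-convex lower-level) case tractable to state.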

</details>
