* update 2024-11-27 06:21:49
actions-user committed Nov 26, 2024
1 parent dc68c5f commit 585fffe
Showing 2 changed files with 37 additions and 1 deletion.
36 changes: 36 additions & 0 deletions arXiv_db/Malware/2024.md
@@ -3634,3 +3634,39 @@

</details>

<details>

<summary>2024-11-22 20:34:26 - Gen-AI for User Safety: A Survey</summary>

- *Akshar Prabhu Desai, Tejasvi Ravi, Mohammad Luqman, Mohit Sharma, Nithya Kota, Pranjul Yadav*

- `2411.06606v2` - [abs](http://arxiv.org/abs/2411.06606v2) - [pdf](http://arxiv.org/pdf/2411.06606v2)

> Machine learning and data mining techniques (i.e. supervised and unsupervised techniques) are used across domains to detect user safety violations. Examples include classifiers used to detect whether an email is spam or a web page is requesting bank login information. However, existing ML/DM classifiers are limited in their ability to understand the context and nuances of natural language. These challenges are addressed by the arrival of Gen-AI techniques, with their inherent ability to translate between languages and to be fine-tuned across tasks and domains. In this manuscript, we provide a comprehensive overview of work that applies Gen-AI techniques to user safety. In particular, we first describe the domains (e.g. phishing, malware, content moderation, counterfeit, physical safety) across which Gen-AI techniques have been applied. Next, we describe how Gen-AI techniques can be used in conjunction with various data modalities, i.e. text, images, videos, audio, and executable binaries, to detect violations of user safety. Further, we also provide an overview of how Gen-AI techniques can be used in an adversarial setting. We believe this work represents the first summarization of Gen-AI techniques for user safety.

</details>
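
A purely illustrative sketch of the prompt-based classification pattern this survey covers, applied to the text modality (phishing detection). It is not taken from the paper; the prompt wording and the `call_llm` stub are placeholders for whatever Gen-AI backend is actually used.

```python
# Sketch: asking a generative model to label content for a user-safety violation.
# `call_llm` is a stand-in; swap in a real model or API call.

PROMPT_TEMPLATE = """You are a user-safety classifier.
Decide whether the following email is a phishing attempt.
Answer with exactly one word: PHISHING or SAFE.

Email:
{email}
"""

def call_llm(prompt: str) -> str:
    # Placeholder response; replace with an actual generative-model call.
    return "PHISHING"

def is_phishing(email: str) -> bool:
    answer = call_llm(PROMPT_TEMPLATE.format(email=email)).strip().upper()
    return answer.startswith("PHISHING")

if __name__ == "__main__":
    sample = "Your account is locked. Enter your bank login at http://example.test to restore access."
    print(is_phishing(sample))
```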

<details>

<summary>2024-11-24 15:37:29 - ExAL: An Exploration Enhanced Adversarial Learning Algorithm</summary>

- *A Vinil, Aneesh Sreevallabh Chivukula, Pranav Chintareddy*

- `2411.15878v1` - [abs](http://arxiv.org/abs/2411.15878v1) - [pdf](http://arxiv.org/pdf/2411.15878v1)

> Adversarial learning is critical for enhancing model robustness, aiming to defend against adversarial attacks that jeopardize machine learning systems. Traditional methods often lack efficient mechanisms to explore diverse adversarial perturbations, leading to limited model resilience. In game-theoretic settings, where adversarial dynamics are analyzed through frameworks like Nash equilibrium, exploration mechanisms allow the discovery of diverse strategies and enhance system robustness. However, existing adversarial learning methods often fail to incorporate structured exploration effectively, reducing their ability to improve model defense comprehensively. To address these challenges, we propose a novel Exploration-enhanced Adversarial Learning Algorithm (ExAL), leveraging the Exponentially Weighted Momentum Particle Swarm Optimizer (EMPSO) to generate optimized adversarial perturbations. ExAL integrates exploration-driven mechanisms to discover perturbations that maximize impact on the model's decision boundary while preserving structural coherence in the data. We evaluate the performance of ExAL on the MNIST Handwritten Digits and Blended Malware datasets. Experimental results demonstrate that ExAL significantly enhances model resilience to adversarial attacks by improving robustness through adversarial learning.

</details>
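
For readers unfamiliar with swarm-based perturbation search, the following is a generic momentum-PSO sketch of the idea the ExAL abstract describes. It is not the authors' EMPSO update rule; the toy linear model, perturbation budget, and coefficients are assumptions for illustration only.

```python
import numpy as np

# Generic particle-swarm search for an adversarial perturbation (illustration only,
# not the paper's EMPSO). A swarm of candidate perturbations is evolved to push a
# toy linear model's score across the decision boundary within an L-inf budget.

rng = np.random.default_rng(0)
dim = 20
w = rng.normal(size=dim)        # toy model weights (stand-in for a real model)
x = rng.normal(size=dim)        # clean input to perturb
eps = 0.5                       # perturbation budget (L-inf)

def attack_objective(delta):
    # Higher is better for the attacker: drive the score negative, penalize size.
    return -(w @ (x + delta)) - 0.1 * np.linalg.norm(delta)

n_particles, n_iters = 30, 100
pos = rng.uniform(-eps, eps, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([attack_objective(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # Momentum term plus attraction toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -eps, eps)          # stay inside the budget
    vals = np.array([attack_objective(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("clean score:", w @ x, " adversarial score:", w @ (x + gbest))
```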

<details>

<summary>2024-11-25 13:30:31 - A Study of Malware Prevention in Linux Distributions</summary>

- *Duc-Ly Vu, Trevor Dunlap, Karla Obermeier-Velazquez, Paul Gibert, John Speed Meyers, Santiago Torres-Arias*

- `2411.11017v2` - [abs](http://arxiv.org/abs/2411.11017v2) - [pdf](http://arxiv.org/pdf/2411.11017v2)

> Malicious attacks on open source software packages are a growing concern. This concern morphed into a panic-inducing crisis after the revelation of the XZ Utils backdoor, which would have provided the attacker with, according to one observer, a "skeleton key" to the internet. This study therefore explores the challenges of preventing and detecting malware in Linux distribution package repositories. To do so, we ask two research questions: (1) What measures have Linux distributions implemented to counter malware, and how have maintainers experienced these efforts? (2) How effective are current malware detection tools at identifying malicious Linux packages? To answer these questions, we conduct interviews with maintainers at several major Linux distributions and introduce a Linux package malware benchmark dataset. Using this dataset, we evaluate the performance of six open-source malware detection scanners. Distribution maintainers, according to the interviews, have mostly focused on reproducible builds to date. Our interviews identified only a single Linux distribution, Wolfi OS, that performs active malware scanning. Using this new benchmark dataset, the evaluation found that the performance of existing open-source malware scanners is underwhelming. Most studied tools excel at producing false positives but only infrequently detect true malware. Those that avoid high false positive rates often do so at the expense of a satisfactory true positive rate. Our findings provide insights into Linux distribution package repositories' current practices for malware detection and demonstrate the current inadequacy of open-source tools designed to detect malicious Linux packages.

</details>
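
The benchmark-style evaluation described above boils down to comparing each scanner's verdicts against ground-truth labels. Below is a minimal sketch with hypothetical data; the scanner names, package names, and labels are invented, not the paper's dataset.

```python
# Sketch: reduce each scanner's verdicts over a labeled package set to
# true-positive and false-positive rates (hypothetical data, illustration only).

# Ground truth: package name -> True if the package is actually malicious.
labels = {"pkg-a": True, "pkg-b": False, "pkg-c": True, "pkg-d": False}

# Scanner verdicts: scanner name -> {package -> flagged as malicious?}.
verdicts = {
    "scanner-1": {"pkg-a": True, "pkg-b": True, "pkg-c": False, "pkg-d": True},
    "scanner-2": {"pkg-a": False, "pkg-b": False, "pkg-c": False, "pkg-d": False},
}

def rates(flags):
    """Return (true-positive rate, false-positive rate) for one scanner."""
    tp = sum(flags[p] and labels[p] for p in labels)
    fp = sum(flags[p] and not labels[p] for p in labels)
    positives = sum(labels.values())
    negatives = len(labels) - positives
    return tp / positives, fp / negatives

for name, flags in verdicts.items():
    tpr, fpr = rates(flags)
    print(f"{name}: TPR={tpr:.2f}  FPR={fpr:.2f}")
```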

