
Commit

update 2024-12-19 06:20:49

actions-user committed Dec 18, 2024
1 parent 8d62020 commit 8ca3be1

Showing 2 changed files with 37 additions and 1 deletion.
36 changes: 36 additions & 0 deletions arXiv_db/Malware/2024.md
@@ -3880,6 +3880,18 @@

<details>

<summary>2024-12-16 01:54:07 - Comprehensive Survey on Adversarial Examples in Cybersecurity: Impacts, Challenges, and Mitigation Strategies</summary>

- *Li Li*

- `2412.12217v1` - [abs](http://arxiv.org/abs/2412.12217v1) - [pdf](http://arxiv.org/pdf/2412.12217v1)

> Deep learning (DL) has significantly transformed cybersecurity, enabling advancements in malware detection, botnet identification, intrusion detection, user authentication, and encrypted traffic analysis. However, the rise of adversarial examples (AE) poses a critical challenge to the robustness and reliability of DL-based systems. These subtle, crafted perturbations can deceive models, leading to severe consequences like misclassification and system vulnerabilities. This paper provides a comprehensive review of the impact of AE attacks on key cybersecurity applications, highlighting both their theoretical and practical implications. We systematically examine the methods used to generate adversarial examples, their specific effects across various domains, and the inherent trade-offs attackers face between efficacy and resource efficiency. Additionally, we explore recent advancements in defense mechanisms, including gradient masking, adversarial training, and detection techniques, evaluating their potential to enhance model resilience. By summarizing cutting-edge research, this study aims to bridge the gap between adversarial research and practical security applications, offering insights to fortify the adoption of DL solutions in cybersecurity.

</details>
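The survey above catalogs methods for generating adversarial examples. As a concrete illustration of the kind of perturbation it discusses, here is a minimal FGSM-style sketch in PyTorch (the fast gradient sign method, a standard generation baseline); this is illustrative code, not from the paper, and `model`, `x`, and `y` are hypothetical placeholders.

```python
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """One FGSM step: move x in the signed-gradient direction that
    increases the classification loss, bounded by epsilon per feature."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)     # loss the attacker wants to raise
    loss.backward()                              # gradient w.r.t. the input
    x_adv = x_adv + epsilon * x_adv.grad.sign()  # maximal epsilon-bounded step
    return x_adv.clamp(0.0, 1.0).detach()        # keep features in a valid range

# Hypothetical usage: adv_batch = fgsm_perturb(classifier, batch_x, batch_y)
```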

<details>

<summary>2024-12-16 08:20:29 - Android App Feature Extraction: A review of approaches for malware and app similarity detection</summary>

- *Simon Torka, Sahin Albayrak*
@@ -3914,3 +3926,27 @@

</details>

<details>

<summary>2024-12-17 03:59:21 - Interpreting GNN-based IDS Detections Using Provenance Graph Structural Features</summary>

- *Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, Muhyun Kim, Feng Chen, Murat Kantarcioglu, Kangkook Jee*

- `2306.00934v5` - [abs](http://arxiv.org/abs/2306.00934v5) - [pdf](http://arxiv.org/pdf/2306.00934v5)

> Advanced cyber threats (e.g., Fileless Malware and Advanced Persistent Threat (APT)) have driven the adoption of provenance-based security solutions. These solutions employ Machine Learning (ML) models for behavioral modeling and critical security tasks such as malware and anomaly detection. However, the opacity of ML-based security models limits their broader adoption, as the lack of transparency in their decision-making processes restricts explainability and verifiability. We tailored our solution towards Graph Neural Network (GNN)-based security solutions since recent studies employ GNNs to comprehensively digest system provenance graphs for security-critical tasks. To enhance the explainability of GNN-based security models, we introduce PROVEXPLAINER, a framework offering instance-level security-aware explanations using an interpretable surrogate model. PROVEXPLAINER's interpretable feature space consists of discriminant subgraph patterns and graph structural features, which can be directly mapped to the system provenance problem space, making the explanations human-understandable. By considering prominent GNN architectures (e.g., GAT and GraphSAGE) for anomaly detection tasks, we show how PROVEXPLAINER synergizes with current state-of-the-art (SOTA) GNN explainers to deliver domain- and instance-specific explanations. We measure the explanation quality using the fidelity+/fidelity- metric as used by traditional GNN explanation literature, and we incorporate the precision/recall metric where we consider the accuracy of the explanation against the ground truth. On malware and APT datasets, PROVEXPLAINER achieves up to 29%/27%/25% higher fidelity+, precision, and recall, and 12% lower fidelity-, respectively, compared to SOTA GNN explainers.

</details>
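The abstract above scores explanations with fidelity+/fidelity-. Below is a minimal sketch of those two metrics as the GNN-explainability literature commonly defines them, not PROVEXPLAINER's own implementation; `model` is assumed to return the probability of the originally predicted class, and `graph.subgraph` is a hypothetical masking helper.

```python
import torch

@torch.no_grad()
def fidelity_scores(model, graph, explanation_mask):
    """Fidelity+ asks whether removing the explanation changes the
    prediction (necessity: higher is better); fidelity- asks whether
    the explanation alone reproduces it (sufficiency: lower is better)."""
    p_full = model(graph)                                      # prob. on the full graph
    p_without = model(graph.subgraph(keep=~explanation_mask))  # explanation removed
    p_only = model(graph.subgraph(keep=explanation_mask))      # explanation alone
    fidelity_plus = (p_full - p_without).item()
    fidelity_minus = (p_full - p_only).item()
    return fidelity_plus, fidelity_minus
```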

<details>

<summary>2024-12-17 08:14:12 - Exploring AI-Enabled Cybersecurity Frameworks: Deep-Learning Techniques, GPU Support, and Future Enhancements</summary>

- *Tobias Becher, Simon Torka*

- `2412.12648v1` - [abs](http://arxiv.org/abs/2412.12648v1) - [pdf](http://arxiv.org/pdf/2412.12648v1)

> Traditional rule-based cybersecurity systems have proven highly effective against known malware threats. However, they face challenges in detecting novel threats. To address this issue, emerging cybersecurity systems are incorporating AI techniques, specifically deep-learning algorithms, to enhance their ability to detect incidents, analyze alerts, and respond to events. While these techniques offer a promising approach to combating dynamic security threats, they often require significant computational resources. Therefore, frameworks that incorporate AI-based cybersecurity mechanisms need to support the use of GPUs to ensure optimal performance. Many cybersecurity framework vendors do not provide sufficiently detailed information about their implementation, making it difficult to assess the techniques employed and their effectiveness. This study aims to overcome this limitation by providing an overview of the most widely used cybersecurity frameworks that utilize AI techniques, specifically focusing on frameworks that provide comprehensive information about their implementation. Our primary objective is to identify the deep-learning techniques employed by these frameworks and evaluate their support for GPU acceleration. We have identified a total of *two* deep-learning algorithms that are utilized by *three* out of 38 selected cybersecurity frameworks. Our findings aim to assist in selecting open-source cybersecurity frameworks for future research and assessing any discrepancies between deep-learning techniques used in theory and practice.

</details>
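The framework survey above turns on whether DL-based cybersecurity frameworks can exploit GPUs. As a rough illustration, here is the standard PyTorch device-selection pattern such a framework would need internally; the `detector` network is a hypothetical stand-in, not code from any surveyed framework.

```python
import torch
import torch.nn as nn

# Use the GPU when one is available, otherwise fall back to CPU: the
# usual pattern behind GPU acceleration in PyTorch-based pipelines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in for a framework's alert classifier.
detector = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),
).to(device)

events = torch.randn(32, 128, device=device)  # batch of event feature vectors
scores = detector(events).softmax(dim=-1)     # benign vs. malicious probabilities
```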
