Commit 75c82af

update 2024-11-02 06:20:15
actions-user committed Nov 1, 2024
1 parent 2812fe6 commit 75c82af
Showing 2 changed files with 25 additions and 1 deletion.
24 changes: 24 additions & 0 deletions arXiv_db/Malware/2024.md
@@ -3294,3 +3294,27 @@

</details>

<details>

<summary>2024-10-31 12:53:56 - Metamorphic Malware Evolution: The Potential and Peril of Large Language Models</summary>

- *Pooria Madani*

- `2410.23894v1` - [abs](http://arxiv.org/abs/2410.23894v1) - [pdf](http://arxiv.org/pdf/2410.23894v1)

> Code metamorphism refers to a programming technique in which a program consistently and automatically modifies its own code (in part or in full) while retaining its core functionality. The technique is often used for online performance optimization and automated crash recovery in certain mission-critical applications. However, it has been misappropriated by malware creators to bypass the signature-based detection measures instituted by anti-malware engines. To date, the code mutation engines used by threat actors offer only a limited degree of mutation, which is frequently detectable via static code analysis. The advent of large language models (LLMs), such as ChatGPT 4.0 and Google Bard, may lead to a significant evolution in this landscape: these models have demonstrated a level of algorithm comprehension and code synthesis capability that closely resembles human ability. This advancement has raised concerns among experts that threat actors could exploit such models to generate sophisticated metamorphic malware. This paper explores the potential of several prominent LLMs for software code mutation, which could be used to reconstruct (with mutation) existing malware code bases or to create new forms of embedded mutation engines for next-generation metamorphic malware. In this work, we introduce a framework for building self-testing program mutation engines based on LLM/Transformer-based models. The proposed framework serves as an essential tool for testing next-generation metamorphic malware detection engines.
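
The abstract leaves the framework's internals unspecified, but the core idea of a self-testing mutation engine can be sketched as a short loop: ask an LLM for a rewrite of the source, run the program's functional test suite against the candidate, and accept the mutation only if behaviour is preserved. The sketch below is a hypothetical illustration under those assumptions, not the paper's implementation; `rewrite_source` stands in for whatever model API is used.

```python
import os
import subprocess
import tempfile

def rewrite_source(source: str) -> str:
    """Hypothetical LLM call that returns a rewritten, semantically
    equivalent version of `source`. Swap in a real model API here."""
    raise NotImplementedError

def passes_tests(source: str, test_cmd: list[str]) -> bool:
    """Write the candidate to a temporary file and run the functional
    test suite against it; return True only if every test passes."""
    fd, path = tempfile.mkstemp(suffix=".py")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(source)
        return subprocess.run(test_cmd + [path]).returncode == 0
    finally:
        os.unlink(path)

def self_testing_mutate(source: str, test_cmd: list[str],
                        attempts: int = 5) -> str:
    """Propose LLM rewrites until one preserves behaviour under the
    test suite; otherwise keep the original source."""
    for _ in range(attempts):
        candidate = rewrite_source(source)
        if passes_tests(candidate, test_cmd):
            return candidate
    return source
```

Discarding candidates that fail the suite is what makes such an engine "self-testing": only behaviour-preserving mutations survive.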

</details>

<details>

<summary>2024-10-31 15:19:33 - Assessing the Impact of Packing on Machine Learning-Based Malware Detection and Classification Systems</summary>

- *Daniel Gibert, Nikolaos Totosis, Constantinos Patsakis, Giulio Zizzo, Quan Le*

- `2410.24017v1` - [abs](http://arxiv.org/abs/2410.24017v1) - [pdf](http://arxiv.org/pdf/2410.24017v1)

> The proliferation of malware, particularly through the use of packing, presents a significant challenge to static analysis and signature-based malware detection techniques. Applying packing to the original executable code makes extracting meaningful features and signatures challenging. To cope with the increasing amount of malware in the wild, researchers and anti-malware companies have started harnessing machine learning capabilities, with very promising results. However, little is known about the effects of packing on static, machine learning-based malware detection and classification systems. This work addresses that gap by investigating the impact of packing on the performance of static machine learning-based models used for malware detection and classification, with a particular focus on those using visualisation techniques. To this end, we present a comprehensive analysis of various packing techniques and their effects on the performance of machine learning-based detectors and classifiers. Our findings highlight the limitations of current static detection and classification systems and underscore the need for proactive measures to effectively counteract the evolving tactics of malware authors.
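
The abstract does not spell out the visualisation pipeline, but image-based malware classifiers commonly use the byte-to-grayscale encoding popularised by Nataraj et al., where each byte of the binary becomes one pixel; packing compresses or encrypts sections, pushing byte entropy toward 8 bits/byte and destroying the byte-level textures such models learn from. Below is a minimal sketch of both steps, assuming that standard encoding rather than the paper's exact pipeline.

```python
import numpy as np

def binary_to_image(path: str, width: int = 256) -> np.ndarray:
    """Read a file as raw bytes and reshape it into a 2-D uint8 array,
    one grayscale pixel per byte (trailing bytes are dropped)."""
    data = np.fromfile(path, dtype=np.uint8)
    height = len(data) // width
    return data[: height * width].reshape(height, width)

def shannon_entropy(data: np.ndarray) -> float:
    """Byte-level Shannon entropy in bits/byte; packed or encrypted
    payloads typically score close to 8, a common packing heuristic."""
    counts = np.bincount(data, minlength=256)
    probs = counts[counts > 0] / data.size
    return float(-(probs * np.log2(probs)).sum())
```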

</details>
