Commit 8ba3b43: update 2024-11-20 06:21:00 (committed by actions-user, Nov 19, 2024)
File changed: arXiv_db/Malware/2024.md (36 additions)

</details>

<details>

<summary>2024-11-17 09:42:08 - A Study of Malware Prevention in Linux Distributions</summary>

- *Duc-Ly Vu, Trevor Dunlap, Karla Obermeier-Velazquez, Paul Gilbert, John Speed Meyers, Santiago Torres-Arias*

- `2411.11017v1` - [abs](http://arxiv.org/abs/2411.11017v1) - [pdf](http://arxiv.org/pdf/2411.11017v1)

> Malicious attacks on open source software packages are a growing concern. This concern morphed into a panic-inducing crisis after the revelation of the XZ Utils backdoor, which would have provided the attacker with, according to one observer, a "skeleton key" to the internet. This study therefore explores the challenges of preventing and detecting malware in Linux distribution package repositories. To do so, we ask two research questions: (1) What measures have Linux distributions implemented to counter malware, and how have maintainers experienced these efforts? (2) How effective are current malware detection tools at identifying malicious Linux packages? To answer these questions, we conduct interviews with maintainers at several major Linux distributions and introduce a Linux package malware benchmark dataset. Using this dataset, we evaluate the performance of six open source malware detection scanners. According to the interviews, distribution maintainers have mostly focused on reproducible builds to date; the interviews identified only a single Linux distribution, Wolfi OS, that performs active malware scanning. The evaluation on this new benchmark dataset found the performance of existing open-source malware scanners underwhelming: most of the studied tools excel at producing false positives but only infrequently detect true malware, while those that avoid high false positive rates often do so at the expense of a satisfactory true positive rate. Our findings provide insight into the current malware detection practices of Linux distribution package repositories and demonstrate the current inadequacy of open-source tools designed to detect malicious Linux packages.
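The precision/recall tradeoff the abstract describes (tools that flag everything versus tools that rarely flag anything) can be made concrete with standard binary-detection metrics. A minimal sketch; the scanner behaviors and counts below are hypothetical illustrations, not figures from the paper:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Standard detection metrics for one malware scanner on a labeled benchmark."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # true positive rate
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical scanners on a benchmark with 100 truly malicious packages:
noisy = detection_metrics(tp=60, fp=900, fn=40)  # flags liberally: decent recall, poor precision
quiet = detection_metrics(tp=5, fp=2, fn=95)     # flags rarely: high precision, poor recall
```

Either failure mode yields a low F1-score, which is one way to summarize the "underwhelming" performance the study reports.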

</details>

<details>

<summary>2024-11-18 04:12:26 - DeUEDroid: Detecting Underground Economy Apps Based on UTG Similarity</summary>

- *Zhuo Chen, Jie Liu, Yubo Hu, Lei Wu, Yajin Zhou, Yiling He, Xianhao Liao, Ke Wang, Jinku Li, Zhan Qin*

- `2209.01317v2` - [abs](http://arxiv.org/abs/2209.01317v2) - [pdf](http://arxiv.org/pdf/2209.01317v2)

> In recent years, the underground economy has been proliferating in mobile systems. Underground economy apps (UEware) profit from providing non-compliant services, especially in sensitive areas such as gambling, pornography, and loans. Unlike traditional malware, most of them (over 80%) carry no malicious payload. Due to these unique characteristics, existing detection approaches cannot effectively and efficiently mitigate this emerging threat. To address this problem, we propose a novel approach that effectively and efficiently detects UEware by considering their UI transition graphs (UTGs). Based on the proposed approach, we design and implement a system named DeUEDroid to perform the detection. To evaluate DeUEDroid, we collect 25,717 apps and build the first large-scale ground-truth dataset (1,700 apps) of UEware. Evaluation on the ground-truth dataset shows that DeUEDroid covers new UI features and statically constructs precise UTGs, achieving a 98.22% detection F1-score and 98.97% classification accuracy, significantly outperforming traditional approaches. A further evaluation involving 24,017 apps demonstrates the effectiveness and efficiency of UEware detection in real-world scenarios. It also reveals that UEware are prevalent: 54% of apps in the wild and 11% of apps in app stores are UEware. Our work sheds light on future work in analyzing and detecting UEware.
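The core idea of comparing apps by their UI transition graphs can be illustrated with a deliberately simplified similarity measure, Jaccard overlap of transition edges. This is a sketch for intuition only; the screen names are hypothetical and DeUEDroid's actual UTG construction and comparison are considerably more sophisticated:

```python
def utg_similarity(edges_a: set, edges_b: set) -> float:
    """Jaccard similarity over (source screen, destination screen) transition
    edges. A simplification of UTG comparison, for illustration only."""
    if not edges_a and not edges_b:
        return 1.0  # two empty graphs are trivially identical
    return len(edges_a & edges_b) / len(edges_a | edges_b)

# Hypothetical transition graphs for two apps:
app_x = {("login", "home"), ("home", "bet_lobby"), ("bet_lobby", "deposit")}
app_y = {("login", "home"), ("home", "bet_lobby"), ("bet_lobby", "withdraw")}
print(utg_similarity(app_x, app_y))  # 0.5: two of the four distinct edges are shared
```

The appeal of a UI-structure signal is that it does not depend on a malicious payload being present, which is exactly the gap the abstract identifies in traditional malware detection.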

</details>

<details>

<summary>2024-11-18 09:28:58 - The Dark Side of Trust: Authority Citation-Driven Jailbreak Attacks on Large Language Models</summary>

- *Xikang Yang, Xuehai Tang, Jizhong Han, Songlin Hu*

- `2411.11407v1` - [abs](http://arxiv.org/abs/2411.11407v1) - [pdf](http://arxiv.org/pdf/2411.11407v1)

> The widespread deployment of large language models (LLMs) across various domains has showcased their immense potential while exposing significant safety vulnerabilities. A major concern is ensuring that LLM-generated content aligns with human values. Existing jailbreak techniques reveal how this alignment can be compromised through specific prompts or adversarial suffixes. In this study, we introduce a new threat: LLMs' bias toward authority. While this inherent bias can improve the quality of LLM outputs, it also introduces a potential vulnerability, increasing the risk of producing harmful content. Notably, this bias manifests as varying levels of trust granted to different types of authoritative citations in harmful queries; for example, queries about malware development tend to elicit greater trust in GitHub sources. To better reveal these risks, we propose DarkCite, an adaptive authority citation matcher and generator designed for a black-box setting. DarkCite matches optimal citation types to specific risk types and generates authoritative citations relevant to harmful instructions, enabling more effective jailbreak attacks on aligned LLMs. Our experiments show that DarkCite achieves a higher attack success rate than previous methods (e.g., 76% versus 68% on Llama-2). To counter this risk, we propose an authenticity-and-harm verification defense strategy, raising the average defense pass rate (DPR) from 11% to 74%. More importantly, the ability to link citations to the content they describe has become a foundational capability of LLMs, amplifying the influence of their bias toward authority.
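The headline metrics here, attack success rate (ASR) and defense pass rate (DPR), are simple proportions over a set of evaluation prompts. A minimal sketch; the tallies below are illustrative placeholders, not the paper's raw data:

```python
def rate(successes: int, attempts: int) -> float:
    """Fraction of evaluation prompts with the given outcome (e.g., ASR or DPR)."""
    return successes / attempts if attempts else 0.0

# Illustrative tallies over a hypothetical set of 100 harmful prompts:
asr = rate(76, 100)       # attack success rate: jailbreaks that elicited harmful output
dpr_base = rate(11, 100)  # defense pass rate without citation verification
dpr_def = rate(74, 100)   # defense pass rate with authenticity-and-harm verification
```

Reporting both rates matters: a defense only helps if it raises DPR substantially without simply refusing all citation-bearing prompts.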

</details>
