Commit f8517c0

update 2024-01-25 06:17:09

actions-user committed Jan 24, 2024
1 parent 3f46c6f

Showing 2 changed files with 25 additions and 1 deletion.
arXiv_db/Malware/2024.md: 24 additions, 0 deletions

@@ -126,3 +126,27 @@

</details>

<details>

<summary>2024-01-22 22:12:05 - GenAI Against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models</summary>

- *Emilio Ferrara*

- `2310.00737v3` - [abs](http://arxiv.org/abs/2310.00737v3) - [pdf](http://arxiv.org/pdf/2310.00737v3)

> Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are marvels of technology; celebrated for their prowess in natural language processing and multimodal content generation, they promise a transformative future. But as with all powerful tools, they come with their shadows. Picture living in a world where deepfakes are indistinguishable from reality, where synthetic identities orchestrate malicious campaigns, and where targeted misinformation or scams are crafted with unparalleled precision. Welcome to the darker side of GenAI applications. This article is not just a journey through the meanders of potential misuse of GenAI and LLMs, but also a call to recognize the urgency of the challenges ahead. As we navigate the seas of misinformation campaigns, malicious content generation, and the eerie creation of sophisticated malware, we'll uncover the societal implications that ripple through the GenAI revolution we are witnessing. From AI-powered botnets on social media platforms to the unnerving potential of AI to generate fabricated identities, or alibis made of synthetic realities, the stakes have never been higher. The lines between the virtual and the real worlds are blurring, and the consequences of GenAI's potential nefarious applications impact us all. This article serves both as a synthesis of rigorous research presented on the risks of GenAI and misuse of LLMs and as a thought-provoking vision of the different types of harmful GenAI applications we might encounter in the near future, and some ways we can prepare for them.

</details>

<details>

<summary>2024-01-23 14:25:43 - MORPH: Towards Automated Concept Drift Adaptation for Malware Detection</summary>

- *Md Tanvirul Alam, Romy Fieblinger, Ashim Mahara, Nidhi Rastogi*

- `2401.12790v1` - [abs](http://arxiv.org/abs/2401.12790v1) - [pdf](http://arxiv.org/pdf/2401.12790v1)

> Concept drift is a significant challenge for malware detection, as the performance of trained machine learning models degrades over time, rendering them impractical. While prior research in malware concept drift adaptation has primarily focused on active learning, which involves selecting representative samples to update the model, self-training has emerged as a promising approach to mitigate concept drift. Self-training involves retraining the model using pseudo labels to adapt to shifting data distributions. In this research, we propose MORPH -- an effective pseudo-label-based concept drift adaptation method specifically designed for neural networks. Through extensive experimental analysis of Android and Windows malware datasets, we demonstrate the efficacy of our approach in mitigating the impact of concept drift. Our method offers the advantage of reducing annotation efforts when combined with active learning. Furthermore, our method significantly improves over existing works in automated concept drift adaptation for malware detection.

</details>
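
The self-training idea summarized in the MORPH abstract above lends itself to a short illustration. The sketch below is a generic pseudo-label self-training loop, not the paper's MORPH implementation; the scikit-learn classifier, the 0.9 confidence threshold, the round count, and the synthetic data are all assumptions for demonstration:

```python
# A minimal sketch of pseudo-label self-training for concept drift adaptation.
# Generic illustration only, not the MORPH algorithm from the paper; the
# classifier, threshold, and data below are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(model, X_labeled, y_labeled, X_drifted, threshold=0.9, rounds=3):
    """Adapt `model` to unlabeled drifted samples via confident pseudo-labels."""
    X_train, y_train = X_labeled, y_labeled
    for _ in range(rounds):
        model.fit(X_train, y_train)
        proba = model.predict_proba(X_drifted)
        confident = proba.max(axis=1) >= threshold  # keep confident predictions only
        pseudo = model.classes_[proba.argmax(axis=1)][confident]
        # Retrain on the union of ground-truth labels and confident pseudo-labels.
        X_train = np.vstack([X_labeled, X_drifted[confident]])
        y_train = np.concatenate([y_labeled, pseudo])
    return model

# Usage: adapt a classifier trained on older samples to a drifted batch.
rng = np.random.default_rng(0)
X_old = rng.normal(size=(200, 8))
y_old = rng.integers(0, 2, size=200)
X_new = rng.normal(loc=0.5, size=(100, 8))  # simulated drifted feature vectors
adapted = self_train(LogisticRegression(max_iter=1000), X_old, y_old, X_new)
```

Raising the confidence threshold trades adaptation speed for pseudo-label quality, which is the central tuning knob in this family of methods.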
