update 2024-12-25 06:19:54
actions-user committed Dec 24, 2024
1 parent adba2ce commit 30c7763
Showing 2 changed files with 25 additions and 1 deletion.
24 changes: 24 additions & 0 deletions arXiv_db/Malware/2024.md
@@ -4010,3 +4010,27 @@

</details>

<details>

<summary>2024-12-23 10:19:29 - A Temporal Convolutional Network-based Approach for Network Intrusion Detection</summary>

- *Rukmini Nazre, Rujuta Budke, Omkar Oak, Suraj Sawant, Amit Joshi*

- `2412.17452v1` - [abs](http://arxiv.org/abs/2412.17452v1) - [pdf](http://arxiv.org/pdf/2412.17452v1)

> Network intrusion detection is critical for securing modern networks, yet the complexity of network traffic poses significant challenges to traditional methods. This study proposes a Temporal Convolutional Network (TCN) model featuring a residual block architecture with dilated convolutions to capture dependencies in network traffic data while ensuring training stability. The TCN's ability to process sequences in parallel enables faster, more accurate sequence modeling than Recurrent Neural Networks. Evaluated on the Edge-IIoTset dataset, which includes 15 classes: normal traffic and 14 cyberattack types, the proposed model achieved an accuracy of 96.72% and a loss of 0.0688, outperforming 1D CNN, CNN-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-GRU-LSTM models. A class-wise classification report, encompassing metrics such as recall, precision, accuracy, and F1-score, demonstrated the TCN model's superior performance across varied attack categories, including Malware, Injection, and DDoS. These results underscore the model's potential to effectively address the complexities of network intrusion detection.

</details>
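Not from the paper's code: a minimal PyTorch sketch of the dilated-convolution residual block that the abstract above describes. The feature count, channel width, block depth, and classifier head here are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalResidualBlock(nn.Module):
    """One TCN residual block: two dilated causal 1-D convolutions plus a skip connection."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Left padding keeps the convolution causal: no timestep sees the future.
        self.pad = (kernel_size - 1) * dilation
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation)
        # 1x1 convolution matches channel counts so the residual sum is well-defined.
        self.skip = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, time)
        y = F.relu(self.conv1(F.pad(x, (self.pad, 0))))
        y = F.relu(self.conv2(F.pad(y, (self.pad, 0))))
        return F.relu(y + self.skip(x))  # residual connection aids training stability

# Stacking blocks with exponentially growing dilation widens the receptive field.
NUM_FEATURES = 40  # illustrative feature count, not the paper's preprocessing
tcn = nn.Sequential(
    *[TemporalResidualBlock(NUM_FEATURES if i == 0 else 64, 64, dilation=2 ** i)
      for i in range(4)],
    nn.AdaptiveAvgPool1d(1),  # pool over the time dimension
    nn.Flatten(),
    nn.Linear(64, 15),        # 15 classes: normal traffic + 14 attack types
)

x = torch.randn(8, NUM_FEATURES, 100)  # a batch of 100-step traffic sequences
print(tcn(x).shape)                    # torch.Size([8, 15])
```

Growing the dilation as 1, 2, 4, 8 is the usual TCN recipe: the receptive field expands exponentially with depth while every convolution remains parallel across timesteps, which is the speed advantage over RNNs that the abstract points to.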

<details>

<summary>2024-12-23 14:36:37 - Emerging Security Challenges of Large Language Models</summary>

- *Herve Debar, Sven Dietrich, Pavel Laskov, Emil C. Lupu, Eirini Ntoutsi*

- `2412.17614v1` - [abs](http://arxiv.org/abs/2412.17614v1) - [pdf](http://arxiv.org/pdf/2412.17614v1)

> Large language models (LLMs) have achieved record adoption in a short period of time across many different sectors, including high-importance areas such as education [4] and healthcare [23]. LLMs are open-ended models trained on diverse data without being tailored for specific downstream tasks, enabling broad applicability across various domains. They are commonly used for text generation, but are also widely used to assist with code generation [3] and even the analysis of security information, as Microsoft Security Copilot demonstrates [18]. Traditional Machine Learning (ML) models are vulnerable to adversarial attacks [9]. Concerns about the potential security implications of such wide-scale adoption of LLMs have therefore led to the creation of this working group on the security of LLMs. During the Dagstuhl seminar on "Network Attack Detection and Defense - AI-Powered Threats and Responses", the working group discussions focused on the vulnerability of LLMs to adversarial attacks rather than on their potential use in generating malware or enabling cyberattacks. Although we note the potential threat represented by the latter, the role of LLMs in such uses is mostly that of an accelerator for development, similar to their role in benign use. To make the analysis more specific, the working group employed ChatGPT as a concrete example of an LLM and addressed the following points, which also form the structure of this report: 1. How do LLMs differ in vulnerabilities from traditional ML models? 2. What are the attack objectives in LLMs? 3. How complex is it to assess the risks posed by the vulnerabilities of LLMs? 4. What is the supply chain in LLMs, how does data flow in and out of systems, and what are the security implications? We conclude with an overview of open challenges and an outlook.

</details>

