Commit b84ca5a — update 2023-12-28 06:16:15

actions-user committed Dec 27, 2023 (1 parent: a2058d7)

Showing 2 changed files with 13 additions and 1 deletion.
arXiv_db/Malware/2023.md (12 additions, 0 deletions)

@@ -3566,3 +3566,15 @@

</details>

<details>

<summary>2023-12-25 21:25:55 - Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits!</summary>

- *Tirth Patel, Fred Lu, Edward Raff, Charles Nicholas, Cynthia Matuszek, James Holt*

- `2312.15813v1` - [abs](http://arxiv.org/abs/2312.15813v1) - [pdf](http://arxiv.org/pdf/2312.15813v1)

> Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines, meaning a 0.1% change can cause an overwhelming number of false positives. However, academic research is often restricted to public datasets on the order of ten thousand samples, which are too small to detect improvements that may be relevant to industry. Working within these constraints, we devise an approach to generate a benchmark of configurable difficulty from a pool of available samples. This is done by leveraging malware family information from tools like AVClass to construct training/test splits that have different generalization rates, as measured by a secondary model. Our experiments demonstrate that using a less accurate secondary model with disparate features is effective at producing benchmarks for a more sophisticated target model that is under evaluation. We also ablate against alternative designs to show the need for our approach.
</details>
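The splitting idea described in the abstract — using family labels (e.g. from AVClass) so that whole malware families are held out of training, which makes the test set harder to generalize to — can be sketched roughly as below. The function name `family_split` and its arguments are illustrative, not from the paper, and the paper additionally tunes split difficulty with a secondary model, which this sketch omits:

```python
# Hypothetical sketch of a family-aware train/test split. Grouping by family
# keeps near-duplicate variants of one family from landing on both sides of
# the split, which is what makes the resulting benchmark harder.
from collections import defaultdict
import random

def family_split(samples, families, test_frac=0.2, seed=0):
    """Split sample indices so that no malware family spans both sets."""
    by_family = defaultdict(list)
    for idx, fam in enumerate(families):
        by_family[fam].append(idx)

    fams = list(by_family)
    random.Random(seed).shuffle(fams)

    # Assign whole families to the test set until it reaches the target size.
    test_idx, target = [], int(test_frac * len(samples))
    for fam in fams:
        if len(test_idx) >= target:
            break
        test_idx.extend(by_family[fam])

    test_set = set(test_idx)
    train_idx = [i for i in range(len(samples)) if i not in test_set]
    return train_idx, test_idx

# Toy data: 10 samples across 4 families.
samples = [f"s{i}" for i in range(10)]
families = ["a", "a", "a", "b", "b", "c", "c", "c", "d", "d"]
train, test = family_split(samples, families)
```

A random per-sample split would instead mix each family across both sets, inflating measured accuracy; holding out families is the standard way to simulate encountering previously unseen malware.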
