---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- cybersecurity
- document-classification
- sft
- lora
size_categories:
- 10K<n<100K
---

> Dobrovolskyi, I. *Security Document Classification with a Fine-Tuned Local
> Large Language Model: Benchmark Data and an Open-Source System.* Journal of
> Information Security and Applications, 2026.

## Dataset

- **78,358 balanced samples** (95 / 5 split → 74,441 train + 3,917 validation)
- Alpaca format: `instruction`, `input`, `output`
- Output is a JSON array of findings, each with `category`, `subcategory`, `severity`, `explanation` (an illustrative record is sketched after the training configuration below)
- Stratified across 7 categories × 51 subcategories

## Composition

The initial corpus contained 116,956 raw samples drawn from 13 publicly available sources. To remove the NVD-dominated skew, each subcategory was capped at 5,000 samples (a sketch of this step appears after the training configuration below) and underrepresented subcategories were augmented with GPT-4–generated synthetic data and hard-negative boundary cases.

| Source | Raw | Balanced | License | Categories |
|---|---:|---:|---|---|
| NVD CVE Database | 50,000 | 8,475 | Public domain | `malicious.exploit` |
| Synthetic augmentation | 33,100 | 39,754 | Generated (GPT-4) | All categories |
| Hard negatives | 6,400 | 6,134 | Generated (GPT-4) | Boundary cases |
| AI4Privacy | 5,000 | 4,851 | Apache 2.0 | `pii.*` |
| Fenrir v2.0 | 5,000 | 4,573 | Apache 2.0 | `malicious.*` |
| SEC EDGAR | 3,000 | 3,000 | Public domain | `financial.*` |
| SecLists | 3,229 | 1,708 | MIT | `malicious.injection` |
| Phishing Dataset | 3,000 | 2,796 | Apache 2.0 | `malicious.phishing` |
| NIST Training | 3,000 | 2,761 | Public domain | `safe.documentation` |
| Enron Email Corpus | 2,000 | 1,902 | Public domain | `pii.*`, `credentials.*` |
| MITRE ATT&CK v14 | 1,620 | 871 | Royalty-free | `malicious.malware` |
| Loghub | 1,280 | 1,280 | Research-free | `safe.config` |
| Other (3 sources) | 327 | 253 | Permissive | Multiple |
| **Total** | **116,956** | **78,358** | | |

All sources have been verified safe for AI training. Copyleft-licensed (GPL/LGPL) and ShareAlike-licensed (CC BY-SA) materials are excluded, so the corpus is suitable for commercial training use. Of the final 78,358 samples, 39,754 (50.7%) are GPT-4–generated synthetic augmentation and 6,134 (7.8%) are hard-negative boundary cases; the remaining 32,470 (41.4%) come from the 13 external sources listed above.

## Structure

```
sft/
├── train_alpaca.jsonl   # 74,441 samples
└── val_alpaca.jsonl     # 3,917 samples
processed/               # intermediate per-source files
synthetic/               # GPT-4 generations and hard negatives
```

## LoRA training configuration

| Parameter | Value |
|---|---|
| Base model | Qwen 3.5 27B (dense) |
| LoRA rank (r) | 128 |
| LoRA alpha (α) | 256 |
| Target modules | q/k/v/o_proj, gate/up/down_proj |
| Dropout | 0.05 |
| Learning rate | 2 × 10⁻⁵, cosine decay, 10% warmup |
| Effective batch size | 16 (micro-batch 4 × 4 gradient-accumulation steps) |
| Epochs | 5 |
| Precision | bf16 |
| Max sequence length | 4,096 tokens |
| Hardware | 8× NVIDIA A100 80GB SXM4 |
| Wall-clock time | 10.5 hours |

Library versions: `trl == 0.11.4`, `transformers == 4.45.2`, `peft == 0.13.2`.
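The table above maps onto `trl`/`peft` roughly as follows. This is a minimal sketch, not the authors' training script: the prompt template joining the Alpaca fields, the `output_dir`, and the base-model checkpoint string are assumptions (the card does not give a Hugging Face identifier for the base model).

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Join the Alpaca fields into one training text.
# NOTE: the actual prompt template is an assumption of this sketch.
def to_text(example):
    return {"text": f"{example['instruction']}\n\n{example['input']}\n\n{example['output']}"}

train = load_dataset("json", data_files="sft/train_alpaca.jsonl")["train"].map(to_text)

peft_config = LoraConfig(
    r=128,                                  # LoRA rank
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="lora-out",                  # placeholder
    dataset_text_field="text",
    max_seq_length=4096,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.10,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,          # 4 × 4 = effective batch size 16 as reported;
                                            # multi-GPU data parallelism multiplies this
    num_train_epochs=5,
    bf16=True,
)

trainer = SFTTrainer(
    model="<base-model-checkpoint>",        # placeholder; see the table above
    args=args,
    train_dataset=train,
    peft_config=peft_config,
)
trainer.train()
```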
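For completeness, here is how the splits described under "Dataset" and "Structure" can be inspected after cloning the repo locally. A minimal sketch: the record shown is invented for illustration (it is not a dataset sample), and `output` is assumed to be stored as a JSON-encoded string, as is usual for Alpaca-format files.

```python
import json
from datasets import load_dataset

# Load both splits straight from the local JSONL files.
ds = load_dataset(
    "json",
    data_files={
        "train": "sft/train_alpaca.jsonl",
        "validation": "sft/val_alpaca.jsonl",
    },
)
print(ds)  # expected: 74,441 train rows, 3,917 validation rows

# Illustrative record shape (values invented for this sketch).
example = {
    "instruction": "Classify this document and report all security findings.",
    "input": "Please wire the funds today. My SSN is 078-05-1120 if needed.",
    "output": json.dumps([
        {
            "category": "pii",
            "subcategory": "pii.ssn",
            "severity": "high",
            "explanation": "The text discloses a US Social Security number.",
        }
    ]),
}
findings = json.loads(example["output"])  # `output` parses to an array of findings
```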
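Finally, a sketch of the per-subcategory capping step described under "Composition". This is an assumed reconstruction, not the authors' balancing script; the input path and field name are hypothetical.

```python
import json
import random
from collections import defaultdict

CAP = 5_000     # per-subcategory ceiling described under "Composition"
random.seed(0)  # reproducible downsampling

# Group raw samples by subcategory (file path and field name are hypothetical).
by_subcategory = defaultdict(list)
with open("processed/raw_corpus.jsonl") as f:
    for line in f:
        sample = json.loads(line)
        by_subcategory[sample["subcategory"]].append(sample)

# Keep at most CAP samples per subcategory; underrepresented subcategories are
# left as-is here (in the released corpus they were then topped up with GPT-4
# synthetic data and hard negatives).
balanced = []
for subcategory, samples in by_subcategory.items():
    keep = random.sample(samples, CAP) if len(samples) > CAP else samples
    balanced.extend(keep)
```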
## License

Apache 2.0.

## Companion artifacts

- Benchmark: [`torchsight/cybersecurity-classification-benchmark`](https://huggingface.co/datasets/torchsight/cybersecurity-classification-benchmark)
- Models: [`torchsight/beam-q4_K_M`](https://huggingface.co/torchsight/beam-q4_K_M), [`torchsight/beam-q8_0`](https://huggingface.co/torchsight/beam-q8_0), [`torchsight/beam-f16`](https://huggingface.co/torchsight/beam-f16)
- Source: [github.com/IvanDobrovolsky/torchsight](https://github.com/IvanDobrovolsky/torchsight)