
SEevenLLM

Introduction

We introduce SEVENLLM, a comprehensive framework developed to enhance the understanding and task-execution capabilities of Large Language Models (LLMs) in the analysis of cybersecurity events. SEVENLLM both improves how LLMs comprehend and carry out operational tasks in this domain and proposes a set of evaluation methods tailored for these purposes. The experiments conducted under this framework offer useful reference points for work in this area.

You can learn more about our work in SEvenLLM: Benchmarking, Eliciting, and Enhancing Abilities of Large Language Models in Cyber Threat Intelligence.

Dataset

We provide two datasets, SEVENLLM-Instruct and SEVENLLM-Bench, designed for training and evaluating model capabilities, respectively. Both datasets were built with the Select-Instruct method and have been vetted by experts in the relevant fields.

[Figure: SEvenLLM data generation pipeline]

Our main data generation process is shown in the figure above. You can access detailed information and download them through this link.

Experimentation and Evaluation

Replication

You can replicate our work as follows. Replace the model base and dataset paths with your own file paths.

sh scripts/llama_train.sh  

sh scripts/infer.sh  

Evaluation

We employ five different methods to comprehensively evaluate the capabilities of the model: GPT-4 Score, Rouge-L Score, Semantic Score, multiple-choice questions, and human expert evaluation. You can refer to our code at code/score, and some of our results are available for reference at SevenLLM-result.
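To illustrate one of these metrics, the sketch below computes a Rouge-L score as the longest-common-subsequence (LCS) F-measure between a reference answer and a model answer. This is a minimal, self-contained illustration of the standard Rouge-L definition, not the repository's scoring code in code/score; the whitespace tokenization and the `beta` weighting are assumptions for the example.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    # Classic dynamic-programming table: dp[i][j] = LCS of a[:i] and b[:j].
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            if x == y:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l(reference: str, candidate: str, beta: float = 1.2) -> float:
    """Rouge-L F-measure; beta > 1 weights recall more heavily than precision."""
    # Simple whitespace tokenization (an assumption for this sketch).
    ref, cand = reference.split(), candidate.split()
    if not ref or not cand:
        return 0.0
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return (1 + beta**2) * precision * recall / (recall + beta**2 * precision)
```

A perfect match scores 1.0, fully disjoint answers score 0.0, and partial overlaps fall in between; averaging this score over a benchmark's reference/prediction pairs gives a corpus-level Rouge-L.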

Citation

If our work has been helpful to you, please cite us.

@article{ji2024sevenllm,
  title={SEvenLLM: Benchmarking, Eliciting, and Enhancing Abilities of Large Language Models in Cyber Threat Intelligence},
  author={Ji, Hangyuan and Yang, Jian and Chai, Linzheng and Wei, Chaoren and Yang, Liqun and Duan, Yunlong and Wang, Yunli and Sun, Tianzhen and Guo, Hongcheng and Li, Tongliang and others},
  journal={arXiv preprint arXiv:2405.03446},
  year={2024}
}

Contact

Your interest and attention to SEVENLLM are the greatest encouragement for us. If you have any questions or need further information, please feel free to contact us at the following email: [email protected]
