Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models
Paper Link: https://arxiv.org/abs/2406.11736
Code Repo: https://github.com/xufangzhi/ENVISIONS
The self-training process is based on the LLaMA2-Chat model series and is powered by ENVISIONS. The work is still under review.
Write Python code to solve the question.
The question is: <question>
The solution code is:
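The template above can be filled in programmatically before querying the model. The snippet below is a minimal illustrative sketch; the `build_prompt` helper and the example question are assumptions, not part of the ENVISIONS codebase.

```python
# Illustrative sketch: formatting the prompt template shown above.
# `build_prompt` is a hypothetical helper, not an ENVISIONS API.

PROMPT_TEMPLATE = (
    "Write Python code to solve the question.\n"
    "The question is: {question}\n"
    "The solution code is:"
)

def build_prompt(question: str) -> str:
    """Insert a natural-language question into the prompt template."""
    return PROMPT_TEMPLATE.format(question=question)

prompt = build_prompt("What is the sum of the first 10 positive integers?")
print(prompt)
```

The resulting string would then be passed to the LLaMA2-Chat model, whose completion is expected to be Python code solving the question.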
If you find this work helpful, please kindly cite the paper:
@misc{xu2024interactive,
      title={Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models},
      author={Fangzhi Xu and Qiushi Sun and Kanzhi Cheng and Jun Liu and Yu Qiao and Zhiyong Wu},
      year={2024},
      eprint={2406.11736},
      archivePrefix={arXiv},
}