ScholarCopilot: Training Large Language Models for Academic Writing with Accurate Citations

Yubo Wang¹, Xueguang Ma¹, Ping Nie³, Huaye Zeng¹, Zhiheng Lyu¹, Yuxuan Zhang¹, Benjamin Schneider¹, Yi Lu¹, Xiang Yue², Wenhu Chen¹
¹University of Waterloo, ²Carnegie Mellon University, ³Independent Researcher
yubo.wang.sunny@gmail.com, wenhuchen@uwaterloo.ca

Abstract

Academic writing requires both coherent text generation and precise citation of relevant literature. Although recent Retrieval-Augmented Generation (RAG) systems have significantly improved factual accuracy in general-purpose text generation, their ability to support professional academic writing remains limited. In this work, we introduce ScholarCopilot, a unified framework designed to enhance existing large language models for generating professional academic articles with accurate and contextually relevant citations. ScholarCopilot dynamically determines when to retrieve scholarly references by generating a retrieval token [RET], which is then used to query a citation database. The retrieved references are fed into the model to augment the generation process. We jointly optimize both the generation and citation tasks within a single framework to improve efficiency. Our model is built upon Qwen-2.5-7B and trained on 500K papers from arXiv. It achieves a top-1 retrieval accuracy of 40.1% on our evaluation dataset, outperforming baselines such as E5-Mistral-7B-Instruct (15.0%) and BM25 (9.8%). On a dataset of 1,000 academic writing samples, ScholarCopilot scores 16.2/25 in generation quality--measured across relevance, coherence, academic rigor, completeness, and innovation--significantly surpassing all existing models, including much larger ones like the Retrieval-Augmented Qwen2.5-72B-Instruct. Human studies further demonstrate that ScholarCopilot, despite being a 7B model, significantly outperforms ChatGPT, achieving 100% preference in citation quality and over 70% in overall usefulness.


Figure 1: Comparison of traditional Retrieval-Augmented Generation (RAG) systems and our proposed ScholarCopilot. Traditional RAG systems (left) separately perform retrieval and generation, leading to representation misalignment. In contrast, ScholarCopilot (right) dynamically generates retrieval tokens ([RET]) during text generation for integrated and context-aware reference retrieval.

Traditional RAG vs. ScholarCopilot

We introduce ScholarCopilot, an agentic RAG framework for academic writing that dynamically integrates text generation and citation retrieval. Unlike traditional approaches with separate retrieval and generation stages, our system generates special retrieval tokens ([RET]) based on evolving context, pauses generation to retrieve relevant references, and incorporates their content into subsequent steps. Retrieval token representations are optimized through contrastive learning for efficient similarity search. ScholarCopilot also supports optional user refinement during the iterative process, enhancing citation accuracy and content coherence without additional overhead.
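The interleaved generate-retrieve loop described above can be sketched as follows. This is a minimal illustration, not the actual implementation: `embed` is a toy stand-in for the hidden-state representation of the `[RET]` token, and `model_step` is a hypothetical callable that returns the next token (or `None` at end of text).

```python
import numpy as np

RET = "[RET]"

def embed(text, dim=8):
    """Toy deterministic bag-of-words embedding; stands in for the
    learned representation of the [RET] token in the real model."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def retrieve(query_text, citation_db, top_k=1):
    """Rank citation entries by similarity to the query embedding."""
    q = embed(query_text)
    return sorted(citation_db, key=lambda c: -float(q @ embed(c)))[:top_k]

def generate_with_citations(model_step, prompt, citation_db, max_steps=50):
    """Interleave generation and retrieval: whenever the model emits [RET],
    pause, fetch the best-matching reference, and splice it into the
    context so subsequent generation can condition on it."""
    context = prompt
    for _ in range(max_steps):
        token = model_step(context)          # next "token" (here: a word)
        if token is None:                    # model signals end of text
            break
        if token == RET:
            ref = retrieve(context, citation_db, top_k=1)[0]
            context += f" [CITE: {ref}]"     # feed retrieved reference back
        else:
            context += " " + token
    return context
```

In the real system the query vector comes from the model's own hidden state at the `[RET]` position, so retrieval and generation share one representation space.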


Dataset Curation

We built a large-scale dataset of 500K arXiv computer science papers, with 10M citations matched from arXiv and 6.8M from Semantic Scholar (papers may be cited multiple times). Dataset creation involved five stages: 1) paper collection, 2) structure parsing, 3) citation extraction, 4) reference matching, and 5) dataset integration. Each paper averages 38 citations with 87% successfully matched to academic databases.
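Stage 4 (reference matching) hinges on normalizing bibliography entries so that formatting differences do not block a match. The helper below is a simplified sketch of title-based matching, under the assumption that titles are the matching key; the actual pipeline may use richer metadata.

```python
import re

def normalize_title(title):
    """Lowercase, strip punctuation, and collapse whitespace so minor
    formatting differences between bibliographies don't block a match."""
    title = re.sub(r"[^a-z0-9 ]", " ", title.lower())
    return " ".join(title.split())

def match_references(extracted_titles, database_titles):
    """Map each extracted citation title to a database entry, or None
    when no normalized match exists."""
    index = {normalize_title(t): t for t in database_titles}
    return {t: index.get(normalize_title(t)) for t in extracted_titles}
```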


Training Method

ScholarCopilot jointly optimizes generation and citation retrieval through two objectives: next token prediction for text generation and contrastive learning for citation retrieval. For generation, it uses standard autoregressive language modeling to maximize token likelihood conditioned on previous tokens and retrieved content. For retrieval, it employs contrastive learning to optimize retrieval token representations, increasing similarity between these tokens and relevant citations while decreasing similarity with irrelevant ones. Positive citations come from ground-truth papers, while negative examples are obtained through in-batch sampling. The system minimizes a combined loss function (L_total = L_g + L_r).
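The two objectives can be sketched numerically as below. This is a NumPy illustration of the loss computation only (no gradients): `lm_loss` is the standard next-token cross-entropy L_g, and `contrastive_loss` is an in-batch InfoNCE loss L_r in which each `[RET]` embedding is pulled toward its ground-truth citation while the other citations in the batch serve as negatives. The temperature value is an assumption for illustration.

```python
import numpy as np

def lm_loss(token_logits, target_ids):
    """Average next-token cross-entropy (generation objective L_g)."""
    logits = token_logits - token_logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(target_ids)), target_ids])

def contrastive_loss(ret_embs, cite_embs, temperature=0.05):
    """In-batch InfoNCE (retrieval objective L_r): each [RET] embedding
    should score highest against its own ground-truth citation; other
    citations in the batch act as negatives."""
    ret = ret_embs / np.linalg.norm(ret_embs, axis=1, keepdims=True)
    cit = cite_embs / np.linalg.norm(cite_embs, axis=1, keepdims=True)
    sims = ret @ cit.T / temperature            # (batch, batch) similarities
    sims = sims - sims.max(axis=1, keepdims=True)
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))         # diagonal = positive pairs

def total_loss(token_logits, target_ids, ret_embs, cite_embs):
    """Combined objective L_total = L_g + L_r."""
    return lm_loss(token_logits, target_ids) + contrastive_loss(ret_embs, cite_embs)
```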


Generation Quality Evaluation

We compare the generation quality of different baseline models. Key findings: (1) ScholarCopilot scores 16.21/25, outperforming models with 10x more parameters, (2) Shows particular strength in Relevance (3.63) and Coherence (3.66), comparable to 72B-parameter models, (3) Significantly improves Academic Rigor (2.87 vs. 2.26) through our unified generation and citation approach.


Citation Accuracy Evaluation

We compare citation retrieval performance across methods. ScholarCopilot substantially outperforms baselines such as E5-Mistral-7B-Instruct and BM25, achieving 40.1% top-1 recall and 64.8% recall@10.
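Recall@k, the metric reported above, is the fraction of queries whose ground-truth citation appears among the top k retrieved candidates. A minimal implementation:

```python
def recall_at_k(ranked_lists, gold_ids, k):
    """Fraction of queries whose gold citation appears in the top-k
    results. ranked_lists[i] is the ranked candidate list for query i;
    gold_ids[i] is that query's ground-truth citation id."""
    hits = sum(1 for ranked, gold in zip(ranked_lists, gold_ids)
               if gold in ranked[:k])
    return hits / len(gold_ids)
```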


Human Study

To evaluate ScholarCopilot's practical utility, we conducted a user study with 10 academic participants (5 PhD students, 4 master's students, 1 undergraduate) with an average of 4.2 years of academic writing experience. Participants drafted academic sections using our system and rated it across multiple dimensions. ScholarCopilot received its highest scores for citation accuracy (4.6/5), interface clarity (4.5/5), and writing style (4.5/5), with citation quality metrics averaging 4.3/5. User experience averaged 3.9/5, with response time rated lowest (3.3/5) due to single-GPU deployment constraints. Content quality metrics showed strong performance in writing style (4.5/5) and factual accuracy (4.3/5), while innovation scored lowest (2.5/5), suggesting the system excels at generating academically sound content but is less suited to proposing novel ideas.


Reference

Please cite our paper if you use our code or results:
@article{wang2024scholarcopilot,
  title={ScholarCopilot: Training Large Language Models for Academic Writing with Accurate Citations},
  author={Wang, Yubo and Ma, Xueguang and Nie, Ping and Zeng, Huaye and Lyu, Zhiheng and Zhang, Yuxuan and Schneider, Benjamin and Lu, Yi and Yue, Xiang and Chen, Wenhu},
  journal={arXiv preprint arXiv:2504.00824},
  year={2025}
}