
Google AI Introduces AGREE: A Machine Learning Framework that Enables LLMs to Self-Ground the Claims in their Responses and Provide Precise Citations

Otto Williams

May 29, 2024

Exciting advancements from Google AI! Discover how the new AGREE framework enhances the accuracy and reliability of Large Language Models by enabling self-grounded responses with precise citations. At Spectro Agency, we're at the forefront of integrating cutting-edge AI solutions into our services. Join us and let's revolutionize the digital landscape together.

#ArtificialIntelligence #MachineLearning #LLMs #AIInnovation #TechNews #AIResearch #DigitalTransformation #AIFramework #TechAdvancements #FutureTech #DataScience #TechTrends #AIDevelopment #AIApplications #NaturalLanguageProcessing #AIAccuracy #AGREEFramework #AIProgress #DeepLearning #AICitations #AIQuality #TechSolutions #AIEnhancement #AIInEducation #AIInNews

Maintaining the accuracy of Large Language Models (LLMs), such as GPT, is crucial, particularly in cases requiring factual accuracy, like news reporting or educational content creation. Despite their impressive capabilities, LLMs are prone to generating plausible but nonfactual information, known as “hallucinations,” usually when faced with open-ended queries that require broad world knowledge.

Google AI Researchers introduced AGREE to address the issue of “hallucination,” where LLMs generate a response that is factually incorrect, nonsensical, or disconnected from the input prompt.

Existing approaches to preventing hallucinations in LLMs primarily include two methods: post-hoc citing and prompting-based grounding. Post-hoc citing involves adding citations after generating responses, often using natural language inference (NLI) models. However, this method relies heavily on the knowledge within the LLM’s embeddings and faces challenges with facts beyond its training data. While prompting-based grounding leverages the instruction-following and in-context learning capabilities of LLMs, it is often ineffective, particularly in real-world scenarios requiring high factual accuracy.
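The post-hoc citing idea can be sketched in a few lines. This is a toy illustration, not Google's implementation: the NLI model is replaced here by a crude word-overlap score, and all function names are hypothetical.

```python
import re

def toy_entailment_score(premise: str, hypothesis: str) -> float:
    """Stand-in for an NLI model: the fraction of hypothesis words that
    also appear in the premise (a real system would run a trained NLI model)."""
    premise_words = set(re.findall(r"\w+", premise.lower()))
    hyp_words = re.findall(r"\w+", hypothesis.lower())
    if not hyp_words:
        return 0.0
    return sum(w in premise_words for w in hyp_words) / len(hyp_words)

def post_hoc_cite(sentences, passages, threshold=0.6):
    """Attach to each already-generated sentence a citation to its
    best-supporting passage, if the entailment score clears the threshold."""
    cited = []
    for sent in sentences:
        scores = [toy_entailment_score(p, sent) for p in passages]
        best = max(range(len(passages)), key=scores.__getitem__)
        cited.append(f"{sent} [{best + 1}]" if scores[best] >= threshold else sent)
    return cited
```

Note how the sketch exposes the weakness described above: a claim with no supporting passage simply goes uncited, since citations are bolted on after generation rather than shaping the response itself.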

The proposed solution, AGREE (Adaptation for GRounding EnhancEment), introduces a learning-based framework that enables LLMs to self-ground their responses and provide accurate citations. AGREE takes a holistic approach by combining both learning-based adaptation and test-time adaptation (TTA).

During training, AGREE fine-tunes LLMs on synthetic data built from unlabeled queries, teaching them to self-ground their claims by adding citations to their responses. At test time, AGREE uses an iterative inference strategy that lets the LLM actively seek more information based on its self-generated citations and iteratively refine its answers.

At the training stage, AGREE collects synthetic data from unlabeled queries, retrieves relevant passages from reliable sources using a retriever model, and fine-tunes a base LLM to self-ground its claims. The fine-tuning process uses an NLI model to judge the support for each claim and add citations accordingly. Experiments across five datasets demonstrate AGREE's effectiveness in improving grounding and citation precision over baseline methods: AGREE outperforms prompting-based and post-hoc citing approaches, achieving relative improvements of over 30% in grounding quality. AGREE also generalizes to out-of-domain data, suggesting robustness across different question types, including questions requiring out-of-domain knowledge. The inclusion of TTA further improves both grounding and answer correctness.
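The data-construction step described above could be wired together as in the sketch below. Here `retrieve`, `answer_fn`, and `nli_supports` are hypothetical placeholders for the retriever model, the base LLM, and the NLI judge; the output format is an illustrative guess, not AGREE's actual training schema.

```python
def build_training_example(query, retrieve, answer_fn, nli_supports):
    """Assemble one synthetic fine-tuning example: retrieve passages for an
    unlabeled query, draft an answer, and cite each sentence that the NLI
    judge marks as supported by a retrieved passage."""
    passages = retrieve(query)
    sentences = answer_fn(query, passages)
    cited = []
    for sent in sentences:
        support = [i + 1 for i, p in enumerate(passages) if nli_supports(p, sent)]
        cited.append(sent + "".join(f" [{i}]" for i in support))
    return {"query": query, "passages": passages, "response": " ".join(cited)}
```

Fine-tuning on examples like these is what teaches the model to emit citations itself, rather than having them bolted on after the fact.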

In conclusion, AGREE effectively mitigates hallucination in LLMs by enhancing their factuality and verifiability. By enabling LLMs to self-ground their responses and provide accurate citations, AGREE improves their reliability, particularly in domains requiring high factual accuracy. Its combination of learning-based adaptation with test-time adaptation provides a strong solution that surpasses existing approaches and generalizes across a wide range of datasets. Overall, AGREE is a promising step toward reliable language models for real-world applications requiring high factual accuracy.

At Spectro Agency, we recognize the transformative power of advanced AI solutions like AGREE. Our high-end digital marketing, app creation, AI-powered solutions, chatbots, software creation, and website creation services leverage cutting-edge technologies to deliver exceptional results.

Visit us to discover how we can help your business thrive in the digital age.

Source: MarkTechPost
