

Training Large Language Models with Reinforcement Learning and Verifiable Rewards

Training large language models (LLMs) hinges on accurate feedback signals, yet traditional reinforcement learning (RL) frequently grapples with unreliable reward signals. The quality of these signals not only affects how models learn but also shapes their decision-making processes. Developing robust feedback mechanisms is complex and fraught with pitfalls such as hidden biases, unintended incentives, and ambiguous success criteria. These issues often derail the learning process, resulting in models that behave unpredictably or do not meet desired objectives.

In this blog post, we’ll explore how to implement Reinforcement Learning with Verifiable Rewards (RLVR) to introduce verification and transparency into reward signals, thereby enhancing training performance. This methodology is particularly effective when outputs can be objectively verified, such as in mathematical reasoning, code generation, or symbolic manipulation tasks. We’ll use the GSM8K dataset of grade school math problems as our running example, though the techniques can be adapted to many other use cases.
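For context, each GSM8K solution ends with a `####` marker followed by the final numeric answer, which is exactly what makes its outputs objectively verifiable. A small helper (illustrative only; `extract_final_answer` is ours, not part of the dataset tooling) can recover that ground truth:

```python
import re

# A GSM8K-style answer: free-form reasoning followed by "#### <final answer>".
raw_answer = (
    "Natalia sold 48 clips in April and half as many in May, "
    "so she sold 48 / 2 = 24 clips in May.\n"
    "In total she sold 48 + 24 = 72 clips.\n"
    "#### 72"
)

def extract_final_answer(answer_text):
    """Pull the ground-truth number that follows the #### marker."""
    match = re.search(r"####\s*([\d,]+(?:\.\d+)?)", answer_text)
    return match.group(1).replace(",", "") if match else None

print(extract_final_answer(raw_answer))  # -> 72
```

Because the final answer is machine-extractable, a reward function can check correctness without any human in the loop.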

Technical Overview

Understanding RL concepts is crucial before implementing RLVR. At its core, RL addresses issues in model training through a structured feedback system, using reward signals to guide models toward optimal behaviors. The structured feedback enables highly adaptive learning, which is particularly beneficial for models interacting with users and adjusting their behavior based on outcomes.

Traditional RL has highlighted the importance of reward signal quality. Poorly designed reward functions can lead to "reward hacking," where models find unanticipated ways to maximize rewards without achieving intended goals. This recognition has spurred the development of more rigorous approaches focused on creating reliable and well-defined reward functions.

Introducing RLVR

RLVR tackles reward hacking through rule-based feedback defined by the practitioners tuning the model. By employing programmatic reward functions that automatically score outputs against explicit criteria, RLVR allows rapid iteration without the bottleneck of human ratings. These rewards are "verifiable" because they derive from objective, reproducible rules, which also keeps RLVR flexible as requirements evolve.

Group Relative Policy Optimization (GRPO) is an RL algorithm that stabilizes learning by scoring each response relative to a group of alternatives. For every prompt, the model samples several completions, and each completion’s advantage is computed against the group’s mean reward rather than against a separately learned value function, eliminating the need for a critic model. By combining RLVR with GRPO, we get a framework where automated, verifiable rewards supply the learning signal and group-relative optimization keeps updates stable and efficient.
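For intuition, GRPO’s group-relative baseline can be sketched in a few lines: several completions are sampled per prompt, and each reward is normalized against the group’s own mean and spread (an illustrative sketch, not the actual TRL internals):

```python
# Illustrative sketch of GRPO's group-relative baseline: for each prompt,
# several completions are sampled, and each completion's reward is compared
# against the group's own mean and standard deviation.
def group_relative_advantages(rewards):
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    # Avoid division by zero when every completion scored the same.
    return [(r - mean) / (std + 1e-8) for r in rewards]

# Four completions for one prompt: only the last two earned the full reward.
print(group_relative_advantages([0.0, 0.0, 1.5, 1.5]))
```

Completions that beat their group’s average get a positive advantage and are reinforced; the rest get a negative one, all without training a value network.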

Solution Overview

In this section, we will discuss how to fine-tune a Qwen2.5-0.5B model using Amazon SageMaker. SageMaker Training Jobs support distributed multi-GPU configurations, allowing you to swiftly train billion-parameter models and efficiently manage resources.

Prerequisites

Before diving into the implementation, make sure the following prerequisites are in place:

Environment Setup

You can use an IDE like Visual Studio Code or PyCharm. If you opt for Amazon SageMaker Studio, follow these steps to set up your JupyterLab environment:

  1. In the SageMaker console, navigate to Domains and open your domain.
  2. Choose Studio under Applications and IDEs.
  3. Launch an ml.t3.medium JupyterLab notebook instance with a minimum of 50 GB storage.

Next, clone the GitHub repository and navigate to the path 3_distributed_training/reinforcement-learning/grpo-with-verifiable-reward. Open the notebook named model-finetuning-grpo-rlvr.ipynb with Python 3.12 or higher.

Prepare the Dataset for Fine-Tuning

To use GRPO with RLVR effectively, you must prepare your dataset so that each example carries its ground-truth final answer, which the reward function will later verify. Execute the following code snippet to structure your data:

dataset = GSM8K(
    split="train",
    include_answer=False,
    include_reasoning=True,
    few_shot=True,
    num_shots=8,  # eight in-context examples per prompt
    seed=None,
    cot=True,
).dataset.shuffle(seed=42)

This preparation prepends few-shot examples to each prompt, showing the model what well-formatted solutions should look like and improving training performance.
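As an illustration of how this few-shot prompting works (the example questions and the `build_prompt` helper below are hypothetical, not the repository’s code, and use two shots rather than the eight configured above), solved examples are interleaved with the new question:

```python
# Hypothetical few-shot examples in the "#### The final answer is N" format.
FEW_SHOT_EXAMPLES = [
    ("If a pen costs 2 dollars, how much do 3 pens cost?",
     "3 pens cost 3 * 2 = 6 dollars.\n#### The final answer is 6"),
    ("Sam had 10 apples and ate 4. How many are left?",
     "Sam has 10 - 4 = 6 apples left.\n#### The final answer is 6"),
]

def build_prompt(question, shots=FEW_SHOT_EXAMPLES):
    """Prepend solved examples so the model imitates their format."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in shots]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

print(build_prompt("A train travels 60 miles per hour for 2 hours. How far does it go?"))
```

The trailing `Answer:` cue leaves the model to produce reasoning plus the final-answer line that the reward functions below will check.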

Verifiable Reward Functions

The GRPO implementation utilizes a dual-reward system offering objective, verifiable feedback:

  1. Format Reward Function: This function validates proper response structure based on specific patterns.

    import re

    def format_reward_func_qa(completions, **kwargs):
        pattern = r"\n#### The final answer is \d+"
        matches = [re.search(pattern, content) for content in completions]
        return [0.5 if match else 0.0 for match in matches]
  2. Correctness Reward Function: This function ensures mathematical accuracy by extracting answers and comparing them against ground truth values.

    def correctness_reward_func_qa(completions, final_answer, **kwargs):
        rewards = []
        for completion, ground_truth in zip(completions, final_answer):
            try:
                match = re.search(r'####.*?([\d,]+(?:\.\d+)?)', completion)
                if match:
                    # Normalize (strip thousands separators) and compare numerically.
                    answer = float(match.group(1).replace(",", ""))
                    rewards.append(1.0 if abs(answer - float(ground_truth)) < 1e-6 else 0.0)
                else:
                    rewards.append(0.0)
            except (ValueError, TypeError):
                rewards.append(0.0)
        return rewards
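Restated as a self-contained snippet, a quick check shows how the format reward scores completions: only outputs containing the expected final-answer line earn the 0.5 bonus.

```python
import re

def format_reward_func_qa(completions, **kwargs):
    """Reward 0.5 when the completion contains the expected final-answer line."""
    pattern = r"\n#### The final answer is \d+"
    matches = [re.search(pattern, content) for content in completions]
    return [0.5 if match else 0.0 for match in matches]

good = "Half of 48 is 24, so the total is 72.\n#### The final answer is 72"
bad = "The answer is 72."
print(format_reward_func_qa([good, bad]))  # -> [0.5, 0.0]
```

Rewarding format separately from correctness gives the model a partial signal early in training, before it reliably solves the math.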

Integrating RLVR with GRPO

You integrate these reward functions into the GRPO training pipeline via the GRPOTrainer class:

trainer = GRPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
    reward_funcs=[format_reward_func_qa, correctness_reward_func_qa],
)
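The `training_args` passed above are a TRL `GRPOConfig`. A minimal sketch with illustrative values follows; the notebook’s actual settings may differ, and every number here is an assumption, not the repository’s configuration:

```python
from trl import GRPOConfig

# Illustrative values only -- the repository's notebook defines its own settings.
training_args = GRPOConfig(
    output_dir="qwen2.5-0.5b-grpo-gsm8k",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    num_generations=8,          # completions sampled per prompt (the "group")
    max_completion_length=512,  # room for the reasoning plus the answer line
    num_train_epochs=1,
    logging_steps=10,
    bf16=True,
)
```

`num_generations` is the key GRPO knob: it sets the group size against which each completion’s advantage is computed.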

During the training phase, the model generates multiple outputs for each problem. The rewards for each response are then computed, and the model is optimized based on each response’s performance relative to its group’s baseline.

Execution and Results

After evaluating the model on 100 test samples, the 8-shot GRPO-trained model achieved 41% accuracy, a significant leap from the base model’s 11%. This improvement underscores GRPO’s strengths in leveraging group comparisons for enhanced reasoning capabilities.
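For intuition, accuracy figures like these can be tallied with a small helper (illustrative; `extract_answer` mirrors the reward function’s regex rather than the repository’s exact evaluation code):

```python
import re

def extract_answer(completion):
    """Pull the predicted number that follows the #### marker, commas stripped."""
    match = re.search(r"####.*?([\d,]+(?:\.\d+)?)", completion)
    return match.group(1).replace(",", "") if match else None

def accuracy(completions, ground_truths):
    """Fraction of completions whose extracted answer matches the ground truth."""
    correct = sum(
        extract_answer(c) == str(g) for c, g in zip(completions, ground_truths)
    )
    return correct / len(ground_truths)

preds = ["#### The final answer is 72", "#### The final answer is 10"]
print(accuracy(preds, [72, 9]))  # -> 0.5
```

Because the same extraction logic drives both the reward and the evaluation, the metric being optimized is the metric being reported.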

Extending RLVR to Other Domains

The RLVR framework is not limited to mathematical reasoning. It can be adapted to other fields with objective verification such as:

  • Code Generation: Utilizing execution-based rewards where correctness is checked against successful compilation and completion of unit tests.
  • Domain-Specific Text Generation: Implementing keyword-based rewards, particularly useful in specialized fields like healthcare, ensuring outputs adhere to specific terminologies and semantic structures.
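As a concrete illustration of the first bullet, an execution-based reward might look like the sketch below. This is our own example, not the post’s implementation: it assumes each completion defines a `solution` function, and a production system would sandbox the `exec` call rather than run untrusted code in-process.

```python
# Illustrative execution-based reward: a code completion earns 1.0 only if it
# defines a function that passes every unit test; crashes or wrong results
# score 0.0. Real systems must sandbox this execution.
def execution_reward(completions, test_cases, **kwargs):
    rewards = []
    for code in completions:
        try:
            namespace = {}
            exec(code, namespace)  # run the generated code
            passed = all(namespace["solution"](x) == y for x, y in test_cases)
            rewards.append(1.0 if passed else 0.0)
        except Exception:
            rewards.append(0.0)  # failure to run is verifiably wrong
    return rewards

good = "def solution(n):\n    return n * 2"
bad = "def solution(n):\n    return n + 1"
print(execution_reward([good, bad], [(2, 4), (5, 10)]))  # -> [1.0, 0.0]
```

As with the math rewards, the signal is fully programmatic: unit tests either pass or they don’t, so there is nothing for the model to game short of actually solving the task.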

Conclusion

In this exploration, we’ve illustrated how to train a Qwen2.5-0.5B model using GRPO with verifiable rewards, significantly improving performance for mathematical reasoning tasks. The success of this training approach highlights its potential for wider applications across various domains demanding accurate and verifiable outputs.

For more information and detailed implementation guidance, visit the SageMaker AI documentation. You can also find the complete code referenced in this post on GitHub.


About the Authors:

Surya Kari, Giuseppe Zappia, and Amin Dashti bring a wealth of experience in machine learning and AI, specializing in optimizing language models for various applications at AWS. Their collaborative efforts aim to help customers navigate the complexities of AI model training and performance enhancement.
