SAAS - Solving Ability Amplification Strategy for Enhanced Mathematical Reasoning in Large Language Models

26/June/2024 | Written By: YoungHoon Jeon, Hyeonwoo Kim, Gyoungjin Gim, Yungi Kim, Jihoo Kim, Byungju Kim, Wonseok Lee, Chanjun Park
 

We are pleased to announce the publication of our latest research paper, "SAAS: Solving Ability Amplification Strategy for Enhanced Mathematical Reasoning in Large Language Models," a collaborative effort among Upstage AI, Mathpresso Inc., and KT Corp. This paper introduces a novel learning approach designed to significantly enhance the mathematical reasoning and problem-solving abilities of Large Language Models (LLMs). Through the strategic integration of Chain-of-Thought (CoT) and Program-of-Thought (PoT) learning methodologies, SAAS marks a significant advancement in the field of AI-driven mathematical reasoning.

What is SAAS?

SAAS (Solving Ability Amplification Strategy) is a sequential learning strategy aimed at first enhancing logical reasoning skills through CoT learning, followed by bolstering problem-solving capabilities with PoT learning. CoT learning involves decomposing complex problems into intermediate reasoning steps, which enhances logical thinking but can lead to arithmetic errors with large numbers. PoT learning, conversely, represents these reasoning steps as code, delegating computations to a code interpreter, thereby reducing calculation errors but potentially lacking in logical reasoning. SAAS effectively combines these two methodologies, initially improving logical reasoning with CoT learning and subsequently enhancing problem-solving abilities with PoT learning.
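The contrast between the two rationale styles can be sketched in a toy example (the problem and variable names here are illustrative, not from the paper). A CoT rationale spells out the reasoning in natural language and leaves the final arithmetic to the model, while a PoT rationale expresses the same steps as code and hands the computation to an interpreter:

```python
# Problem: "A store sells 1,247 items at $38 each. What is the total revenue?"

# CoT-style rationale: reasoning is written out in natural language, and the
# final arithmetic is produced by the model itself, where large numbers
# invite calculation mistakes.
cot_rationale = (
    "Step 1: Each item costs $38.\n"
    "Step 2: There are 1,247 items.\n"
    "Step 3: Total revenue = 1247 * 38 = 47386."  # model must compute this
)

# PoT-style rationale: the same steps become executable code, so the
# computation is delegated to a code interpreter and arithmetic slips
# are eliminated.
pot_rationale = "items = 1247\nprice = 38\nanswer = items * price"

namespace = {}
exec(pot_rationale, namespace)  # the interpreter performs the arithmetic
print(namespace["answer"])  # 47386
```

The trade-off the paper targets is visible even here: the CoT string makes the logic explicit but is only as accurate as the model's arithmetic, whereas the PoT code is exact but carries less of the step-by-step reasoning.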

Overview of SAAS with its two core strategies: 1) the sequential learning strategy; 2) the cognitive retention strategy

Architecture of SAAS

The SAAS architecture is meticulously designed to optimize both reasoning and computational capabilities of LLMs through a structured learning process. Here is a detailed breakdown of its components:

Chain-of-Thought Learning (CoT)

  • Objective: To enhance the logical reasoning ability of LLMs.

  • Method: The model is trained on data comprising CoT rationales, where each problem is decomposed into a series of intermediate steps that mimic human problem-solving processes. This phase focuses on minimizing the negative log-likelihood to optimize the model parameters for generating consistent and logically sound reasoning steps.

  • Outcome: The model learns to produce logical, step-by-step explanations, setting a solid foundation for complex mathematical reasoning.
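The CoT-phase objective described above, minimizing the negative log-likelihood of the rationale tokens, can be illustrated with a minimal sketch. The probabilities and function name below are hypothetical stand-ins for a model's per-token probability of the gold next token, not the paper's implementation:

```python
import math

def nll_loss(predicted_probs):
    """Average negative log-likelihood over the tokens of a CoT rationale.

    `predicted_probs` stands in for the model's probability assigned to
    each correct next token given its prefix (illustrative values only).
    """
    return -sum(math.log(p) for p in predicted_probs) / len(predicted_probs)

# A model that is confident in the gold reasoning steps incurs low loss;
# an uncertain one incurs high loss, so minimizing this objective pushes
# the model toward consistent, logically sound step-by-step rationales.
confident = nll_loss([0.9, 0.8, 0.95])
uncertain = nll_loss([0.4, 0.3, 0.5])
print(confident < uncertain)  # True
```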

Program-of-Thought Learning (PoT)

  • Objective: To improve problem-solving ability by reducing computational errors.

  • Method: Building upon the logical reasoning skills developed in the CoT phase, the model is further trained using PoT rationales, where reasoning steps are expressed as code. This phase incorporates a cognitive retention strategy, integrating a mix of CoT and PoT rationales to prevent the model from forgetting its previously acquired reasoning skills.

  • Outcome: The model achieves enhanced computational accuracy while retaining its logical reasoning capabilities, leading to superior performance in solving challenging mathematical problems.
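The cognitive retention strategy mentioned above can be sketched as a data-mixing step: during the second (PoT) phase, a fraction of CoT examples is folded back into the training set so the reasoning skills from the first phase are not forgotten. The mixing ratio and helper names here are assumptions for illustration, not the paper's exact recipe:

```python
import random

def build_phase2_data(pot_examples, cot_examples, cot_fraction=0.2, seed=0):
    """Return PoT-phase training data with a slice of CoT examples retained.

    `cot_fraction` controls how many CoT examples (relative to the PoT set)
    are mixed back in; the value 0.2 is a hypothetical choice.
    """
    rng = random.Random(seed)
    n_cot = int(len(pot_examples) * cot_fraction)
    retained = rng.sample(cot_examples, min(n_cot, len(cot_examples)))
    mixed = pot_examples + retained
    rng.shuffle(mixed)  # interleave so CoT examples appear throughout training
    return mixed

pot = [f"pot_{i}" for i in range(100)]
cot = [f"cot_{i}" for i in range(100)]
data = build_phase2_data(pot, cot)
print(len(data))  # 120: all 100 PoT examples plus 20 retained CoT examples
```

Interleaving the two rationale styles, rather than training on PoT data alone, is what guards against catastrophic forgetting of the CoT-phase reasoning ability.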

Research Findings

Our empirical studies have demonstrated that SAAS achieves state-of-the-art (SOTA) performance across several benchmarks, including GSM8K and MATH. Notably, SAAS has outperformed all other models of similar size, highlighting the effectiveness of our sequential learning approach.

Accuracies of different learning strategies. All improvements are statistically significant with p-value ≤ 0.001.

The SAAS models consistently delivered superior results, indicating a significant improvement over existing methodologies. For instance, the SAAS model with 10.7B parameters performed comparably to 70B models, underscoring the efficiency of our approach.

Conclusion

In conclusion, SAAS represents a major advancement in enhancing the mathematical reasoning and problem-solving abilities of Large Language Models. By combining the strengths of Chain-of-Thought and Program-of-Thought learning, SAAS offers a comprehensive solution to the challenges faced in mathematical problem-solving. Our research demonstrates that a sequential learning approach, complemented by a cognitive retention strategy, significantly amplifies the solving abilities of LLMs, marking a milestone in the field of AI and mathematical reasoning.

Discover how SAAS can enhance your LLM's mathematical reasoning and problem-solving capabilities. Connect with our experts to learn more!
