In recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning. However, even state-of-the-art models still produce logical mistakes, often called hallucinations. Mitigating hallucinations is a critical step towards building aligned AGI.
We can train reward models to detect hallucinations using either outcome supervision, which provides feedback based on a final result, or process supervision, which provides feedback for each individual step in a chain-of-thought. Building on previous work[^reference-1], we conduct a detailed comparison of these two methods using the MATH dataset[^reference-2] as our testbed. We find that process supervision leads to significantly better performance, even when judged by outcomes. To encourage related research, we release our full dataset of process supervision labels.
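To make the distinction concrete, here is a minimal sketch in Python of how the two supervision signals differ in granularity. This is an illustration, not the implementation from the paper; all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Solution:
    """A chain-of-thought solution: a list of steps plus a final answer."""
    steps: list[str]
    final_answer: str

def outcome_labels(solution: Solution, correct_answer: str) -> list[int]:
    # Outcome supervision: a single label for the whole solution,
    # determined only by whether the final answer matches.
    return [int(solution.final_answer == correct_answer)]

def process_labels(step_judgments: list[bool]) -> list[int]:
    # Process supervision: one label per step, e.g. from human annotators
    # marking each individual step as correct or incorrect.
    return [int(ok) for ok in step_judgments]
```

A reward model trained on the first signal only ever learns whether the end result was right; one trained on the second receives feedback at every step of the chain-of-thought.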
Alignment impact
Process supervision has several alignment advantages over outcome supervision. It directly rewards the model for following an aligned chain-of-thought, since each step in the process receives precise supervision. Process supervision is also more likely to produce interpretable reasoning, since it encourages the model to follow a human-approved process. In contrast, outcome supervision may reward an unaligned process, and it is generally harder to scrutinize.
In some cases, safer methods for AI systems can lead to reduced performance[^reference-3], a cost which is known as an alignment tax. In general, any alignment tax may hinder the adoption of alignment methods, due to pressure to deploy the most capable model. Our results below show that process supervision in fact incurs a negative alignment tax, at least in the math domain. This could increase the adoption of process supervision, which we believe would have positive alignment side-effects.
Solving MATH problems
We evaluate our process-supervised and outcome-supervised reward models using problems from the MATH test set. We generate many solutions for each problem and then pick the solution ranked the highest by each reward model. The graph shows the percentage of chosen solutions that reach the correct final answer, as a function of the number of solutions considered. Not only does the process-supervised reward model perform better across the board, but the performance gap widens as we consider more solutions per problem. This shows us that the process-supervised reward model is much more reliable.
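The selection procedure described here is best-of-n sampling. A rough sketch of the evaluation loop, assuming hypothetical `sample_solution`, `reward_model`, and `is_correct` callables (none of these correspond to a released API):

```python
def best_of_n_accuracy(problems, n, sample_solution, reward_model, is_correct):
    """Fraction of problems solved when we sample n candidate solutions
    per problem and keep the one the reward model scores highest."""
    solved = 0
    for problem in problems:
        candidates = [sample_solution(problem) for _ in range(n)]
        best = max(candidates, key=reward_model)
        solved += is_correct(problem, best)
    return solved / len(problems)

# Sweeping n gives accuracy as a function of the number of solutions
# considered per problem, i.e. the kind of curve described above:
# accuracies = {n: best_of_n_accuracy(test_set, n, ...) for n in (1, 10, 100)}
```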
We showcase 10 problems and solutions below, along with commentary about the reward model’s strengths and weaknesses.
This challenging trigonometry problem requires applying several identities in a not-at-all obvious succession. Most solution attempts fail, because it is hard to choose which identities are actually helpful. Although GPT-4 usually can’t solve this problem (only $0.1\%$ of solution attempts reach the correct answer), the reward model correctly recognizes that this solution is valid.

Problem: Simplify $\tan 100^\circ + 4 \sin 100^\circ.$
I notice that the angle 100 degrees is not a special angle, so I will need to use some trigonometric identities to simplify this expression.
I recall that $\tan \theta = \frac{\sin \theta}{\cos \theta}$ and $\sin 2\theta = 2 \sin \theta \cos \theta$.
I can use these identities to rewrite the expression as $\frac{\sin 100^\circ}{\cos 100^\circ} + 4 \sin 100^\circ$.
I can combine the two terms over a common denominator: $\frac{\sin 100^\circ + 4 \sin 100^\circ \cos 100^\circ}{\cos 100^\circ}$.
I can simplify the numerator by using the double angle identity again, but in reverse: $4 \sin 100^\circ \cos 100^\circ = 2 \sin 200^\circ$, which gives $\frac{\sin 100^\circ + 2 \sin 200^\circ}{\cos 100^\circ}$.
I notice that 200 degrees is the same as 180 degrees plus 20 degrees, so I can use the sum angle identity for sine.
I get $\frac{\sin 100^\circ + 2 (\sin 180^\circ \cos 20^\circ + \cos 180^\circ \sin 20^\circ)}{\cos 100^\circ}$.
I recall that $\sin 180^\circ = 0$ and $\cos 180^\circ = -1$, so I can simplify further: the expression becomes $\frac{\sin 100^\circ - 2 \sin 20^\circ}{\cos 100^\circ}$.
I notice that 100 degrees is the same as 90 degrees plus 10 degrees, so I can use the sum angle identity for sine and cosine.
I get $\frac{(\sin 90^\circ \cos 10^\circ + \cos 90^\circ \sin 10^\circ) - 2 \sin 20^\circ}{\cos 90^\circ \cos 10^\circ - \sin 90^\circ \sin 10^\circ}$.
I recall that $\sin 90^\circ = 1$ and $\cos 90^\circ = 0$, so I can simplify further: the expression becomes $\frac{\cos 10^\circ - 2 \sin 20^\circ}{-\sin 10^\circ} = \frac{2 \sin 20^\circ - \cos 10^\circ}{\sin 10^\circ}$.
I notice that 20 degrees is the same as 30 degrees minus 10 degrees, so I can use the difference angle identity for sine.
I get $\frac{2 (\sin 30^\circ \cos 10^\circ - \cos 30^\circ \sin 10^\circ) - \cos 10^\circ}{\sin 10^\circ}$.
I recall that $\sin 30^\circ = \frac{1}{2}$ and $\cos 30^\circ = \frac{\sqrt{3}}{2}$, so I can simplify further.
I get $\frac{2 (\frac{1}{2} \cos 10^\circ - \frac{\sqrt{3}}{2} \sin 10^\circ) - \cos 10^\circ}{\sin 10^\circ} = \frac{\cos 10^\circ - \sqrt{3} \sin 10^\circ - \cos 10^\circ}{\sin 10^\circ} = \frac{-\sqrt{3} \sin 10^\circ}{\sin 10^\circ} = -\sqrt{3}$.
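For readers who want to sanity-check the final value, the identity $\tan 100^\circ + 4 \sin 100^\circ = -\sqrt{3}$ can be verified numerically; this quick check is ours, not part of the model's solution:

```python
import math

deg = math.pi / 180  # degrees-to-radians conversion factor
value = math.tan(100 * deg) + 4 * math.sin(100 * deg)
print(value, -math.sqrt(3))  # both ≈ -1.7320508075688772
assert math.isclose(value, -math.sqrt(3))
```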
It is unknown how broadly these results will generalize beyond the domain of math, and we consider it important for future work to explore the impact of process supervision in other domains. If these results generalize, we may find that process supervision gives us the best of both worlds – a method that is both more performant and more aligned than outcome supervision.