Oracles and consensus – not the same ballgame
In PoW and PoS consensus mechanisms, dishonest behavior is clearly defined and objectively verifiable. For instance, attempting to double-spend in Bitcoin or proposing an invalid block in Ethereum triggers deterministic penalties, such as block orphaning or stake slashing. These systems rely on binary, deterministic rules: a block is either valid or invalid, leaving little room for ambiguity. Honest participants avoid penalties and receive probabilistic rewards proportional to their hashrate or stake. Oracles, however, function differently.
Assume, for now, that the concept of a “ground truth” (GT) price exists; we will discuss this philosophical question in another post. In a permissionless oracle system, anyone with a stake can participate. For A1 to hold, we must ask: what incentivizes honest reporting? The answer is usually “to avoid stake loss”, so choosing how to slash participants is a critical design task. Protocols tend to penalize submissions that deviate significantly from an aggregated result, typically the median. The median is chosen because, if A1 holds, the final output is guaranteed to lie within the range of honest votes. However, distinguishing honest from dishonest submissions is difficult when votes are continuous and non-deterministic, so voter behaviour and incentives must be considered very carefully.
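To make this concrete, here is a minimal sketch of median aggregation with deviation-based slashing. The 2% threshold and the `settle_round` helper are illustrative assumptions, not taken from any specific protocol:

```python
import statistics

# Illustrative parameter: slash votes deviating more than 2% from the median.
SLASH_THRESHOLD = 0.02

def settle_round(votes: dict[str, float]) -> tuple[float, set[str]]:
    """Aggregate a round of price votes and flag which nodes get slashed."""
    median = statistics.median(votes.values())
    slashed = {
        node for node, price in votes.items()
        if abs(price - median) / median > SLASH_THRESHOLD
    }
    return median, slashed

# "d" reports far below the others and falls outside the tolerance band.
votes = {"a": 100.1, "b": 99.8, "c": 100.3, "d": 97.0}
median, slashed = settle_round(votes)
```

Note that the rule only compares votes to each other: nothing in `settle_round` references the GT, which is exactly the weakness discussed below.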
The oracle's temptation
Lying is more tempting in price oracles because of the game’s setup: if the oracle itself doesn’t know the price, it will struggle to detect subtle lies. A basic attack involves a malicious actor submitting a slightly skewed price that influences the aggregated result without deviating enough to trigger slashing. Researchers might argue that the range of honest votes is tight and the median always lies within it, so it doesn’t matter what attackers do. While this may hold in ideal scenarios, what happens if, one day, honest players have large observation errors? And what incentivizes honesty enough to guarantee a tight honest range in the first place? If I can skew the median without incurring losses, I have an incentive to do so, meaning honesty is not a Nash equilibrium. In expectation, deviating by a small ε relative to the width of the honest bell (the distribution of honest votes around a presumed GT) increases the reward.
Let’s imagine an attack. Consider 100 players providing price feeds, 30 of which collude. In a consensus mechanism like Flare’s, these nodes would all be slashed for publishing malicious blocks. In an oracle, however, they can submit coordinated, incorrect prices that manipulate the median, provided their votes sit strategically close to the boundary of the honest bell. Alarmingly, this can cause honest nodes to be flagged as outliers when their truthful submissions fall too far from the skewed median. If the process repeats, with colluding nodes consistently skewing prices upward, honest nodes may come to believe their votes are systematically undervalued relative to the GT. To compensate, they might vote higher, allowing attackers to raise their submissions further and creating a spiral of oracle inflation. This property is known as “translation invariance”: the game does not depend on how close the votes are to the GT. For a given distribution of votes, the reward is independent of the actual price the oracle outputs. So if everyone lies and votes BTC = $1M, some participants still get rewarded, because the oracle never sees the GT; it only sees how participants vote.
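The spiral above can be sketched as a toy simulation, under the assumed behavioural rule that honest nodes drift their working estimate toward each round’s median. All numbers are illustrative:

```python
import random
import statistics

random.seed(1)
GT = 100.0          # the true price never moves (illustrative)
SIGMA = 0.5         # honest observation noise (illustrative)
N_HONEST, N_COLLUDE = 70, 30
ROUNDS = 50

belief = GT         # honest nodes' working estimate of the price
medians = []
for _ in range(ROUNDS):
    honest = [random.gauss(belief, SIGMA) for _ in range(N_HONEST)]
    # Colluders stack their votes just inside the upper edge of the
    # honest bell, so no single vote looks like an outlier.
    collude = [belief + 2 * SIGMA] * N_COLLUDE
    med = statistics.median(honest + collude)
    medians.append(med)
    # Assumed behaviour: honest nodes, seeing the median consistently
    # above their own votes, adjust their belief toward it. Translation
    # invariance means nothing anchors the game back to the GT.
    belief = med
```

Run this and `medians` climbs round after round while the GT stays at 100: the colluders never deviate far enough to be slashed, yet the oracle’s output inflates without bound.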
Hence, A1 is a strictly stronger assumption for oracles than for consensus mechanisms. Not only is it harder to justify, but I hope to show in future research that it should not be expected to hold at all.