How proper scoring rules are like order books

Intended audience: my own later reference. Might be useful for traders thinking about general scoring rules and their constraints.


(1)

The following proposition is paraphrased from Savage 1971 (pdf), the original paper that introduced proper scoring rules way back when. Savage characterized the space of proper scoring rules in two ways -- in the now-common form using a general convex function and its subgradient, and in the equivalent "schedule of demands" form that has a natural interpretation as being given the opportunity to trade into a limit order book.

Proposition 1. Any (strictly) proper scoring rule \(S(p,x)\) can be written as \(S(p,x)=\alpha+S^*(p,x)\), where \(\alpha\in\mathbb R\) is a constant and \(S^*(p,x)\) is the profits-or-losses from trading into a limit order book (with nonzero size available at each price) when your initial position is some \(\beta\), your fair value is \(p\), and the contract resolves to \(x\).

Proof. Let \(\phi:[0,1]\to\mathbb R\) be a (strictly) monotone-increasing function with \(\phi(0)<0\) and \(\phi(1)>0\). We will interpret \(\phi\) as a limit order book, with prices \(a\in[0,1]\):

  • \(\phi(a)\) is the cumulative size offered (available to buy) at or below \(a\).
  • \(-\phi(a)\) is the cumulative size bid (available to sell) at or above \(a\).
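To make the interpretation concrete, here's a minimal sketch (made-up prices and sizes, and a discrete book, so \(\phi\) is a step function rather than the strictly-increasing version needed for Proposition 1):

```python
# Minimal sketch: a discrete limit order book on a contract settling in [0, 1],
# and the cumulative-size function phi(a) it induces.
# Sign convention: offered size (available to buy) is positive, bid size is negative.

# (price, signed size) levels -- illustrative numbers only.
BOOK = [
    (0.30, -5.0),  # 5 lots bid at 0.30
    (0.40, -3.0),  # 3 lots bid at 0.40
    (0.55, +2.0),  # 2 lots offered at 0.55
    (0.70, +4.0),  # 4 lots offered at 0.70
]

def phi(a, book=BOOK):
    """Cumulative size offered at or below a, minus cumulative size bid at or above a.
    Monotone increasing in a: negative below the best bid, positive above the best offer."""
    offered = sum(size for price, size in book if size > 0 and price <= a)
    bid = sum(size for price, size in book if size < 0 and price >= a)  # already negative
    return offered + bid

# phi crosses zero between the best bid (0.40) and the best offer (0.55):
assert phi(0.30) < 0 and phi(0.50) == 0 and phi(0.55) > 0
```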

Given \(\phi\), we want to know the profits or losses (PnL) of trading towards a fair value of \(p\).

Let \(a_0\) be either the point where \(\phi\) crosses \(0\), the first point after \(\phi\) jumps across \(0\), or the last point before \(\phi\) jumps across \(0\). This means that \(a_0\) is the best bid, the best offer, or somewhere between them -- and if we believe \(a_0\) is fair, we have no trades to do.

Define \(\Phi(a):=\int_{a_0}^a\phi(y)dy\) to be the integral of \(\phi\). We can interpret \(\Phi(a)\) as the marked-to-market profits of trading full size up/down to a price of \(a\).

Now say you start with position \(\beta\), have fair value \(p\), and do all available trades good to your fair. At the point you stop trading, your marked-to-market profits from those trades are \(\Phi(p)\) and your position is \(\beta+\phi(p)\). The contract then settles to \(x\), so your further PnL at resolution is \(\beta x\) (the initial position simply settles) plus \(\phi(p)\cdot(x-p)\) (your fills move from being marked at \(p\) to settling at \(x\)). Your total PnL is \(\beta x+\Phi(p)+\phi(p)\cdot(x-p)\).
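Continuing the sketch above (still illustrative numbers; it reuses BOOK and phi), we can check this bookkeeping: trade every level that's good against a fair value \(p\), settle to \(x\), and compare against the closed form \(\beta x+\Phi(p)+\phi(p)\cdot(x-p)\). For a step-function book, \(\Phi(p)\) reduces to a sum of size times \((p-\text{price})\) over the traded levels.

```python
# (reuses BOOK and phi from the sketch above)

def trade_to_fair(p, beta, x, book=BOOK):
    """Buy every offer at or below fair p, sell into every bid at or above fair p,
    then settle the whole position (initial beta plus fills) to x. Returns total PnL."""
    pnl = beta * x  # the initial position beta simply settles to x
    for price, size in book:
        good_offer = size > 0 and price <= p   # offer at or below fair: buy it
        good_bid = size < 0 and price >= p     # bid at or above fair: sell into it
        if good_offer or good_bid:
            pnl += size * (x - price)          # filled at `price`, settles to x
    return pnl

def closed_form(p, beta, x, book=BOOK):
    """beta*x + Phi(p) + phi(p)*(x - p), with Phi(p) = sum of size*(p - price)
    over the traded levels (the step-function version of the integral)."""
    Phi = sum(size * (p - price) for price, size in book
              if (size > 0 and price <= p) or (size < 0 and price >= p))
    return beta * x + Phi + phi(p, book) * (x - p)

for p in (0.2, 0.45, 0.8):
    for x in (0.0, 1.0):
        assert abs(trade_to_fair(p, 1.0, x) - closed_form(p, 1.0, x)) < 1e-12
```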

A standard result (also from Savage 1971) is that a scoring rule \(S(p,x)\) is (strictly) proper iff there exist constants \(\alpha,\beta\in\mathbb R\) and a (strictly) convex function \(G:[0,1]\to\mathbb R\) such that \(S(p,x)=\alpha+\beta x+G(p)+dG(p)\cdot(x-p)\), where \(dG\) is a subgradient of \(G\). So for any (strictly) proper scoring rule we can take \(\phi:=dG\), which is (strictly) increasing, and get \(\Phi=G\) up to a constant that is absorbed into \(\alpha\). \(\Box\)
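A quick sanity check of the correspondence (my example, not Savage's): the Brier score \(S(p,x)=-(x-p)^2\) has \(G(p)=p^2-p\) and \(\phi(a)=2a-1\), i.e. an order book with constant depth across prices and a mid of \(1/2\). Trading into that book with \(\beta=0\) should reproduce the Brier score up to the constant \(\alpha\):

```python
import numpy as np

# Brier score: S(p, x) = -(x - p)^2, with convex potential G(p) = p^2 - p,
# so phi(a) = 2a - 1: constant depth across prices, crossing zero at a_0 = 0.5.
phi = lambda a: 2 * a - 1
Phi = lambda p: (p - 0.5) ** 2                       # integral of phi from 0.5 to p
book_pnl = lambda p, x: Phi(p) + phi(p) * (x - p)    # beta = 0 here
brier = lambda p, x: -(x - p) ** 2

ps = np.linspace(0, 1, 101)
for x in (0.0, 1.0):
    # order-book PnL and Brier score differ by a constant, i.e. alpha = -0.25 in Prop. 1
    assert np.allclose(book_pnl(ps, x) - brier(ps, x), 0.25)
```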


(2)

Definition 2. An agent with utility \(u:\mathbb R^n\to\mathbb R\) has delta to a direction \(v\in\mathbb R^n\) of \(\frac{du}{dv}\), the directional derivative of \(u\) along \(v\).

The following proposition is paraphrased from Shi, Conitzer, and Guo 2009 (pdf). In their setting, agents not only make a prediction, but also have opportunities to affect the outcome of the event. (The most succinct motivation for this problem is "How can we elicit predictions about terrorist attacks without incentivizing terrorist attacks?")

Proposition 3. A scoring rule is aligned with a given vector \(v\) if and only if, for every possible fair value \(a\in[0,1]\), an agent holding the post-trading position \(\beta+\phi(a)\) has a positive delta to \(v\) (defining \(\phi\) and \(\beta\) as above).

In a multi-agent setting, all agents must have positive delta to \(v\) at their post-trading positions for all possible fair values. The natural way to accomplish this is to grant each participating agent an initial \(\beta\) that's sufficiently aligned to cover the positions from any possible \(\phi(a)\).
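To make "sufficiently aligned" concrete (my reading, assuming that positive delta to \(v\) here amounts to holding a nonnegative position in the contract that pays off in the \(v\)-good outcome): with the Brier-score book \(\phi(a)=2a-1\) from the sketch above, post-trading positions range over \([-1,1]\), so an initial grant of \(\beta\ge 1\) keeps \(\beta+\phi(a)\) nonnegative at every possible fair value (\(\beta>1\) for strictly positive):

```python
import numpy as np

# With phi(a) = 2a - 1, the worst case over fair values a in [0, 1] is
# beta + phi(0) = beta - 1, so beta >= 1 keeps every reachable position nonnegative.
phi = lambda a: 2 * a - 1
a = np.linspace(0, 1, 101)
for beta in (0.0, 0.5, 1.0, 2.0):
    never_short = bool(np.all(beta + phi(a) >= 0))
    print(f"beta = {beta}: never short across fair values? {never_short}")
# beta = 0.0, 0.5 -> False (an agent selling down to a low fair value ends up short)
# beta = 1.0, 2.0 -> True
```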

In a multi-agent setting with multiple rounds of trading (and no further constraints on the information structure), even this becomes insufficient, and the designer will need to impose position limits that stop agents from picking up a net negative delta to \(v\). Such position limits are, however, sufficient to achieve alignment.


(3)

Some open questions (or at least, questions where I'm not aware of any answers in the literature):

  • If agents can choose to learn additional information about event probabilities but face costs or tradeoffs to doing so, what scoring rules incentivize the agents to make choices aligned with the principal's utility function over better predictions?
  • If agents have risk aversion over payments, how should scoring rules be adjusted? What if the principal/designer doesn't know the risk aversion of the agents?

My thoughts so far:

  • Investment in information by a risk-neutral agent runs into a fundamental principal--agent issue, and the only fully-aligned solution is to grant the agent exactly the principal's profits (up to some \(\alpha,\beta\)). This is relatively boring.
  • If we just care about a risk-neutral agent making good choices between equally-costly options, then it's sufficient to grant the agent an incentive proportional to the principal's profits (up to some \(\alpha,\beta\)), but we can't do anything else.
  • What can we do if the incentive is also a function of certain verifiable costs? (The principal can subsidize costs to the degree that they're collecting profits, right? Combining with the above, we can get an incentive with the right absolute level of verifiable investment, and the right tradeoffs between non-verified investments.)
  • What can we do with a risk-averse agent / risk-neutral principal? (If the agent's risk aversion is known, then we can just invert it to get an incentive function that operates properly in utility terms. If it's not...then what? Can we at least get bounds on predictions if we know bounds on the risk aversion?)