# monotonic solution concepts

Anthony • Dec 17, 2021

A subclass of solution concepts arising in the theory of coevolution. There are technical definitions, and I'll add my own below, but the gist is that with a monotonic solution concept, as information increases, one never "flip flops" about a possible solution. If one believes $\alpha$ to be a solution at one state of information and then, in a larger state of information, believes $\alpha$ is not a solution anymore, then there is no larger context in which one will once again believe $\alpha$ is a solution. Once rejected, a possible solution is rejected forever. Since coevolutionary algorithms are prone to cycling, among other pathological dynamics, having rejection be final is a nice property to have.
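
The "once rejected, rejected forever" property can be sketched as a predicate on a chain of belief states. This is just an illustration of the definition above; the function name and encoding are mine, not from any paper:

```python
def monotonic_along_chain(accepted):
    """`accepted[i]` says whether a candidate is believed to be a solution
    at the i-th state of an information-increasing chain. Monotonic means:
    once a candidate has been accepted and then rejected, it is never
    accepted again -- the pattern True, ..., False, ..., True never occurs."""
    rejected_after_acceptance = False
    was_accepted = False
    for a in accepted:
        if a and rejected_after_acceptance:
            return False  # accepted again after a rejection: a flip flop
        if was_accepted and not a:
            rejected_after_acceptance = True
        was_accepted = was_accepted or a
    return True

print(monotonic_along_chain([True, True, False, False]))  # True: no flip flop
print(monotonic_along_chain([True, False, True]))         # False: a flip flop
```

Note that starting out rejected and later becoming accepted is allowed; only re-acceptance after a rejection violates the property.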

The best worst-case solution concept does not have this quality if the set of metrics over which the worst case is taken can increase. The simple reason is that if I have two possible solutions $\alpha$ and $\beta$, which one appears better can flip flop as new metrics arrive and lower the perceived best worst case of both, as in:

$$\begin{array}{c|ccc} & t_1 & t_2 & t_3 \\ \hline \alpha & 5 & 3 & 3 \\ \beta & 4 & 4 & 2 \end{array}$$

In the context of $\{t_1\}$, $\alpha$ has a best worst case of 5 while $\beta$ has 4, and $\alpha$ looks better. In the context of $\{t_1,t_2\}$, though, $\alpha$'s value drops to 3 while $\beta$'s remains at 4, and $\beta$ looks better. However, in the context of $\{t_1,t_2,t_3\}$, $\alpha$ looks better again because its value stays at 3 but $\beta$'s drops to 2. Thus, in a situation where we were only increasing information (adding $t_i$), we changed our minds about $\alpha$ twice, the hallmark of a non-monotonic solution concept. Flip flops like this can make the dynamics of coevolutionary algorithms difficult to understand and manage.
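
Here's a minimal sketch of that flip flop in code, using the fitness values from the table (the function and variable names are mine, for illustration only):

```python
# Fitness of each candidate against each metric, from the table:
# rows are alpha and beta, columns are t1, t2, t3.
fitness = {
    "alpha": [5, 3, 3],
    "beta":  [4, 4, 2],
}

def best_worst_case_winner(context_size):
    """Return the candidate with the best (largest) worst case over the
    first `context_size` metrics."""
    worst = {s: min(vals[:context_size]) for s, vals in fitness.items()}
    return max(worst, key=worst.get)

# As metrics t1, t2, t3 accumulate, the preferred solution flip flops:
print([best_worst_case_winner(k) for k in (1, 2, 3)])
# ['alpha', 'beta', 'alpha']
```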

Earlier work directly considered best worst-case and developed techniques for working with these non-monotonic solution concepts. One dangling thread from that work is that the techniques only make sense if there is a finite number of possible fitness values. Looking at the example above, it's clear that if there were only finitely many fitness values $\alpha$ and $\beta$ could take, then the flip flopping would eventually stop. There'd be theorems bounding how many times flip flopping could occur along an information-increasing path, and so there might be some extended notion of monotonicity that would apply. Travis Service started in on these ideas with his notion of biased solution concepts.

I did some work arguing that monotonic solution concepts can/should be thought of as convex, for a certain interpretation of that word, and that they relate to something called the lower order on powersets of partial orders. Here's my thinking, beginning with the lower order. Let $\partial$ be the solution concept, and write $s\in\partial C_U$ to mean that $s$ is a solution in the set of configurations buildable from $U$, $C_U$, where $U\subset S$ and $C$ is some configuration-building functor. Sevan Ficici introduced a notion he called weak preference, which you can define like so: solution $\beta$ is weakly preferred to solution $\alpha$ if, for every context in which $\alpha$ is a solution, there is always a larger context in which $\beta$ is a solution. Using the notation just introduced: $\alpha\in\partial C_U$ implies there is a $V$ with $U\subset V$ such that $\beta\in\partial C_V$. For any solution $\alpha$, define

$\mathscr{U}_{\alpha} = \{C_U\ |\ \alpha\in\partial C_U\}$

In other words, $\mathscr{U}_{\alpha}$ is the set of all configuration sets that have $\alpha$ as a solution.

We have that $\beta$ is weakly preferred to $\alpha$ exactly when $\mathscr{U}_{\alpha}\leq^{\flat}\mathscr{U}_{\beta}$, where $\leq^{\flat}$ is the lower order on sets of sets coming from $\subset$. The lower order means that for any set in the smaller collection, there exists a superset of it in the larger collection.
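
A small sketch of the lower order and weak preference, on made-up data (the sets $\mathscr{U}_{\alpha}$ and $\mathscr{U}_{\beta}$ below are invented purely to illustrate the definition; contexts are encoded as frozensets of metric labels):

```python
def lower_order(A, B):
    """Lower order on sets of sets induced by subset inclusion:
    A <=_flat B iff every set in A has a superset in B."""
    return all(any(a <= b for b in B) for a in A)

# U_s plays the role of the set of contexts in which s is a solution.
U_alpha = {frozenset({1}), frozenset({1, 2})}
U_beta = {frozenset({1, 2}), frozenset({1, 2, 3})}

# beta is weakly preferred to alpha: every context where alpha is a
# solution extends to a context where beta is. The reverse fails.
print(lower_order(U_alpha, U_beta))  # True
print(lower_order(U_beta, U_alpha))  # False
```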

Now about convexity and monotonicity. One way to state that $\partial$ is non-monotonic is that there is a sequence $C_U\subset C_V\subset C_W$ and a solution $\alpha$ such that $\alpha\in\partial C_U$, $\alpha\not\in\partial C_V$, and $\alpha\in\partial C_W$. Thinking in terms of the map $\alpha\mapsto\mathscr{U}_{\alpha}$, $C_U$ and $C_W$ are in $\mathscr{U}_{\alpha}$ while $C_V$ is not. A convex subset of an ordered set is one in which this does not happen: if $a$ and $c$ are in the set, then so are all $b$ that lie between $a$ and $c$ in the order. So a non-monotonic solution concept has at least one $\alpha$ for which $\mathscr{U}_{\alpha}$ is not convex, and conversely, if all $\mathscr{U}_{\alpha}$ are convex then the solution concept is monotonic.
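
To make the convexity angle concrete, here's a sketch that checks convexity of $\mathscr{U}_{\alpha}$ restricted to one increasing chain of contexts; along a chain, a failure of convexity is exactly an in, out, in-again pattern. The encoding (contexts as frozensets) and example data are mine:

```python
def convex_along_chain(chain_contexts, U_alpha):
    """Check convexity of U_alpha restricted to an increasing chain of
    contexts: whenever two members of the chain are in U_alpha, every
    context between them in the chain must be too."""
    membership = [C in U_alpha for C in chain_contexts]
    accepted = [i for i, m in enumerate(membership) if m]
    if not accepted:
        return True
    # Everything between the first and last accepted positions must be
    # accepted as well.
    return all(membership[i] for i in range(accepted[0], accepted[-1] + 1))

chain_contexts = [frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]
# alpha is a solution in the smallest and largest contexts but not the
# middle one -- the same non-monotonic pattern as the table above.
U_alpha = {frozenset({1}), frozenset({1, 2, 3})}
print(convex_along_chain(chain_contexts, U_alpha))  # False
```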

I like this formulation in part because it allows us to talk about local monotonicity. Solution concepts can be monotonic in some places and non-monotonic in others. It seems likely that some amount of non-monotonicity does not affect algorithm performance appreciably.

I published this work in a paper, *Thoughts On Solution Concepts*, in 2007. There have been a few followups of note. I especially like the papers by Travis Service and by Achim Jung and Jon Rowe. Service introduces a notion he calls biased solution concepts. Jung and Rowe develop a suggestion I'd made that these ideas could be fruitfully conceived of in domain-theoretic terms.