There is a lot to say about these, but primarily I am coming from the perspective of coevolutionary algorithms theory, where the notion is meant to be "arg max for interactive domains". Below are some details.

First, a punchline: coevolutionary algorithms should consist of the specification of an interactive domain, a solution concept, and some form of evolutionary dynamics over the interactive domain that aims to find solutions as specified by the solution concept.
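The three-part decomposition above can be sketched structurally. This is a minimal sketch under my own naming (the text does not give code, and `CoevolutionaryAlgorithm`, `run`, and the integer populations are illustrative assumptions):

```python
# A minimal structural sketch (my framing, not the author's) of the three
# components named above: an interactive domain's metric, a solution
# concept, and evolutionary dynamics over the domain.

from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple

Pop = FrozenSet[int]  # a population, here just a finite set of integers

@dataclass
class CoevolutionaryAlgorithm:
    # metric of a simplest-possible interactive domain, p : S x T -> R
    metric: Callable[[int, int], int]
    # solution concept: maps the current populations to a set of solutions
    solution_concept: Callable[[Pop, Pop], Pop]
    # one step of evolutionary dynamics over the domain (left abstract)
    step: Callable[[Pop, Pop], Tuple[Pop, Pop]]

    def run(self, S: Pop, T: Pop, steps: int) -> Pop:
        for _ in range(steps):
            S, T = self.step(S, T)
        return self.solution_concept(S, T)
```

The point of the sketch is only that the three pieces are separate, swappable inputs; each is discussed in turn below.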

When optimizing functions, there are a number of possibilities for specifying exactly what you're trying to accomplish. If you have some function $f\colon S\rightarrow\mathbb{R}$, "maximizing" it could mean:

- Finding the value $r\in\mathbb{R}$ that $f$ actually attains on some (unspecified) input, where $r$ is greater than or equal to every other value $f$ takes
- Finding an element $s\in S$ such that $f(s) = r$
- Finding all elements of $S$ to which $f$ gives the value $r$

The last, or possibly the second to last, are sometimes called arg max. All this puts aside things like suprema, or the possibility that $f$ never takes a maximum value.
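The three readings can be made concrete for a finite domain. This is an illustrative sketch with a toy objective of my own choosing:

```python
# Three readings of "maximize" for a finite S (toy example, not from
# the text): the maximum value, one arg max, and the full arg max set.

S = [-2, -1, 0, 1, 2]
f = lambda s: -(s * s) + 1   # toy objective; values are -3, 0, 1, 0, -3

# 1. The value r that f attains which is >= every other value it takes
r = max(f(s) for s in S)

# 2. Some element s with f(s) = r (one "arg max")
some_argmax = next(s for s in S if f(s) == r)

# 3. All elements attaining r (the full arg max set)
all_argmax = [s for s in S if f(s) == r]

print(r, some_argmax, all_argmax)  # 1 0 [0]
```

For a finite domain the three always exist and agree in the obvious way; the suprema caveat above only bites for infinite domains.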

When it comes to a two-input function like $p\colon S\times T\rightarrow R$ (a simplest-possible interactive domain), a new layer of complexity is added:

- $R$ may have incomparable elements
- It is not specified whether we are seeking an $s\in S$, a $t\in T$, a pair $(s,t)\in S\times T$, or something else such as an ensemble

What we do with both these points needs to be clarified before we can begin to talk about optimizing. The interactive domain definition specifies who gets the value from a function like $p$ (second point), while the solution concept specifies what solutions are selected given all this information.
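The first point is worth making concrete. Here is a toy sketch (my own example, not from the text) of an outcome space $R$ whose elements are pairs ordered componentwise, so that some outcomes are incomparable:

```python
# Toy outcome space R = pairs of numbers, ordered componentwise.
# Under this order, some outcomes of a metric p(s, t) are incomparable:
# neither sits higher up R's order than the other.

def leq(a, b):
    """Componentwise order on pairs: a <= b iff both coordinates are <=."""
    return a[0] <= b[0] and a[1] <= b[1]

def comparable(a, b):
    return leq(a, b) or leq(b, a)

# Two hypothetical outcomes of p(s, t):
u = (3, 1)
v = (1, 3)

print(comparable(u, v))       # False: u and v are incomparable
print(comparable(u, (4, 2)))  # True: u <= (4, 2)
```

With incomparable outcomes, "higher is better" no longer picks a winner between $u$ and $v$; that is exactly the gap a solution concept has to close.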

As with max or arg max, solution concepts are in a sense polymorphic. Rather than try to write down what they are in full generality using symbols, I'll just say that a solution concept should be such that it specifies a well-defined collection of solutions for any view/state of an interactive domain.

In the case of a single function $p\colon S\times T\rightarrow R$, interpreted as giving values to elements of $S$ such that higher up $R$'s order is better, a solution concept might take a form like $\partial\colon\mathscr{P}(S)\times \mathscr{P}(T)\rightarrow\mathscr{P}(S)$. In algorithms that seek Nash equilibria, the output might be $\mathscr{P}(\Lambda^S)$ (sets of mixtures of elements of $S$) instead. In algorithms that seek Pareto non-dominated fronts, the output might be $\mathscr{P}(\mathscr{P}(S))$ (there are technical reasons why it does not suffice to output a single Pareto front). So a slight generalization is to say that the solution concept maps the "raw material" from the interactive domain or search algorithm to a collection of configurations; the solution concept, or some other device, will then have to specify what these configurations are.
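One concrete instance of the $\partial\colon\mathscr{P}(S)\times \mathscr{P}(T)\rightarrow\mathscr{P}(S)$ form can be sketched. I use "best worst case" (maximin) purely as a stand-in; the text does not commit to this particular concept:

```python
# A sketch of one solution concept of the form P(S) x P(T) -> P(S):
# maximin, i.e. the s in S whose worst-case score over T is maximal.
# This choice of concept is my illustration, not the author's.

def maximin_solutions(S, T, p):
    """Return the subset of S whose worst-case value of p over T is maximal."""
    worst = {s: min(p(s, t) for t in T) for s in S}
    best = max(worst.values())
    return {s for s in S if worst[s] == best}

# Toy interactive domain: outcomes in R = integers, higher is better for s.
S = {0, 1, 2}
T = {0, 1}
p = lambda s, t: s - 2 * t

print(maximin_solutions(S, T, p))  # {2}
```

Swapping in a different body for `maximin_solutions` (Nash mixtures, Pareto fronts, ...) changes the output type exactly as described above, which is why the concept is best treated as a pluggable component.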

Sevan Ficici's PhD dissertation develops a notion of monotonic solution concept and shows these have nice ratcheting properties. I've done some work on coevolutionary free lunches that develops techniques for (some) non-monotonic solution concepts.