# Consensus Experiments

See Consensus Dynamics Notes for background.

## Model

Here is the present definition of the model. I hope to keep this up to date.

Some number $N_{i}$ of individuals are faced with a common situation, and they want to find a proposal that satisfies everyone. Proposals are modeled as strings of $N_{b}$ bits. Each individual has a different way of valuing proposals. This is modeled by associating with each individual a function assigning numerical utility to each possible proposal, together with a "watermark" value such that proposals whose utility is no less than the watermark are considered acceptable.

The primary research program is to use this framework, or others like it, to investigate what processes work well for seeking an outcome that all individuals consider acceptable — that is, for seeking consensus.

### Fitness Landscapes

A fitness landscape is a function assigning a numerical value (also called fitness, because of the historical uses of this modeling strategy in evolutionary biology) to each proposal. Proposals are instantiated by the BitString class in the code, and fitness landscapes by the FitnessLandscape class. Every BitString has neighbors, which are all the bit strings that differ from it in exactly one position. We traverse the landscape by changing one bit at a time while searching for a good proposal, that is, by moving from neighbor to neighbor. A local fitness peak, or local maximum, is a proposal whose value is greater than any of its neighbors' values. A global peak is a point that is valued higher than anything else in the landscape, but we aren't really concerned with that, because we're interested in problem spaces that are too big to be explored exhaustively.

Each individual (represented by the Individual class) has a fitness landscape that it uses to evaluate proposals. The research question is how to reconcile different people's fitness valuations, finding a compromise proposal that is acceptable to all.

#### Intra-landscape correlation

An uncorrelated landscape simply assigns independent random values to all bit strings. In this case, there are a large number of local maxima - locations on the landscape whose fitness exceeds all their neighbors' fitnesses - and starting from a random location it is typically a short walk to a local maximum (Kauffman and Levin 1987).

We model both uncorrelated and correlated landscapes. An uncorrelated landscape is implemented using SHA-1, a cryptographic hash function. These functions have a number of useful properties, but what matters for our purpose is that for any given pair of bit strings, the hash function produces outputs that are uncorrelated. If this were not true, the correlation could be used to find preimages of hash values; since the infeasibility of finding preimages is a key criterion for hash functions, we are effectively guaranteed uncorrelated outputs. We use this function as follows. A particular instance of a fitness landscape is specified by a seed value, which is a fixed string of bits - we use a random number generated when the simulation starts. For a given proposal (bit string) we join its bits to the seed value's bits end to end and calculate the hash of the resulting bit string. This hash value is a long string of bits, which we interpret as a series of unsigned integer values between $0$ and $2^{32}-1$; we add them all together modulo $2^{32}$ and divide the result by $2^{32}$ to obtain a real number in $[0,1)$.
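As a concrete sketch of this hashing scheme in Python (the function name and details here are illustrative, not the project's actual code):

```python
import hashlib
import struct

def hash_fitness(seed: bytes, proposal_bits: str) -> float:
    """Uncorrelated fitness via SHA-1: hash the seed joined to the proposal's
    bits, fold the digest into one value, and scale into [0, 1)."""
    digest = hashlib.sha1(seed + proposal_bits.encode()).digest()
    # The 20-byte digest is read as five unsigned 32-bit integers,
    # summed modulo 2**32, then divided by 2**32.
    words = struct.unpack(">5I", digest)
    return (sum(words) % 2**32) / 2**32
```

Because the hash is deterministic, the same seed and proposal always yield the same fitness, while different seeds give unrelated landscapes.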

A correlated landscape is implemented by dividing each proposal's bits into a number of blocks, applying the uncorrelated-fitness function above to each block, and averaging the blocks' fitnesses (Perelson and Macken 1995). The fitness function for each block is differentiated from the others by making a composite seed consisting of the overall landscape's seed joined to the block number. In the resulting fitness landscape, the fitnesses of neighboring bit strings (those that differ by only a small number of bits) are more likely to be similar than the fitnesses of unrelated bit strings. This landscape typically has fewer fitness peaks, and longer distances from a random starting point to a local peak.
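A sketch of the block-averaging construction, again illustrative rather than the project's code (here the composite seed is formed by appending the block number's decimal digits to the landscape seed):

```python
import hashlib
import struct

def hash_fitness(seed: bytes, bits: str) -> float:
    """Uncorrelated block fitness, as in the SHA-1 scheme above."""
    digest = hashlib.sha1(seed + bits.encode()).digest()
    return (sum(struct.unpack(">5I", digest)) % 2**32) / 2**32

def block_fitness(seed: bytes, proposal_bits: str, n_blocks: int) -> float:
    """Correlated fitness: score each block with its own composite seed
    (landscape seed joined to the block number) and average the results."""
    size = len(proposal_bits) // n_blocks
    total = 0.0
    for b in range(n_blocks):
        block = proposal_bits[b * size:(b + 1) * size]
        total += hash_fitness(seed + str(b).encode(), block)
    return total / n_blocks
```

Flipping a single bit changes only one block's term, so a neighbor's fitness differs by less than 1/n_blocks - this is where the correlation comes from.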

#### Inter-landscape correlation

It is also important to consider whether one individual's value function has anything in common with another's. We consider both uncorrelated and correlated individuals. In the uncorrelated case, we associate each individual with a fitness landscape as described above, either with or without correlations among the various values it assigns, and use a different seed for each individual.

In the correlated case, an individual's fitness function is a weighted sum of their private fitness function with the shared one:

$f_{i}(x)=wf_{s}(x)+(1-w)f_{p(i)}(x)$,

where $x$ is a proposal, $f_{i}$ is individual $i$'s fitness function, $f_{s}$ is the shared fitness function, $w$ is the weight given to the shared fitness function, and $f_{p(i)}$ is individual $i$'s private fitness function. The private fitness functions and shared fitness function are all independent functions (with correlated values or not) as described above, each having a unique seed value.

The relation between correlation and weight is as follows, where $C$ stands for Correlation:

$C(f_{i}(x),f_{j}(x))=E[(f_{i}(x)-\mu _{i})(f_{j}(x)-\mu _{j})]/\sigma _{i}\sigma _{j}$

where $\mu$ is mean and $\sigma$ is standard deviation;

$C(f_{i}(x),f_{j}(x))=E[(w(f_{s}(x)-\mu_{s})+(1-w)(f_{p(i)}(x)-\mu_{p(i)}))(w(f_{s}(x)-\mu_{s})+(1-w)(f_{p(j)}(x)-\mu_{p(j)}))]/\sigma_{i}\sigma_{j}$
$=w^{2}\sigma_{s}^{2}/\sigma_{i}^{2}$

under the assumption that $f_{{p(i)}}$ and $f_{{p(j)}}$ are independent and identically distributed. Also,

$\sigma_{i}^{2}=E[(f_{i}(x)-\mu_{i})^{2}]=E[(w(f_{s}(x)-\mu_{s})+(1-w)(f_{p(i)}(x)-\mu_{p(i)}))^{2}]$
$=w^{2}\sigma_{s}^{2}+(1-w)^{2}\sigma_{p(i)}^{2}$,

and assuming that $f_{s}$ is also i.i.d. with the private functions,

$\sigma _{i}^{2}=(w^{2}+(1-w)^{2})\sigma _{s}^{2}$

and consequently

$C(f_{i}(x),f_{j}(x))=\frac{w^{2}}{w^{2}+(1-w)^{2}}$.
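This relation can be checked numerically. The sketch below is illustrative code, drawing the shared and private values i.i.d. uniform on $[0,1)$ rather than from the hash construction (the correlation formula doesn't depend on the distribution, only on the independence assumptions); it estimates the correlation by Monte Carlo and compares it to $w^{2}/(w^{2}+(1-w)^{2})$:

```python
import random

def estimate_correlation(w: float, n_samples: int = 100_000, seed: int = 0) -> float:
    """Monte Carlo estimate of Corr(f_i, f_j) for f_i = w*f_s + (1-w)*f_p(i),
    with shared and private values drawn i.i.d. uniform on [0, 1)."""
    rng = random.Random(seed)
    fi, fj = [], []
    for _ in range(n_samples):
        s = rng.random()                            # shared value f_s(x)
        fi.append(w * s + (1 - w) * rng.random())   # individual i
        fj.append(w * s + (1 - w) * rng.random())   # individual j
    mi = sum(fi) / n_samples
    mj = sum(fj) / n_samples
    cov = sum((a - mi) * (b - mj) for a, b in zip(fi, fj)) / n_samples
    vi = sum((a - mi) ** 2 for a in fi) / n_samples
    vj = sum((b - mj) ** 2 for b in fj) / n_samples
    return cov / (vi * vj) ** 0.5

def predicted_correlation(w: float) -> float:
    """The derived relation C = w^2 / (w^2 + (1-w)^2)."""
    return w ** 2 / (w ** 2 + (1 - w) ** 2)
```

For example, with $w=0.5$ the prediction is $0.25/0.5=0.5$, and the empirical estimate lands close to it.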

##### Multivariate-normal version

It has been suggested by Dushoff that I could also use a multivariate normal distribution to get correlated fitnesses without the assumption of an underlying shared function. I like this suggestion esthetically, but haven't implemented it...

First of all, it's okay for them to be Gaussian distributed, because we really only care about the rank order of fitness values (do we need different people's values to be comparable?). If we want them to be in $[-1,1]$ or similar, we can apply an inverse-logit-style transform. So $(f_{1}(x),\ldots,f_{N_{i}}(x))$ is Gaussian with some correlation structure, and if there's a block structure we can safely require that the block structure is the same for all individuals, so

$(f_{1}(x),\ldots,f_{N_{i}}(x))=(f_{1/1}(x),\ldots,f_{N_{i}/1}(x))+\ldots+(f_{1/N_{B}}(x),\ldots,f_{N_{i}/N_{B}}(x))$

where $N_{B}$ is the number of blocks.

We can take each $F_{b}=(f_{1/b},\ldots,f_{N_{i}/b})$ to be an i.i.d. multivariate normal random variable with the desired correlation structure. A multivariate normal variable such as this is implemented by identifying its principal components, generating each of those independently, and vector-summing them together (or by a similar but more efficient means).
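As a sketch of one such "more efficient means": a Cholesky factor $L$ with $LL^{T}=\Sigma$ turns independent standard normals $z$ into correlated ones via $x=Lz$, with $\mathrm{Cov}(x)=\Sigma$. This plain-Python version is illustrative, not the project's code:

```python
import math
import random

def cholesky(cov):
    """Lower-triangular L with L @ L.T == cov (plain-Python Cholesky)."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def sample_mvn(cov, rng=random):
    """One draw from N(0, cov): x = L z, with z i.i.d. standard normal."""
    L = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in cov]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(cov))]
```

The principal-components route (eigendecomposition) gives the same distribution; Cholesky just factors the covariance matrix more cheaply.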

This would all be kind of weird to implement in the code, since we wouldn't simply ask an individual's FitnessLandscape for a single value, or rather we would, but in order to produce the answer it would need us to effectively compute all individuals' fitnesses for that BitString... but this is fine in principle, and since the underlying random variables can be made using SHA1, each fitness value can in principle be calculated without remembering anything from other calculations, though in practice we would cache values so they aren't computed redundantly.

In the special case that we want $C(f_{i/b},f_{j/b})=\alpha$ for all pairs of individuals, we would have a variance-covariance matrix of

$\begin{pmatrix}1&\alpha&\cdots&\alpha\\ \alpha&1&&\alpha\\ \vdots&&\ddots&\vdots\\ \alpha&\alpha&\cdots&1\end{pmatrix}$.

There should be a way to decompose this matrix to produce a simple form for the correlated variables in terms of $N_{i}$ independent variables. Being symmetric, the matrix is diagonalizable, and it is positive definite when $-1/(N_{i}-1)<\alpha<1$, so it also has a Cholesky decomposition...

... so I didn't work out how to decompose the matrix, but here's a straightforward solution: let $X_{1},\ldots,X_{n}$ be $n$ i.i.d. standard normal variates, and let $Y_{i}=\gamma X_{i}+\beta\sum_{j}X_{j}$. Then

$\mathbb{E}(Y_{i}Y_{j})=\mathbb{E}\left((\gamma X_{i}+\beta\sum_{k}X_{k})(\gamma X_{j}+\beta\sum_{\ell}X_{\ell})\right)$
$=\gamma^{2}\delta_{ij}+2\beta\gamma+\beta^{2}n$
$=\gamma^{2}+(2\gamma+n\beta)\beta$ when $i=j$, or $(2\gamma+n\beta)\beta$ when $i\neq j$.

So we choose $\beta$ and $\gamma$ so that those quantities are equal to 1 and $\alpha$, respectively.
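Solving those two conditions gives $\gamma=\sqrt{1-\alpha}$ (since the variance minus the covariance is $\gamma^{2}=1-\alpha$) and $\beta$ as the positive root of $n\beta^{2}+2\gamma\beta-\alpha=0$. A small illustrative sketch that computes the coefficients and checks the implied variance and covariance:

```python
import math

def equicorrelated_coeffs(n: int, alpha: float):
    """Solve for gamma and beta so that Y_i = gamma*X_i + beta*sum_j X_j
    has Var(Y_i) = 1 and Cov(Y_i, Y_j) = alpha for i != j."""
    gamma = math.sqrt(1.0 - alpha)            # gamma^2 = 1 - alpha
    # positive root of n*beta^2 + 2*gamma*beta - alpha = 0
    beta = (-gamma + math.sqrt(gamma * gamma + n * alpha)) / n
    return gamma, beta

def implied_moments(n: int, gamma: float, beta: float):
    """Variance and covariance implied by the construction:
    Var = gamma^2 + 2*beta*gamma + n*beta^2, Cov = 2*beta*gamma + n*beta^2."""
    off = 2 * beta * gamma + n * beta * beta
    return gamma * gamma + off, off
```

This checks out exactly: the off-diagonal condition is the quadratic we solved, and the diagonal then equals $\gamma^{2}+\alpha=1$.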

### JGA process

This is an old problem formulation, replaced by the more inclusive setup described below. This search process is called joint greedy ascent (JGA).

initial proposal is 0
repeat:
each individual does steepest-ascent search to a local peak on their landscape, offers
that as a next proposal.
each individual evaluates each next proposal, blocks it if they consider it worse than
the existing proposal
if any new proposals have no blocks, the first of them is accepted
until no more progress is made.


Note that this is a deterministic process, once the initial proposal and utility landscapes are chosen.
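The JGA loop can be sketched as follows, using the hash landscape described earlier; the names and details are illustrative, not the project's actual code:

```python
import hashlib
import struct

N_BITS = 10   # small enough to test quickly, large enough to have structure

def fitness(seed: bytes, bits: tuple) -> float:
    """Uncorrelated landscape via a seeded SHA-1 hash, as described earlier."""
    digest = hashlib.sha1(seed + bytes(bits)).digest()
    return (sum(struct.unpack(">5I", digest)) % 2**32) / 2**32

def neighbors(bits: tuple):
    """All proposals differing from `bits` in exactly one position."""
    for i in range(len(bits)):
        yield bits[:i] + (1 - bits[i],) + bits[i + 1:]

def steepest_ascent(seed: bytes, start: tuple) -> tuple:
    """Climb to a local peak by always taking the fittest improving neighbor."""
    p = start
    while True:
        q = max(neighbors(p), key=lambda n: fitness(seed, n))
        if fitness(seed, q) <= fitness(seed, p):
            return p
        p = q

def jga(seeds: list) -> tuple:
    """Joint greedy ascent: each individual offers its local peak; an offer is
    accepted only if nobody considers it worse than the current proposal.
    Deterministic once the seeds and the all-zeros start are fixed."""
    current = (0,) * N_BITS
    while True:
        offers = [steepest_ascent(s, current) for s in seeds]
        for offer in offers:
            if offer != current and all(
                    fitness(s, offer) >= fitness(s, current) for s in seeds):
                current = offer     # first unblocked offer is accepted
                break
        else:
            return current          # every offer blocked or unchanged: halt
```

Every accepted offer weakly raises everyone's fitness and strictly raises the proposer's, so the loop always terminates.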

### Composite process

I've now gone beyond the JGA to a little palette of processes with interchangeable parts. Currently there are three independent choices.

The initial proposal is always 0. This doesn't matter because the fitness landscapes are chosen randomly, so that 0 is essentially a random location on the landscape.

The facilitation strategy is either each proposes one:

each individual is asked for an alternative to the current proposal
for each proposed alternative, each individual is asked whether it's an
acceptable replacement for the current proposal.
if any alternatives have no objections (if nobody "blocks"), one of them
becomes the new current proposal.  (It should probably be chosen randomly,
but currently it's always the first one proposed, and people are always
asked in the same order, so people early in the list have preference.)


or one proposal at a time:

a randomly chosen individual is asked to make a proposal
everyone is asked whether it's an acceptable replacement
if nobody blocks, it becomes the new current proposal


But I think I'm going to retire "each proposes one", because except for the part about choosing the first one on the list, which I don't like, it's really the same as "one proposal at a time". So I'm going to make anyone proposes a synonym for "one proposal at a time", and introduce blockers propose:

everyone is asked whether the current proposal is acceptable to them
a randomly chosen individual is chosen from among those who object to
the current proposal, and asked to make a proposal
everyone is asked whether it's an acceptable replacement
if nobody blocks, it becomes the new current proposal


Note that a proposal may be an acceptable replacement for the status quo while not being acceptable in absolute terms. We ask everyone both questions here.

The proposal strategy, which is how individuals generate alternatives to a proposal, is either best:

take the fittest of all fitter neighboring points, then the fittest of that
point's neighbors, repeating until a local peak is reached.


or best neighbor:

return the fittest of all the point's nearest neighbors (if any are fitter)


or any improvement:

return any neighbor that is fitter than the point itself.
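
The three proposal strategies can be sketched on bit-tuples as follows (illustrative code, not the project's implementation):

```python
import random

def neighbors(p):
    """All bit-tuples differing from p in exactly one position."""
    return [p[:i] + (1 - p[i],) + p[i + 1:] for i in range(len(p))]

def best(fitness, p):
    """'best': repeatedly take the fittest improving neighbor, all the way
    to a local peak."""
    while True:
        q = max(neighbors(p), key=fitness)
        if fitness(q) <= fitness(p):
            return p
        p = q

def best_neighbor(fitness, p):
    """'best neighbor': one steepest step, or stay put at a local peak."""
    q = max(neighbors(p), key=fitness)
    return q if fitness(q) > fitness(p) else p

def any_improvement(fitness, p, rng=random):
    """'any improvement': a randomly chosen fitter neighbor, or stay put."""
    better = [q for q in neighbors(p) if fitness(q) > fitness(p)]
    return rng.choice(better) if better else p
```

With a toy fitness like `sum` (the count of 1-bits), `best(sum, (0, 0, 0, 0))` climbs all the way to `(1, 1, 1, 1)`, while the other two strategies each move a single bit per call.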


And the block strategy, which is how to decide whether something is an acceptable replacement for the current proposal, is if worse:

don't accept anything that you like less than the current proposal


or if worse and not acceptable:

block things that you like less than the current proposal, but only if they
don't meet your minimum criterion for acceptance.


So what I was calling JGA is now "each proposes one", "best", and "if worse".

### Source code

The simulation's source code is at github: git@github.com:worden-lee/consensus-simulation.git and git@github.com:worden-lee/adap-dyn.git.

And the local code for running simulations is at Consensus Code.

## Results

First, a very preliminary "experiment": a "consensus process" with one person is able to find a local fitness peak. Actually, this is an important baseline: I'll probably want to know how often it finds an acceptable solution, how good it is, etc.

### JGA

Second, the JGA, which is now superseded by the composite process (it's one of the cases).

A few test runs of JGA suggest that with completely uncorrelated landscapes (uncorrelated both internally and with each other), larger group sizes make no progress, because too many proposals are blocked. Higher-dimensional (uncorrelated) landscapes may support progress with somewhat larger groups than smaller landscapes do.

The JGA process is kind of unforgiving: one person's best offer may be crappy for others, while a nearby point that they passed through on the way might be much better received, but we don't give it a chance. We might have much better success if we put those intermediates on the table as well.

It also should probably pick at random when there are multiple acceptable next steps.

### Composite process

This is the current research. Results are at Consensus Project Results.