BARGAINING MODEL (TAG MODEL)



view/download model file: Bargaining_model_(tag_model).nlogo

WHAT IS IT?


This is a replication of the model developed by Robert Axtell, Joshua M. Epstein and H. Peyton Young (AEY's model hereafter), in which two players demand some portion of the same pie, and the portion of the pie that each player gets depends on the portion demanded by his opponent.



HOW IT WORKS


In AEY's model, two players demand some portion of a pie. Each player can demand one of three possible portions: low, medium or high. As long as the sum of the two demands is not more than 100 percent of the pie, each player gets what he demands; otherwise, both get nothing.
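As a point of reference, this payoff rule can be written as a short function. The following is a minimal Python sketch (illustrative only, not the NetLogo code; the function name is ours, and the demand values 30, 50 and 70 are the ones used in AEY's model):

# Minimal Python sketch of the payoff rule (illustrative, not the NetLogo code).
def payoff(my_demand, opponent_demand, pie=100):
    """Each player gets what he demands if the two demands fit in the pie;
    otherwise both get nothing."""
    if my_demand + opponent_demand <= pie:
        return my_demand
    return 0

print(payoff(30, 70))  # 30: the two demands sum to exactly 100, so both are met
print(payoff(70, 50))  # 0: the demands exceed the pie, so nobody gets anything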

There is a population of n agents that are randomly paired to play. Each agent has a memory in which he retains the decisions taken by his opponents in previous games. With probability 1-epsilon, the agent uses the information stored in his memory to demand the portion of the pie that maximizes his benefit; with probability epsilon, he demands a portion at random. Epsilon represents the noise level in the system.

What makes an agent choose among low, medium or high?

There are two possible decision rules in this model:

> FIRST DECISION RULE: DEMAND THE OPTION THAT MAXIMIZES THE EXPECTED BENEFIT

An agent will check his memory to find how often each option has been chosen by his opponents. Then, the agent considers that the probability that his current opponent chooses L (low), for example, is equal to the relative frequency of L in his memory. In the same way, he calculates how likely it is for the opponent to choose M and H. Once the agent has this information, he estimates his expected benefit for the three possible options as follows (see the sketch after the list):
- The average benefit I get if I choose L is L multiplied by the probability that my opponent chooses L, M or H (i.e. any option, since a low demand always fits in the pie).
- The average benefit I get if I choose M is M multiplied by the probability that my opponent chooses L or M.
- The average benefit I get if I choose H is H multiplied by the probability that my opponent chooses L (the only demand compatible with a high demand).
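As an illustration, here is a minimal Python sketch of this first decision rule (it is not the NetLogo code; the function and variable names are ours, and the demand values 30/50/70 are AEY's):

from collections import Counter

# Python sketch of the first decision rule (illustrative, not the NetLogo code).
LOW, MEDIUM, HIGH = 30, 50, 70

def expected_benefit_choice(memory):
    # 'memory' is the list of demands observed from previous opponents.
    counts = Counter(memory)
    p = {d: counts[d] / len(memory) for d in (LOW, MEDIUM, HIGH)}  # relative frequencies
    expected = {
        LOW: LOW * (p[LOW] + p[MEDIUM] + p[HIGH]),  # a low demand always fits in the pie
        MEDIUM: MEDIUM * (p[LOW] + p[MEDIUM]),      # a medium demand fails only against H
        HIGH: HIGH * p[LOW],                        # a high demand fits only against L
    }
    return max(expected, key=expected.get)

print(expected_benefit_choice([30, 30, 50, 70, 30]))  # 70: H pays 70 * 3/5 = 42, the best option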

> SECOND DECISION RULE: CHOOSE THE BEST REPLY AGAINST THE OPPONENTS' MOST FREQUENT DEMAND

In this case, the agents demand the pie portion that maximizes their benefit against the most likely option taken by their opponents in previous games (i.e. the mode of their memory). That is to say, an agent assumes that his opponent's demand will be the mode of the content of his memory.
An agent will choose H if L is the most frequent decision taken by his opponents in the previous matches; if the most repeated value in his memory is M, the player will choose M; and if previous matches show that H is the most frequent decision taken by his opponents, the agent will choose L.
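A minimal Python sketch of this second rule (again illustrative, with names of our own choosing) could look as follows:

from collections import Counter

# Python sketch of the second decision rule (illustrative, not the NetLogo code).
LOW, MEDIUM, HIGH = 30, 50, 70
BEST_REPLY = {LOW: HIGH, MEDIUM: MEDIUM, HIGH: LOW}  # best reply to each possible demand

def best_reply_to_mode(memory):
    mode = Counter(memory).most_common(1)[0][0]  # most frequent demand in memory
    return BEST_REPLY[mode]

print(best_reply_to_mode([30, 30, 70, 50]))  # 70: the mode is L, so the best reply is H

Note that this sketch breaks ties in the mode arbitrarily; it does not claim to reproduce how the original model handles ties.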

Notice that either of these ‘rational behaviours’ (first or second decision rule) takes place with probability 1-epsilon, whereas a random decision is taken with probability epsilon.
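In code, the noise can be sketched as a thin wrapper around whichever decision rule is active (again a Python illustration, not the NetLogo code):

import random

# Python sketch of how noise enters the decision (illustrative, not the NetLogo code).
OPTIONS = [30, 50, 70]

def noisy_demand(memory, rational_rule, epsilon):
    # With probability epsilon demand at random; otherwise apply the decision rule.
    if random.random() < epsilon:
        return random.choice(OPTIONS)
    return rational_rule(memory)

# Example call with a placeholder rule that always demands medium:
print(noisy_demand([30, 30, 70], lambda memory: 50, epsilon=0.1))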

The payoff matrix (i.e. the combination of rewards) can also be chosen by changing the value assigned to low in the interface screen. The values used for low, medium and high in AEY's model are, respectively, 30, 50 and 70.

Finally, you can choose three types of memory configuration:

• STANDARD MEMORY: An m-size memory (m is defined in the interface screen). All the agents are initialized with m random values in their memories.

• PROGRESSIVE MEMORY: The agents' memories are initialized with only one element (chosen at random). After the first iteration, each agent stores the decision taken by his opponent, increasing the memory size to two; after the second iteration, each agent stores the decision taken by his opponent, producing a memory of size three. This process continues over the iterations until the memory size reaches the value m. At this point, the agents start deleting their oldest memory element, so that the memory size remains constant and equal to m (a sketch of this mechanism appears after this list).

• ENDORSED MEMORY: In AEY’s model, all the elements in the agents’ memory have the same weight when a decision has to be taken. This means that the oldest values in the memory (i.e. the decisions taken by the agents’ opponents several matches ago) have the same importance as the recent ones. When the 'endorsed memory' option is selected, the memory structure is modified so that the recent decisions have a higher weight than the older ones.
The weight that we have assigned to each memory position follows an arithmetic progression: for instance, if the common difference of the arithmetic progression is set to 3, the oldest value in the memory will have a weight of 1, the second oldest value will have a weight of 4 (1+3), the third oldest value will have a weight of 7 (4+3), and so on. Supposing that the memory size is m, the most recent value in the memory will have a weight of 1 + (m - 1) · 3. A sketch of this weighting also appears after this list.
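The progressive memory can be sketched as follows (a Python illustration, not the NetLogo code; the function name is ours):

# Python sketch of the progressive memory (illustrative, not the NetLogo code).
# The memory grows by one element per match until it reaches size m;
# after that, the oldest element is dropped whenever a new one is stored.
def remember(memory, opponent_demand, m):
    memory.append(opponent_demand)
    if len(memory) > m:
        memory.pop(0)  # forget the oldest observation
    return memory

mem = [30]                       # initialized with a single random value
for observed in (50, 70, 30, 50):
    mem = remember(mem, observed, m=3)
print(mem)                       # [70, 30, 50]: only the 3 most recent demands remain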
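The endorsed weighting can be sketched like this (another Python illustration, not the NetLogo code); under this assumption, the weighted relative frequencies would play the role of the raw frequencies in the decision rules above:

# Python sketch of the endorsed-memory weighting (illustrative, not the NetLogo code).
# memory[0] is the oldest observation; d is the common difference of the progression.
def endorsed_frequencies(memory, d=3):
    weights = [1 + i * d for i in range(len(memory))]  # 1, 1 + d, 1 + 2d, ...
    total = sum(weights)
    freq = {}
    for demand, weight in zip(memory, weights):
        freq[demand] = freq.get(demand, 0) + weight / total
    return freq

# With d = 3 and a memory of size 4 the weights are 1, 4, 7 and 10;
# the most recent value gets weight 1 + (4 - 1) * 3 = 10, as described above.
print(endorsed_frequencies([30, 50, 70, 30]))  # {30: 0.5, 50: 0.18..., 70: 0.31...}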



HOW TO USE IT


• 1. Select the parameters to run the simulation. They are located below the simplex:
- n: number of agents.
- m: memory size.
- number of iterations.
- epsilon: noise level.

If you want the agents to show a label with their id, turn on the switch 'label-on-agents'.

If you want to stop the simulation when the system reaches an equitable equilibrium or a fractious state, turn on the switch 'stop-if-equilibrium'. Otherwise, the simulation keeps running until the specified number of iterations is reached (this is useful to observe how the system leaves the equitable equilibrium or the fractious state when there is noise in the system, i.e. epsilon > 0).

• 2. Select the payoff matrix by choosing a value for low (lowest demand). Notice that AEY's original model uses 30 as the lowest demand.

• 3. Choose a decision rule.

• 4. Select a memory configuration.

• 5. Click setup to initialize the system with the selected parameters, and then click go to run the simulation.





