**Minimax** (sometimes **MinMax**, **MM**^{[1]} or **saddle point**^{[2]}) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for *mini*mizing the possible loss for a worst case (*max*imum loss) scenario. When dealing with gains, it is referred to as "maximin"—to maximize the minimum gain. Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.

The **maximin value** is the highest value that the player can be sure to get without knowing the actions of the other players; equivalently, it is the lowest value the other players can force the player to receive when they know the player's action. Its formal definition is:^{[3]}

$$\underline{v_i} = \max_{a_i} \min_{a_{-i}} v_i(a_i, a_{-i})$$

Where:

- $i$ is the index of the player of interest.
- $-i$ denotes all other players except player $i$.
- $a_i$ is the action taken by player $i$.
- $a_{-i}$ denotes the actions taken by all other players.
- $v_i$ is the value function of player $i$.

Calculating the maximin value of a player is done in a worst-case approach: for each possible action of the player, we check all possible actions of the other players and determine the worst possible combination of actions—the one that gives player i the smallest value. Then, we determine which action player i can take in order to make sure that this smallest value is the highest possible.

For example, consider the following game for two players, where the first player ("row player") may choose any of three moves, labelled T, M, or B, and the second player ("column player") may choose either of two moves, L or R. The result of the combination of both moves is expressed in a payoff table:

|   | L | R |
|---|---|---|
| T | 3, 1 | 2, −20 |
| M | 5, 0 | −10, 1 |
| B | −100, 2 | 4, 4 |

(where the first number in each cell is the pay-out of the row player and the second number is the pay-out of the column player).

For the sake of example, we consider only pure strategies. Check each player in turn:

- The row player can play T, which guarantees them a payoff of at least 2 (playing B is risky since it can lead to payoff −100, and playing M can result in a payoff of −10). Hence: $\underline{v_{row}} = 2$.
- The column player can play L and secure a payoff of at least 0 (playing R puts them at risk of getting −20). Hence: $\underline{v_{col}} = 0$.

If both players play their respective maximin strategies $(T, L)$, the payoff vector is $(3, 1)$.
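The worst-case reasoning above can be reproduced mechanically. A minimal Python sketch (the dictionary layout and variable names are illustrative, not from the article) computes both pure-strategy maximin values from the payoff table:

```python
# Payoff table from the example: each cell is (row payoff, column payoff).
payoffs = {
    ("T", "L"): (3, 1),    ("T", "R"): (2, -20),
    ("M", "L"): (5, 0),    ("M", "R"): (-10, 1),
    ("B", "L"): (-100, 2), ("B", "R"): (4, 4),
}
rows, cols = ["T", "M", "B"], ["L", "R"]

# Row player's maximin: for each of their actions, take the worst payoff
# over the column player's actions, then pick the best of those worst cases.
row_maximin = max(min(payoffs[(r, c)][0] for c in cols) for r in rows)

# Column player's maximin: symmetric, using the second entry of each cell.
col_maximin = max(min(payoffs[(r, c)][1] for r in rows) for c in cols)

print(row_maximin, col_maximin)  # 2 0
```

The inner `min` is the worst case for a fixed action; the outer `max` picks the safest action, matching the text's values of 2 and 0.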

The **minimax value** of a player is the smallest value that the other players can force the player to receive, without knowing the player's actions; equivalently, it is the largest value the player can be sure to get when they *know* the actions of the other players. Its formal definition is:^{[3]}

$$\overline{v_i} = \min_{a_{-i}} \max_{a_i} v_i(a_i, a_{-i})$$

The definition is very similar to that of the maximin value, only the order of the maximum and minimum operators is inverted. In the above example:

- The row player can get a maximum value of 4 (if the other player plays R) or 5 (if the other player plays L), so: $\overline{v_{row}} = 4$.
- The column player can get a maximum value of 1 (if the other player plays T), 1 (if M) or 4 (if B). Hence: $\overline{v_{col}} = 1$.

For every player $i$, the maximin is at most the minimax:

$$\underline{v_i} \leq \overline{v_i}$$

Intuitively, in maximin the maximization comes before the minimization, so player i tries to maximize their value before knowing what the others will do; in minimax the maximization comes after the minimization, so player i is in a much better position—they maximize their value knowing what the others did.
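Swapping the order of the two operators gives the pure-strategy minimax values for the same example. A small Python sketch over the example's payoff table (names and layout are illustrative, not from the article):

```python
# Payoff table from the example: each cell is (row payoff, column payoff).
payoffs = {
    ("T", "L"): (3, 1),    ("T", "R"): (2, -20),
    ("M", "L"): (5, 0),    ("M", "R"): (-10, 1),
    ("B", "L"): (-100, 2), ("B", "R"): (4, 4),
}
rows, cols = ["T", "M", "B"], ["L", "R"]

# Minimax: the other player commits first, then player i best-responds;
# the other player forces the smallest of those best replies.
row_minimax = min(max(payoffs[(r, c)][0] for r in rows) for c in cols)
col_minimax = min(max(payoffs[(r, c)][1] for c in cols) for r in rows)

# Maximin never exceeds minimax for either player.
row_maximin = max(min(payoffs[(r, c)][0] for c in cols) for r in rows)
col_maximin = max(min(payoffs[(r, c)][1] for r in rows) for c in cols)
assert row_maximin <= row_minimax and col_maximin <= col_minimax

print(row_minimax, col_minimax)  # 4 1
```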

Another way to understand the *notation* is by reading from right to left: when we write

$$\overline{v_i} = \min_{a_{-i}} \max_{a_i} v_i(a_i, a_{-i}),$$

the initial set of outcomes $v_i(a_i, a_{-i})$ depends on both $a_i$ and $a_{-i}$. We first *marginalize away* $a_i$ from $v_i(a_i, a_{-i})$ by maximizing over $a_i$ (for every possible value of $a_{-i}$) to yield a set of marginal outcomes $v'_i(a_{-i})$ that depends only on $a_{-i}$. We then minimize over $a_{-i}$ over these outcomes. (Conversely for maximin.)

Although it is always the case that $\underline{v_{row}} \leq \overline{v_{row}}$ and $\underline{v_{col}} \leq \overline{v_{col}}$, the payoff vector resulting from both players playing their minimax strategies, $(2, -20)$ in the case of $(T, R)$ or $(-10, 1)$ in the case of $(M, R)$, cannot similarly be ranked against the payoff vector $(3, 1)$ resulting from both players playing their maximin strategy.

In two-player zero-sum games, the minimax solution is the same as the Nash equilibrium.

In the context of zero-sum games, the minimax theorem is equivalent to:^{[4]}

For every two-person, zero-sum game with finitely many strategies, there exists a value V and a mixed strategy for each player, such that

- (a) Given player 2's strategy, the best payoff possible for player 1 is V, and
- (b) Given player 1's strategy, the best payoff possible for player 2 is −V.

Equivalently, Player 1's strategy guarantees them a payoff of V regardless of Player 2's strategy, and similarly Player 2 can guarantee themselves a payoff of −V. The name minimax arises because each player minimizes the maximum payoff possible for the other—since the game is zero-sum, they also minimize their own maximum loss (i.e. maximize their minimum payoff). See also example of a game without a value.
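Stated in symbols, the theorem asserts that the two optimization orders coincide. A standard formulation (the notation here is conventional, not taken from the article): for an $m \times n$ payoff matrix $A$ to Player 1,

```latex
% Von Neumann's minimax theorem: x and y range over mixed strategies
% (the probability simplices Delta_m and Delta_n).
\max_{x \in \Delta_m} \min_{y \in \Delta_n} x^{\mathsf{T}} A y
  \;=\; \min_{y \in \Delta_n} \max_{x \in \Delta_m} x^{\mathsf{T}} A y
  \;=\; V
```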

|   | B chooses B1 | B chooses B2 | B chooses B3 |
|---|---|---|---|
| A chooses A1 | +3 | −2 | +2 |
| A chooses A2 | −1 | 0 | +4 |
| A chooses A3 | −4 | −3 | +1 |

The following example of a zero-sum game, where **A** and **B** make simultaneous moves, illustrates *minimax* solutions. Suppose each player has three choices and consider the payoff matrix for **A** displayed above. Assume the payoff matrix for **B** is the same matrix with the signs reversed (i.e. if the choices are A1 and B1 then **B** pays 3 to **A**). Then, the minimax choice for **A** is A2 since the worst possible result is then having to pay 1, while the simple minimax choice for **B** is B2 since the worst possible result is then no payment. However, this solution is not stable, since if **B** believes **A** will choose A2 then **B** will choose B1 to gain 1; then if **A** believes **B** will choose B1 then **A** will choose A1 to gain 3; and then **B** will choose B2; and eventually both players will realize the difficulty of making a choice. So a more stable strategy is needed.

Some choices are *dominated* by others and can be eliminated: **A** will not choose A3 since either A1 or A2 will produce a better result, no matter what **B** chooses; **B** will not choose B3 since some mixtures of B1 and B2 will produce a better result, no matter what **A** chooses.

**A** can avoid having to make an expected payment of more than 1∕3 by choosing A1 with probability 1∕6 and A2 with probability 5∕6: The expected payoff for **A** would be 3 × (1∕6) − 1 × (5∕6) = −1∕3 in case **B** chose B1 and −2 × (1∕6) + 0 × (5∕6) = −1/3 in case **B** chose B2. Similarly, **B** can ensure an expected gain of at least 1/3, no matter what **A** chooses, by using a randomized strategy of choosing B1 with probability 1∕3 and B2 with probability 2∕3. These mixed minimax strategies are now stable and cannot be improved.
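The arithmetic behind these mixed strategies can be verified directly. A short Python check using exact fractions (names are illustrative; A3 and B3 are omitted because they are dominated):

```python
from fractions import Fraction as F

# Payoffs to A in the zero-sum example (dominated moves A3, B3 ignored).
payoff_A = {("A1", "B1"): F(3),  ("A1", "B2"): F(-2),
            ("A2", "B1"): F(-1), ("A2", "B2"): F(0)}
p = {"A1": F(1, 6), "A2": F(5, 6)}  # A's mixture
q = {"B1": F(1, 3), "B2": F(2, 3)}  # B's mixture

# Against either pure reply, each mixture yields the same expectation,
# so neither player can improve: the value of the game is -1/3 to A.
a_values = [sum(p[a] * payoff_A[(a, b)] for a in p) for b in ("B1", "B2")]
b_values = [sum(q[b] * payoff_A[(a, b)] for b in q) for a in ("A1", "A2")]
print(a_values, b_values)  # all four expectations equal -1/3
```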

Frequently, in game theory, **maximin** is distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. In a zero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain.

"Maximin" is a term commonly used for non-zero-sum games to describe the strategy which maximizes one's own minimum payoff. In non-zero-sum games, this is not generally the same as minimizing the opponent's maximum gain, nor the same as the Nash equilibrium strategy.

The minimax values are very important in the theory of repeated games. One of the central theorems in this theory, the folk theorem, relies on the minimax values.

In combinatorial game theory, there is a minimax algorithm for game solutions.

A **simple** version of the minimax *algorithm*, stated below, deals with games such as tic-tac-toe, where each player can win, lose, or draw.
If player A *can* win in one move, their best move is that winning move.
If player B knows that one move will lead to the situation where player A *can* win in one move, while another move will lead to the situation where player A can, at best, draw, then player B's best move is the one leading to a draw.
Late in the game, it's easy to see what the "best" move is.
The Minimax algorithm helps find the best move, by working backwards from the end of the game. At each step it assumes that player A is trying to **maximize** the chances of A winning, while on the next turn player B is trying to **minimize** the chances of A winning (i.e., to maximize B's own chances of winning).

A **minimax algorithm**^{[5]} is a recursive algorithm for choosing the next move in an n-player game, usually a two-player game. A value is associated with each position or state of the game. This value is computed by means of a position evaluation function and it indicates how good it would be for a player to reach that position. The player then makes the move that maximizes the minimum value of the position resulting from the opponent's possible following moves. If it is **A**'s turn to move, **A** gives a value to each of their legal moves.

A possible allocation method consists in assigning a certain win for **A** as +1 and for **B** as −1. This leads to combinatorial game theory as developed by John Horton Conway. An alternative is using a rule that if the result of a move is an immediate win for **A** it is assigned positive infinity and, if it is an immediate win for **B**, negative infinity. The value to **A** of any other move is the maximum of the values resulting from each of **B**'s possible replies. For this reason, **A** is called the *maximizing player* and **B** is called the *minimizing player*, hence the name *minimax algorithm*. The above algorithm will assign a value of positive or negative infinity to any position, since the value of every position will be the value of some final winning or losing position. However, this is only feasible near the very end of complicated games such as chess or go, since it is not computationally feasible to look ahead as far as the completion of the game except towards the end; instead, positions are given finite values as estimates of the degree of belief that they will lead to a win for one player or another.

This can be extended if we can supply a heuristic evaluation function which gives values to non-final game states without considering all possible following complete sequences. We can then limit the minimax algorithm to look only at a certain number of moves ahead. This number is called the "look-ahead", measured in "plies". For example, the chess computer Deep Blue (the first to beat a reigning world champion, Garry Kasparov) looked ahead at least 12 plies, then applied a heuristic evaluation function.^{[6]}

The algorithm can be thought of as exploring the nodes of a *game tree*. The *effective branching factor* of the tree is the average number of children of each node (i.e., the average number of legal moves in a position). The number of nodes to be explored usually increases exponentially with the number of plies (it is less than exponential if evaluating forced moves or repeated positions). The number of nodes to be explored for the analysis of a game is therefore approximately the branching factor raised to the power of the number of plies. It is therefore impractical to completely analyze games such as chess using the minimax algorithm.
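The growth described here is easy to quantify. A quick sketch (the branching factor of 35 for chess is a commonly quoted rough average, used only for illustration):

```python
# Full-width search visits roughly b**d nodes at depth d plies.
b = 35  # rough average branching factor often quoted for chess (illustrative)
for d in (2, 6, 12):
    print(f"depth {d:2d}: about {b ** d:.2e} nodes")
# At 12 plies this is already on the order of 10**18 nodes, which is why
# exhaustive minimax is impractical without pruning and evaluation cutoffs.
```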

The performance of the naïve minimax algorithm may be improved dramatically, without affecting the result, by the use of alpha-beta pruning. Other heuristic pruning methods can also be used, but not all of them are guaranteed to give the same result as the un-pruned search.

A naïve minimax algorithm may be trivially modified to additionally return an entire Principal Variation along with a minimax score.

The pseudocode for the depth limited minimax algorithm is given below.

```
function minimax(node, depth, maximizingPlayer) is
    if depth = 0 or node is a terminal node then
        return the heuristic value of node
    if maximizingPlayer then
        value := −∞
        for each child of node do
            value := max(value, minimax(child, depth − 1, FALSE))
        return value
    else (* minimizing player *)
        value := +∞
        for each child of node do
            value := min(value, minimax(child, depth − 1, TRUE))
        return value

(* Initial call *)
minimax(origin, depth, TRUE)
```

The minimax function returns a heuristic value for leaf nodes (terminal nodes and nodes at the maximum search depth). Non-leaf nodes inherit their value from a descendant leaf node. The heuristic value is a score measuring the favorability of the node for the maximizing player. Hence nodes resulting in a favorable outcome, such as a win, for the maximizing player have higher scores than nodes more favorable for the minimizing player. The heuristic value for a terminal (game-ending) leaf node is a score corresponding to a win, loss, or draw for the maximizing player. For non-terminal leaf nodes at the maximum search depth, an evaluation function estimates a heuristic value for the node. The quality of this estimate and the search depth determine the quality and accuracy of the final minimax result.
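The pseudocode translates almost line for line into runnable Python. In this sketch the game tree is a hypothetical hard-coded nested list (internal nodes are lists of children; leaves are heuristic scores from the maximizing player's point of view):

```python
import math

def minimax(node, depth, maximizing_player):
    """Depth-limited minimax over a nested-list game tree."""
    # Terminal node (a leaf score) or search horizon reached.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing_player:
        value = -math.inf
        for child in node:
            value = max(value, minimax(child, depth - 1, False))
        return value
    else:  # minimizing player
        value = math.inf
        for child in node:
            value = min(value, minimax(child, depth - 1, True))
        return value

# Maximizer to move at the root, minimizer at the next level.
tree = [[3, 5], [2, 9]]
print(minimax(tree, 2, True))  # max(min(3, 5), min(2, 9)) = 3
```

Since interior lists carry no heuristic score of their own in this toy encoding, call it with `depth` at least the height of the tree.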

Minimax treats the two players (the maximizing player and the minimizing player) separately in its code. Based on the observation that $\max(a, b) = -\min(-a, -b)$, minimax may often be simplified into the negamax algorithm.
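A sketch of that simplification (the nested-list tree format is hypothetical, chosen for illustration): negating the score at each ply lets one loop body serve both players.

```python
import math

def negamax(node, depth, color):
    """color is +1 for the maximizing player, -1 for the minimizing player."""
    if depth == 0 or not isinstance(node, list):
        return color * node  # leaves hold scores from the maximizer's view
    value = -math.inf
    for child in node:
        # max(a, b) = -min(-a, -b): flip sign and perspective each ply.
        value = max(value, -negamax(child, depth - 1, -color))
    return value

tree = [[3, 5], [2, 9]]
print(negamax(tree, 2, +1))  # 3, the same root value plain minimax returns
```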

Suppose the game being played only has a maximum of two possible moves per player each turn. The algorithm generates the tree on the right, where the circles represent the moves of the player running the algorithm (*maximizing player*), and squares represent the moves of the opponent (*minimizing player*). Because of the limitation of computation resources, as explained above, the tree is limited to a *look-ahead* of 4 moves.

The algorithm evaluates each *leaf node* using a heuristic evaluation function, obtaining the values shown. The moves where the *maximizing player* wins are assigned with positive infinity, while the moves that lead to a win of the *minimizing player* are assigned with negative infinity. At level 3, the algorithm will choose, for each node, the **smallest** of the *child node* values, and assign it to that same node (e.g. the node on the left will choose the minimum between "10" and "+∞", therefore assigning the value "10" to itself). The next step, in level 2, consists of choosing for each node the **largest** of the *child node* values. Once again, the values are assigned to each *parent node*. The algorithm continues evaluating the maximum and minimum values of the child nodes alternately until it reaches the *root node*, where it chooses the move with the largest value (represented in the figure with a blue arrow). This is the move that the player should make in order to *minimize* the *maximum* possible loss.

Minimax theory has been extended to decisions where there is no other player, but where the consequences of decisions depend on unknown facts. For example, deciding to prospect for minerals entails a cost which will be wasted if the minerals are not present, but will bring major rewards if they are. One approach is to treat this as a game against *nature* (see move by nature), and using a similar mindset as Murphy's law or resistentialism, take an approach which minimizes the maximum expected loss, using the same techniques as in the two-person zero-sum games.

In addition, expectiminimax trees have been developed, for two-player games in which chance (for example, dice) is a factor.

In classical statistical decision theory, we have an estimator $\delta$ that is used to estimate a parameter $\theta \in \Theta$. We also assume a risk function $R(\theta, \delta)$, usually specified as the integral of a loss function. In this framework, $\tilde{\delta}$ is called **minimax** if it satisfies

$$\sup_{\theta} R(\theta, \tilde{\delta}) = \inf_{\delta} \sup_{\theta} R(\theta, \delta).$$

An alternative criterion in the decision theoretic framework is the Bayes estimator in the presence of a prior distribution $\Pi$. An estimator is Bayes if it minimizes the *average* risk

$$\int_{\Theta} R(\theta, \delta) \, d\Pi(\theta).$$
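The contrast between the two criteria shows up already on a toy discrete problem (the numbers are entirely hypothetical): one estimator can have the best worst-case risk while another has the best average risk.

```python
from fractions import Fraction as F

# Risks R(theta, delta) of two candidate estimators at three parameter values.
risk = {
    "delta_1": [F(1), F(4), F(2)],  # worst case 4, average 7/3
    "delta_2": [F(3), F(3), F(3)],  # worst case 3, average 3
}

# Minimax criterion: minimize the supremum of the risk over theta.
minimax_est = min(risk, key=lambda d: max(risk[d]))

# Bayes criterion under a uniform prior: minimize the average risk.
prior = [F(1, 3)] * 3
bayes_est = min(risk, key=lambda d: sum(p * r for p, r in zip(prior, risk[d])))

print(minimax_est, bayes_est)  # delta_2 is minimax, delta_1 is Bayes
```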

A key feature of minimax decision making is being non-probabilistic: in contrast to decisions using expected value or expected utility, it makes no assumptions about the probabilities of various outcomes, just scenario analysis of what the possible outcomes are. It is thus robust to changes in the assumptions, as these other decision techniques are not. Various extensions of this non-probabilistic approach exist, notably minimax regret and Info-gap decision theory.

Further, minimax only requires ordinal measurement (that outcomes be compared and ranked), not *interval* measurements (that outcomes include "how much better or worse"), and returns ordinal data, using only the modeled outcomes: the conclusion of a minimax analysis is: "this strategy is minimax, as the worst case is (outcome), which is less bad than any other strategy". Compare to expected value analysis, whose conclusion is of the form: "this strategy yields E(*X*)=*n.*" Minimax thus can be used on ordinal data, and can be more transparent.

In philosophy, the term "maximin" is often used in the context of John Rawls's *A Theory of Justice,* where he refers to it (Rawls 1971, p. 152) in the context of The Difference Principle.
Rawls defined this principle as the rule which states that social and economic inequalities should be arranged so that "they are to be of the greatest benefit to the least-advantaged members of society".^{[7]}^{[8]}

1. Bacchus Barua, *Provincial Healthcare Index 2013*, Fraser Institute, January 2013 (see page 25).
2. *Turing and von Neumann*, Professor Raymond Flood, Gresham College (at 12:00).
3. Maschler, Michael; Solan, Eilon; Zamir, Shmuel (2013). *Game Theory*. Cambridge University Press. pp. 176–180. ISBN 9781107005488.
4. Osborne, Martin J.; Rubinstein, Ariel (1994). *A Course in Game Theory*. Cambridge, MA: MIT Press.
5. Russell, Stuart J.; Norvig, Peter (2003). *Artificial Intelligence: A Modern Approach* (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. pp. 163–171. ISBN 0-13-790395-2.
6. Hsu, Feng-Hsiung (1999). "IBM's Deep Blue Chess Grandmaster Chips". *IEEE Micro*. Los Alamitos, CA: IEEE Computer Society. **19** (2): 70–81. doi:10.1109/40.755469. "During the 1997 match, the software search extended the search to about 40 plies along the forcing lines, even though the nonextended search reached only about 12 plies."
7. Arrow, "Some Ordinalist-Utilitarian Notes on Rawls's Theory of Justice". *Journal of Philosophy* 70 (9) (May 1973), pp. 245–263.
8. Harsanyi, "Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls's Theory". *American Political Science Review* 69 (2) (June 1975), pp. 594–606.


- "Minimax principle", *Encyclopedia of Mathematics*, EMS Press, 2001 [1994]
- A visualization applet
- Maximin principle at the Dictionary of Philosophical Terms and Names
- Play a betting-and-bluffing game against a mixed minimax strategy
- Minimax at the Dictionary of Algorithms and Data Structures
- Minimax (with or without alpha-beta pruning) algorithm visualization: game-tree solving (Java applet), for balanced or unbalanced trees
- Minimax tutorial with a numerical solution platform
- Java implementation used in a checkers game
