question for everyone...

Is there a name for when an optimal strategy is avoided because the optimal strategy is easily defended against once the opponent knows you are using it in the first place?

Or the reverse, where someone might intentionally use a very poor strategy specifically because the opponent would never expect anyone to pick a poor strategy, and thus, at least as long as it's assumed it won't be used, it becomes a strong strategy?

@freemo

That would mean that it's not an optimal strategy, no?

This reasoning makes me think of probabilistic strategies: for some classes of games there is no best deterministic strategy, but there _is_ a best probabilistic one. See en.wikipedia.org/wiki/Strategy for something related.

@robryk Depends.

Might work better as an example.

Imagine a game of battleship, but where one player does the firing and the other controls the ships. So we are trying to find an optimal firing pattern vs an optimal ship placement.

By default there is no strategy for ship placement I can think of, other than making sure ships aren't adjacent. So you have a more or less random placement.

But if ship placement is random, then since there are more ship layouts that cover cells near the center of the board than near the edges, the optimal firing strategy would be to avoid the edges, with the 4 corner spaces being the least favorable targets to hit. So the optimal strategy here is to more or less focus away from the walls.
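
To make that counting argument concrete, here is a minimal sketch assuming a 10x10 board and a single length-3 ship (both are just illustrative choices, not standard-rules values). Cells covered by more legal placements are more likely to hold a ship under uniformly random placement:

```python
# Minimal sketch: count, for each cell of a 10x10 board, how many
# placements of a single length-3 ship cover it. (Board size and ship
# length are assumptions for illustration.)
BOARD = 10
SHIP_LEN = 3

counts = [[0] * BOARD for _ in range(BOARD)]

for r in range(BOARD):
    for c in range(BOARD):
        # Horizontal placements starting at (r, c)
        if c + SHIP_LEN <= BOARD:
            for k in range(SHIP_LEN):
                counts[r][c + k] += 1
        # Vertical placements starting at (r, c)
        if r + SHIP_LEN <= BOARD:
            for k in range(SHIP_LEN):
                counts[r + k][c] += 1

print("corner (0,0):", counts[0][0])   # covered by 2 placements
print("edge   (0,5):", counts[0][5])   # covered by 4 placements
print("center (5,5):", counts[5][5])   # covered by 6 placements
```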

It should also be noted that while random placement does tend to cover squares near the center more often, there is a strategic reason to not prefer the walls and to intentionally place ships near the center too. That is, once a strike on a ship is made, it is easier to guess the orientation of the ship and destroy it for ships next to walls than it is for ships away from walls. So for this reason as well it is generally optimal to place ships away from walls.
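
A rough way to see that wall effect is to count how many follow-up probes are even possible after a first hit. This sketch assumes a 10x10 board and only looks at orthogonal neighbours:

```python
# Minimal sketch: after a hit at (r, c), the shooter probes adjacent cells
# to learn the ship's orientation. Near a wall there are fewer neighbours
# to rule out, so the orientation is pinned down sooner. (10x10 board is
# an assumption for illustration.)
BOARD = 10

def followup_shots(r, c):
    """In-bounds orthogonal neighbours of (r, c) the shooter may need to probe."""
    neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(nr, nc) for nr, nc in neighbours if 0 <= nr < BOARD and 0 <= nc < BOARD]

print("hit in corner (0,0):", len(followup_shots(0, 0)))  # 2 directions to try
print("hit on edge   (0,5):", len(followup_shots(0, 5)))  # 3 directions to try
print("hit mid-board (5,5):", len(followup_shots(5, 5)))  # 4 directions to try
```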

Anyway, as you can see, this chain of logic can go on, as per the premise.


@freemo

I think this is exactly the kind of reasoning described in the wiki article for why mixed strategies are a thing (see the example there about choosing how to pitch in baseball).
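
As a sketch of how that works, here is the usual indifference-condition calculation for a 2x2 zero-sum pitcher-vs-batter game. The hit probabilities are invented for illustration (not taken from the Wikipedia example); the point is that each player's optimal mix makes the other player indifferent between their options:

```python
# Minimal sketch of the indifference-condition calculation for a 2x2
# zero-sum "pitcher vs batter" game. The hit probabilities below are
# invented for illustration only.
#
# Rows: batter guesses fastball / curveball.
# Columns: pitcher throws fastball / curveball.
# Entries: probability the batter gets a hit (batter maximises, pitcher minimises).
hit_prob = [
    [0.35, 0.15],  # batter guesses fastball
    [0.20, 0.30],  # batter guesses curveball
]

# Pitcher throws a fastball with probability p chosen so the batter gains
# nothing by favouring either guess (the indifference condition):
#   0.35*p + 0.15*(1-p) == 0.20*p + 0.30*(1-p)
p = (hit_prob[1][1] - hit_prob[0][1]) / (
    hit_prob[0][0] - hit_prob[0][1] - hit_prob[1][0] + hit_prob[1][1]
)

# Batter guesses fastball with probability q making the pitcher indifferent:
#   0.35*q + 0.20*(1-q) == 0.15*q + 0.30*(1-q)
q = (hit_prob[1][1] - hit_prob[1][0]) / (
    hit_prob[0][0] - hit_prob[0][1] - hit_prob[1][0] + hit_prob[1][1]
)

value = hit_prob[0][0] * p + hit_prob[0][1] * (1 - p)
print(f"pitcher throws fastball with p = {p:.2f}")   # 0.50
print(f"batter guesses fastball with q = {q:.2f}")   # 0.33
print(f"expected hit probability = {value:.2f}")     # 0.25
```

Against that pitcher mix, neither batter guess does better than the other, which is exactly why sticking to the mix can't be exploited.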

The thing that might be interesting is also exploring what "optimal" means. For simplicity let's assume the game rules themselves don't randomize anything and that the game has a binary outcome (A wins or B wins, no scoring).

In battleships, if each player were forced to be deterministic, then for each strategy for player A there is a strategy for player B that defeats it, and vice versa. This means that the standard approach of "an optimal strategy is one that wins whenever it's possible to win" would imply there are no optimal strategies at all: after all, it's always possible to win, but no strategy wins always. This is still true if you admit nondeterministic strategies: it's possible to win against each deterministic strategy with p=1, but there's no strategy that does that for all deterministic counter-strategies.
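
A toy illustration of that non-existence claim, using matching pennies as a stand-in for battleships since it fits in a few lines:

```python
# Toy illustration: for every deterministic strategy of player A there is
# a deterministic strategy of player B that defeats it, so no single
# strategy "wins whenever it's possible to win".
MOVES = ["heads", "tails"]

def a_wins(a_move, b_move):
    # A wins when the coins match; B wins otherwise.
    return a_move == b_move

for a_move in MOVES:
    beaters = [b_move for b_move in MOVES if not a_wins(a_move, b_move)]
    print(f"A plays {a_move!r}: defeated by B playing {beaters}")
# Every line lists a non-empty set of defeating responses, and symmetrically
# every deterministic B strategy is defeated by some A strategy.
```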

The reasoning you described is similar to looking at the changes you'd make in your strategy in response to knowing the opponent's, then doing the same for your opponent, and iterating this. So, it seems that you describe looking for a fixpoint of that: a pair of strategies s.t. neither you nor the opponent would want to change theirs upon learning what the other player's strategy is. Note that this necessarily speaks about pairs of strategies (IIRC there are games where you have multiple such pairs, but I sadly don't recall examples). It's been shown that every game with finite sets of moves has at least one such pair.
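
For what it's worth, such a pair is what game theory calls a Nash equilibrium, and the existence result for finite games (with mixed strategies allowed) is Nash's theorem. Here is a small sketch of the fixpoint condition itself, checked for rock-paper-scissors, where the uniform mix against the uniform mix is such a pair:

```python
# Minimal sketch of the fixpoint condition (a Nash equilibrium) for
# rock-paper-scissors: with both players mixing uniformly, neither player
# can improve their expected payoff by switching to any pure strategy.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(my_move, their_move):
    # +1 win, -1 loss, 0 draw, from the first player's point of view.
    if my_move == their_move:
        return 0
    return 1 if BEATS[my_move] == their_move else -1

def expected_payoff(my_mix, their_mix):
    return sum(
        my_mix[m] * their_mix[t] * payoff(m, t) for m in MOVES for t in MOVES
    )

uniform = {m: 1 / 3 for m in MOVES}
baseline = expected_payoff(uniform, uniform)

# Fixpoint check: no pure-strategy deviation improves on the baseline
# (and by linearity no mixed deviation can either).
for move in MOVES:
    pure = {m: 1.0 if m == move else 0.0 for m in MOVES}
    assert expected_payoff(pure, uniform) <= baseline + 1e-12

print("uniform vs uniform is a fixpoint: no profitable deviation exists")
```

By symmetry the same check holds for the other player, so neither side gains by changing their strategy upon learning the other's.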

NB if you are after such pairs of strategies, then instead of iterating it's often easier to directly look at the fixpoint condition and infer what such a pair of strategies would need to satisfy for both players not to wish to amend theirs.
