Monday, January 28, 2019

Devil’s Roulette – Maximum Surprise Expected!

"Here is the wisdom. Let him that hath understanding count the number of the wild beast: for it is the number of a game; and its number is three-score and six."
Suppose you're in a casino playing roulette, and that you can only bet on Black or Red. The roulette wheel has 18 red numbers, 18 black ones, and a green zero. If you bet on a color and a number of that color comes up, you win the size of your bet, but if another number comes up, you lose your bet. You can't bet on zero, so whenever the zero comes up, the casino wins everything.


Once you start playing, you're bound to lose on average. You can design clever systems that either give you a good chance of winning a small amount at a small risk of complete disaster (double your bet until you win), or the other way around (double until you lose). But the presence of the zero tilts the whole thing a tiny bit in the casino's favour so that you always lose in the long run.

Now into the casino walks the devil himself and gives you an offer. If you lend him your soul, you may ask for the exact number of red numbers, black numbers, and zeros in the next $N$ rounds, for any one number $N$. The devil will then look into the future and tell you the answer.

But there's a catch (what did you expect, bargaining with the devil?). Once you know those three numbers, it will turn out that whatever you do in the following $N$ rounds, the roulette ball will bounce in a way that gives you maximal bad luck. 

Suppose for instance that you ask for seven rounds and the devil tells you there will be three red numbers, three black, and one zero. If you wait for the last of those seven rounds, hoping to then double your stack when you know the outcome, then that's going to be when the zero comes up. But if you start betting on one color, then sure enough the other color will come up (and the zero will still be left).

Notice though that "bad luck" doesn't necessarily mean you lose in the current round. It means that the outcome will be what's worst for you overall. If you bet 10 out of 100 chips on black when there's a black number and a zero left, you will win (because otherwise you would double your stack in the last round).   

Let's decide what the rules are and what we're trying to optimize. We start the game with a bankroll that we can think of as a large stack of chips, say a million. We assume that at any time we can bet any fraction of what we have, like one third of our stack, even though a million isn't divisible by 3. We can't buy more chips during the game, so the amount we have at each round is a limit on how much we can bet. We're trying to finish with as much as possible after the $N$ rounds, and the payoff of the game is our final bankroll measured in units of the initial stack.

We can choose the number $N$ as we please, and the number of red, black, and zeros in the next $N$ rounds will be determined by the randomness of the roulette wheel with no supernatural intervention by the devil. If there are $N$ zeros, we can't win anything (so the payoff is 1 as we never make a bet), and if there are $N$ red or $N$ black numbers, we can double our stack $N$ times ending up on $2^N$, so it's not that interesting to ask about best- or worst-case scenarios for this stage.

Next, knowing $N$ and the number of red, black, and zeros, we are assuming a worst-case (also known as adversarial) scenario. We can imagine that the croupier gives the devil a set of $N$ cards of the colors red, black and green with the given distribution, and that we play against the devil. In each of $N$ rounds we make a bet, after which the devil plays one of the cards to decide the outcome. That card is then removed, and we play until the cards are exhausted. This stage of the game involves no probability. It's a two-person game of perfect information like chess.  


How to bet

A first observation is that unless there are only zeros, we can always win something. Suppose for instance that $N=6$ and that there are five zeros and one red. That's a pretty bad outcome, but still, if we have a stack of 63 chips, we can make that 64 by the double-until-you-win strategy: First we bet 1 chip on Red. If we win we're done and there are only zeros left, so suppose we lose. The devil has now spent one of the zeros, and in the next round we bet 2 on Red. Again if we win we're done, so assume we lose. Then we bet 4 in the next round and so on. In the penultimate round we have 48 chips and bet 16. If we lose again, the devil has spent the last zero and we must win in the last round.
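That worst case is easy to sketch in a few lines of Python (the function name is my own): the devil spends all five zeros first, so we lose bets of 1, 2, 4, ... and finally win the doubled bet in the last round.

```python
def double_until_win(bankroll, rounds):
    """Double-until-you-win against rounds - 1 zeros followed by one red:
    lose bets of 1, 2, 4, ... and then win the final, doubled bet."""
    bet = 1
    for _ in range(rounds - 1):   # the losing rounds (the zeros)
        bankroll -= bet
        bet *= 2
    return bankroll + bet         # the guaranteed win in the last round

print(double_until_win(63, 6))    # 64
```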

Here's a challenge that you might want to try before reading on: Suppose $N=5$ and you're told that there will be 3 red and 2 black numbers. If you start out with 100 chips, how can you make sure to get more than 300? And if instead there are 3 red numbers and 2 zeros, how can you double your stack?

Spoiler alert, I'm about to tell you how to bet optimally! First let's change the game in a way that doesn't actually matter, but that simplifies the analysis. Instead of green cards for the zeros, let's make them wild cards (wild beasts?) that are red on one side and black on the other, so that when playing such a card, the devil can choose between counting it as red or as black.

As long as we only bet on one color, that's going to be just as good for the devil, since he can choose the other color. And it doesn't make much sense in the original setting for us to place a bet on red and another bet on black, since it will be at least as good to cancel chips bet on red against chips bet on black and only bet the difference on one of the colors. But with the red/black wild cards instead of the green ones, we can assume that we always bet everything we have, and that we only choose how much of our current bankroll to bet on red and how much to bet on black. 

Now we can use a method described by Peter Winkler (in a red/black setting without zeros) in his wonderful paper Games People Don't Play. His attempt to trace that problem to its origin seems to have reached the so-called grapevine (Update: It's actually discussed on pages 6-7 of these notes by Thomas Cover from 1974, which are cited in a different section of Peter Winkler's paper!). Suppose for instance that $N=10$ and that you learn that there's going to be 5 red, 3 black, and 2 zeros. Using the red/black wild cards, this means that whatever the devil does, in the end there will be 10 cards on the table of which 5, 6, or 7 will have a red side face up, and 3, 4, or 5 will be black. The total number of sequences of red/black that satisfy that constraint, like for instance R-B-B-B-R-R-B-R-R-R and B-B-R-R-R-R-B-R-R-R, turns out to be (using binomial coefficients)
\[\binom{10}3 + \binom{10}4 + \binom{10}5 = 120 + 210 + 252 = 582.\]
So we hire 582 gnomes to do the betting for us. Each gnome is assigned one of the R-B-sequences and a bag of money. Their task is to bet everything they have in each round on their own particular sequence. Clearly one of them will win whatever the devil does, and the one who wins will double their money 10 times, resulting in a payoff of 1024 times what they started with. If we distribute our initial bankroll evenly between the 582 gnomes, that means we can guarantee a final payoff of \[ \frac{1024}{582} \approx 1.759.\]
To see how to actually bet (in the first round say), let's find out how many gnomes will bet on red in the first round and how many will bet on black. It turns out (again summing some binomial coefficients) that of the 582 valid R-B-sequences, 336 start with an R and 246 start with a B. Consequently, 336 of our gnomes will bet on Red in the first round and 246 will bet on Black. To emulate this in the original game, we cancel the 246 bets on Black against equally many bets on Red, and in the end place $336 - 246 = 90$ out of every 582 chips on Red. So your first move is to bet a fraction of \[\frac{90}{582} = \frac{15}{97} \approx 0.155\] of your bankroll on Red.
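Both counts are easy to reproduce; here's a quick sanity check (a sketch in Python, variable names my own) of the number 582, the guaranteed payoff, and the first-round bet fraction:

```python
from math import comb

N, r, b = 10, 5, 3   # 10 rounds: 5 red, 3 black, 2 zeros (wild cards)

# A red/black sequence is consistent if its number of blacks is between
# b and N - r (the wild cards absorb the difference).
total = sum(comb(N, k) for k in range(b, N - r + 1))

# Sequences starting with Red need k blacks among the remaining N - 1 cards;
# sequences starting with Black need k - 1.
start_red = sum(comb(N - 1, k) for k in range(b, N - r + 1))
start_black = sum(comb(N - 1, k - 1) for k in range(b, N - r + 1))

print(total, 2**N / total)                # 582, payoff ≈ 1.759
print(start_red, start_black)             # 336 246
print((start_red - start_black) / total)  # first-round bet fraction ≈ 0.155
```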

After the first round, all the losing gnomes are out of the game, and we make the same calculation again for the second round, now using only the remaining gnomes/R-B-sequences.

Not only does this guarantee a payoff of, in this case, 1024/582, but it's also best possible. It turns out that whatever strategy we use can be emulated by a set of gnomes, and that the only thing we can vary is how we distribute our chips between them from the beginning. The devil will always let the poorest of our gnomes win, so the best we can do is to give them the same amount. 


Payoff as a surprise

The payoff of $1024/582$ in our example can be interpreted as the inverse of a probability. The number 582 was the number of ways we can have 3, 4, or 5 Black in a sequence of Red-Black of length 10, so the probability of that happening if the sequence was decided by fair coin flips (or a roulette without zero) is \[\frac{582}{1024}.\]
In general, our payoff with $r$ red numbers, $b$ black numbers, and $z$ zeros is the inverse of the probability that a sequence of length $r+b+z$ of Red/Black generated by a roulette without zeros (let's call that a fair roulette) has at least $r$ Red and at least $b$ Black. The inverse of the probability is what we might call the surprise of an event - it's larger the more unexpected that event is (I'm not sure if there's an established name for the reciprocal of a probability, but it's sort of the same thing as the so-called odds). So our payoff is always the surprise of the hypothetical event that a fair roulette played for the same number of rounds would yield an outcome (in terms of number of red/black) consistent with the one we have (in the sense that the devil could use his wild cards to get the same red/black distribution as the fair roulette).

Let's look at another example: Suppose we give the devil a standard pack of 26 red cards, 26 black, and two jokers (to use as wild cards). What's our payoff going to be?

We can compute that as the surprise of the event that a fair roulette, played 54 times, would yield somewhere from 26 to 28 red numbers (a red/black distribution of 26-28, 27-27 or 28-26). Again the binomial coefficients give us the answer: The payoff is going to be
\[\frac{2^{54}}{\binom{54}{26}+\binom{54}{27}+\binom{54}{28}} \approx 3.159,\] reflecting the fact that the probability of 26, 27 or 28 red is roughly one in 3.159.
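The guaranteed payoff depends only on $r$, $b$, and $z$, so it's worth packaging as a little function (a sketch; the name is my own). It also answers the two challenges from earlier: with 3 red and 2 black, 100 chips become 320, and with 3 red and 2 zeros the stack is exactly doubled.

```python
from math import comb

def payoff(r, b, z):
    """Guaranteed payoff with r red, b black and z zeros (wild cards):
    the surprise of a fair roulette showing between b and b + z blacks
    in r + b + z rounds."""
    n = r + b + z
    return 2**n / sum(comb(n, k) for k in range(b, n - r + 1))

print(payoff(26, 26, 2))   # ≈ 3.159 (the 54-card example)
print(payoff(3, 2, 0))     # 3.2: the first challenge, 100 chips become 320
print(payoff(3, 0, 2))     # 2.0: the second challenge, the stack is doubled
```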


By the way, the Chernoff bound

By the way (that's how in academia we signal that we're about to say something important), the connection also goes the other way: We can use the devil's roulette to establish that certain events are extremely unlikely. Suppose for instance that you flip a coin a million times and the outcome is heads a little more than 505,000 times and tails not even 495,000 times. Is that normal? Or imagine that we're looking at some statistical data and asking whether deviations from what we expect can be explained by good old 50-50 randomness or not. Even though the "expected" number of heads with a fair coin would be 500,000, surely we don't expect exactly that number. Should we be suspicious because we're off by 0.5%?

Let's play Devil's Roulette and give the devil 505,000 red and 495,000 wild cards. Since the wild cards can represent both red and black, this distribution represents every outcome with 505,000 or more Red. And let's consistently bet, throughout the game, 1% of our current bankroll on Red. This isn't the optimal strategy, but it's good enough.

If the devil plays sensibly, we'll win 505,000 times and lose 495,000 times. Every time we win, our bankroll increases by a factor $1.01$, and every time we lose, it decreases by a factor $0.99$. No matter in which order this happens, we end up with a payoff of
\[ 1.01^{505,000} \cdot 0.99^{495,000} \geq 5\cdot 10^{21}.\] We didn't even play optimally, and yet we secured a payoff of fifty round trips to Alpha Centauri measured in millimeters, or almost the weight of the earth in tonnes (wait, the latter somehow feels less impressive than the former!?).
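The bound is easy to check numerically (working with logarithms, since the intermediate powers overflow floating point). Incidentally, among constant-fraction strategies the payoff $(1+f)^{\text{wins}}(1-f)^{\text{losses}}$ is maximized at $f = (\text{wins}-\text{losses})/(\text{wins}+\text{losses})$, a Kelly-type calculation, and here that happens to be exactly 1%. A sketch (function name my own):

```python
from math import exp, log

def constant_fraction_payoff(wins, losses, f):
    """Payoff of always betting the fraction f of the bankroll on Red:
    (1 + f)**wins * (1 - f)**losses, computed via logarithms."""
    return exp(wins * log(1 + f) + losses * log(1 - f))

# 505,000 red and 495,000 wild cards, betting 1% throughout:
print(constant_fraction_payoff(505_000, 495_000, 0.01))   # ≈ 5.2e21
# The sanity check with 500,500 red and 499,500 wild cards:
print(constant_fraction_payoff(500_500, 499_500, 0.001))  # ≈ 1.65
```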

This means that the probability of getting 505,000 or more heads in a sequence of a million fair coin flips is at most 1 in $5\cdot 10^{21}$. So yes, we should probably be suspicious.

This sort of inequality is known in probability theory by various combinations of the names Bernstein, Chernoff, Hoeffding, and Azuma. I dare say that thinking about it in terms of roulette is a slight improvement on the standard proofs. There's a parameter that needs to be optimized, but in our setting that's just what fraction to bet in every round.

As a sanity check on the method, if instead we give the devil 500,500 red and 499,500 wild cards, the best we can do with a strategy of betting a constant fraction turns out to be around 1.65 times the money. This gives no new information, since we already know that the probability of more Red than Black is at most 50%. And indeed, being off by 500 from the mean here is not very unlikely.


How to choose $N$: Maximum expected surprise

If it wasn't for the zero, the analysis would be simple: The larger the value of $N$, the more we would tend to win. If $N=100$ for example, we would end up with on average 101 times our initial bankroll. At first sight that calculation looks complicated: if the devil gets 58 red cards and 42 black ones, say, then our payoff is the inverse of the probability of 58 red and 42 black in 100 rounds of roulette-without-zero, and that's just one term in an enormous sum. But the probability of each scenario exactly cancels the payoff we get from it, so in the end we're summing 101 terms, each of which is equal to 1!

This insight can be summarized in the following theorem:
The expected surprise of any random variable is its number of possible outcomes.
The surprise of an event is also equal to the expected number of times you have to try again until the same thing happens. For instance, if you repeatedly flip a coin, the expected number of times until you get the same outcome again as the first time is 2, even if the coin is biased. Or maybe it's 3...?

Anyway, when there are zeros, it turns out we shouldn't pick $N$ too large. If $N$ is large, then most likely the number of red, black, and zeros will be relatively close to their expected values of $\frac{18N}{37}$, $\frac{18N}{37}$ and $\frac{N}{37}$ respectively.

The probability that $N$ rounds of a roulette wheel without a zero would yield at least $\frac{18N}{37}$ red numbers and at the same time at least $\frac{18N}{37}$ black ones is going to be very close to 100% when $N$ is large. Since our payoff is the reciprocal of this, we will most likely finish with only a microscopic amount more than what we started with.

It's true that we might win a fortune if we're lucky, so to establish rigorously that the average payoff tends to 1 as $N$ tends to infinity will require a careful argument and some calculation. So let's not do that here.

The expected payoff for a given $N$ is, in a sense, how surprised we expect to be by the outcome of $N$ rounds of roulette (well, by a fair roulette being consistent with it in terms of the wild cards). If there are too few rounds, too few things can happen for anything to be surprising, but if there are too many, then most of the time each color will appear with about its expected frequency.

It's fairly easy to let the computer calculate the average payoff for various values of $N$.
\[
\begin{array}{cc}
N & \text{Payoff} \\
1 & 1.97 \\
2 & 2.91 \\
3 & 3.81 \\
4 & 4.69  \\
5 & 5.52 \\
6 & 6.33  \\
7  & 7.11 \\
8 & 7.85 \\
9  & 8.58 \\
10  & 9.27 \\
\end{array}
\]
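For what it's worth, here's a sketch of that computation (function names are my own): average the guaranteed payoff over the multinomial distribution of red, black, and zero counts in $N$ rounds of a 37-pocket roulette.

```python
from math import comb

def payoff(r, b, z):
    """Guaranteed payoff with r red, b black and z zero cards."""
    n = r + b + z
    return 2**n / sum(comb(n, k) for k in range(b, n - r + 1))

def expected_payoff(N):
    """Average payoff over the multinomial distribution of (r, b, z)
    for N rounds of a roulette with 18 red, 18 black and one zero."""
    p_red = p_black = 18 / 37
    p_zero = 1 / 37
    return sum(comb(N, r) * comb(N - r, b)
               * p_red**r * p_black**b * p_zero**(N - r - b)
               * payoff(r, b, N - r - b)
               for r in range(N + 1) for b in range(N + 1 - r))

print([round(expected_payoff(N), 2) for N in range(1, 11)])  # matches the table
```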
So far so good. I said before that if there aren't any zeros, the average payoff is $N+1$, and these numbers seem to reflect that, being a little smaller due to the possibility of zeros.

And when we take the calculation a little further, we reach a peak average payoff at $N=66$, after which it decreases. At $N=311$, it's again down to less than 10 times the initial bankroll.
\[
\begin{array}{cc}
N & \text{Payoff} \\
\vdots & \vdots \\
61 & 22.6204 \\
62 & 22.6403 \\
63 & 22.6558 \\
64 & 22.6668  \\
65 & 22.6737 \\
66 & 22.6765  \\
67 & 22.6755 \\
68 & 22.6707 \\
69 & 22.6624 \\
70 & 22.6506 \\
\vdots & \vdots \\
311 & 9.977 \\
\vdots & \vdots \\
\end{array}
\]

Number of the beast

The number 666 is mentioned in the Bible (Revelation 13:18) and is traditionally associated with the devil, but also with roulette. The sum of all the numbers on the roulette wheel is 666, and its inventor François Blanc is said to have made a deal with the devil to obtain the secrets of the game. There is (of course!) a Numberphile video about this!

So 666 has to do with Antichrist and gambling and sin. But the number 66 is pretty badass too in a more modern way. It's associated with the Wild West and getting kicks from jazz music. Dangerous stuff.

And whenever you play roulette with the devil, play exactly 66 times!




Saturday, January 5, 2019

Shed: Another card game

Here's another card game I sort of invented. I say sort of, because it's really just a simplified and pure form of a game that already exists.

There's a whole family of shedding-type or beating games where the players take turns playing to a pile with the goal of getting rid of their cards. There are various constraints on the cards played, requiring you to somehow beat or match the previous card. If you can't play (or don't want to), you have to pick up cards, either from the pile or from a talon. Games of this type include Crazy Eights, Uno, Cheat, Bluffstopp (those last two are different games by the way), Vändtia (Turning Tens), Silltunna (Herring Barrel) and many more.

If you have my type of curiosity, you'll wonder how simple such a game can be and still be nontrivial.

So let's remove the special cards, forget about the different suits, and just have two people play with a deck of cards numbered from 1 to some number N. When it's your turn, you either play a higher card than the previous one, or pick up the whole pile. That's basically it. There is no talon and therefore perfect information. Your opponent has precisely the cards that are not in your hand or in the pile.

In traditional shedding games you normally win as soon as you get rid of your last card, but let's change that slightly in order to continue the game to its logical conclusion: Let's say you win only when your opponent picks up everything so that they have all the cards on their hand. That sort of makes it a misère game, since you win when it's your turn and you can't play.

We'll introduce another twist to the game in a moment, but let's start from just these rules and assume that the cards are somehow dealt randomly. Since this is the simplest shedding game I can think of, it seems fitting to call it Shed. Actually what we're discussing now is what I will eventually call Shed Endgames, the part coming before the endgame having to do with that twist.

But first things first. Here's an example showing that you sometimes want to pick up the pile even if you have a card that's high enough to play.

Suppose N = 5 and you deal the cards 1, 3, 4 to your opponent and 2, 5 to yourself. Your opponent starts with the 3. By the way, a note to the mathematicians out there who may otherwise be confused: You cannot pick up the pile if it's empty. In that case you have to play a card, but you can play any card you like (and if your own hand is empty too, that means you already won!).


Anyway, now it's your turn, and you can play the 5 or pick up the 3. It's not hard to see that you lose if you play, but win if you pick up. If you play the 5, your opponent picks up the 3 and the 5, and you play your last card, the 2. Then your opponent plays the 3, you pick up the 2 and the 3, and your opponent plays the following cards in the order 1, 4, 5 no matter what you do.

If on the other hand you pick up the 3 in the first place, play might go (with a notation I just invented, X meaning pick up):

3 - X, 1 - 2, 4 - 5, X - 3, 4 - X, 1 - 3, 5 - X, 2 - 3, X - 1, 2 - 4, X - 5, and you win.
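Lines like these can be checked by computer with a small game-tree search. Below is a sketch (the state encoding and function names are my own): a position is won within d plies if some legal move leaves the opponent in a position lost within d − 1 plies. Both claims from the example check out: after your opponent leads the 3, playing the 5 hands them a forced win, while picking up leaves them in a forced loss.

```python
from functools import lru_cache

# Cards are integers 1..N. A position is (mover's hand, opponent's hand,
# pile below the top card, top card or None), hands and pile as frozensets.

def moves(hand, top):
    """Legal moves: play a card strictly higher than the top,
    or pick up a non-empty pile."""
    ms = [('play', c) for c in hand if top is None or c > top]
    if top is not None:
        ms.append(('pickup',))
    return ms

def apply_move(hand, other, pile, top, move):
    """Apply a move; return the position as seen by the next player."""
    if move[0] == 'pickup':
        return other, hand | pile | {top}, frozenset(), None
    c = move[1]
    extra = frozenset() if top is None else frozenset({top})
    return other, hand - {c}, pile | extra, c

@lru_cache(maxsize=None)
def wins_within(hand, other, pile, top, d):
    """Can the player to move force a win in at most d plies?"""
    if not hand and top is None:   # the opponent holds every card: we won
        return True
    if d == 0:
        return False
    return any(loses_within(*apply_move(hand, other, pile, top, m), d - 1)
               for m in moves(hand, top))

@lru_cache(maxsize=None)
def loses_within(hand, other, pile, top, d):
    """Is the player to move forced to lose in at most d plies?"""
    if not hand and top is None:
        return False
    if d == 0:
        return False
    return all(wins_within(*apply_move(hand, other, pile, top, m), d - 1)
               for m in moves(hand, top))

# Opponent holds 1, 3, 4 and leads the 3 against your 2, 5 (N = 5):
after_play_5 = (frozenset({1, 4}), frozenset({2}), frozenset({3}), 5)
after_pickup = (frozenset({1, 4}), frozenset({2, 3, 5}), frozenset(), None)
print(wins_within(*after_play_5, 100))   # True: playing the 5 loses
print(loses_within(*after_pickup, 100))  # True: picking up the 3 wins
```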

If you try this game a few times with a small number of cards (say up to 13, one suit of a standard pack), you'll notice that most of the time the game is an easy win for the player who happens to have the better cards. If you have good enough cards, they will almost automatically get better. You will enter a good spiral where your opponent can only choose between picking up your bad cards or giving you their good ones. There are relatively few card distributions that are "balanced" enough for the game to be interesting.

We'll return to this issue in a moment, but before we continue, let's formulate some mathematical questions about this game:

Q1. Are there card distributions where the game is drawn under optimal play? In other words, are there hands where none of the players can force a win? The answer, as revealed by a small computer hack, is yes, but the smallest N for which this is possible is 10. It will happen for instance if you deal 1, 2, 6, 7, 8 to your opponent and 3, 4, 5, 9, 10 to yourself (following the convention that the person who didn't deal plays first).



An example of optimal play from these hands is  1 - 3, 6 - 9, X - 4, 6 - X, 1 - 4, 7 - X, 2 - 4, 8 - X, 3 - 4, 9 - 10, X - 5, X, and we are back to the original position but with the roles of the players reversed: You now have the hand 1, 2, 6, 7, 8, and it's your turn.

Q2. What are the asymptotic probabilities of dealer winning, draw, and first hand winning respectively? I'm assuming here that the cards are distributed with equal probabilities on all the 2 to the power of N different possibilities (By the way, how can you achieve this in practice with an Uno deck? Answer: you riffle-shuffle and then distribute the cards based on their orientation, which you can see even for the symmetric digits by looking at the "shadow"). One hypothesis is that the drawn positions are rare, of asymptotic probability zero. Another is that the advantage of playing first gets smaller as the number of cards increases, and that asymptotically the dealer wins 50% of the time and the first hand wins 50% of the time. But I have no idea how to try to prove any of this.

One thing I can prove though is that a player holding the four highest cards (that we may call the ace, king, queen and jack) can force a win. To demonstrate this, it suffices to show that a player holding only the fifth highest card (the ten) and playing first will lose. This can be checked for some reasonable values of N like 10 and 11, and one will see that the method is the same and works for arbitrary N. If you have the four highest cards and any other combination, you can pick up everything until your opponent has only one card left, and in the worst case that's the 10.

This means that the probability of a certain player winning a random deal is at least 1/16, the probability that that player holds the four highest cards. Counting both players, the probability of a drawn position is then at most 1 - 2/16 = 7/8, and in particular does not tend to 100%.

On the other hand I cannot even construct an infinite family of drawn positions. There are draws when N = 10 and when N = 12, but for all I know, drawn positions might not exist for any larger N.

Q3. How many moves can it take, at most, to force a win in the N-card game (as a function of N)? What do the hardest hands look like? And is there a simple mathematical solution to the game after all? Again I have no idea.

The twist: an open talon

As I mentioned earlier, there is a tendency for moderately good hands to almost automatically get better, quickly reaching a point where they "play themselves". The game will not be very exciting if nine times out of ten you can tell from a glance at your cards who is winning. Therefore we'd like to modify the game by introducing some other mechanism like a talon (maybe you can draw a card from the talon instead of playing) or some protocol of the type I-split-you-choose.

An idea that comes to mind is to start the game with all cards face up in an open talon. When it's your turn, you can play a card of your choice either from your hand or from the talon. The winning condition is still that your opponent should have all the cards on their own hand. You don't want to play the strong cards from the talon too early, because your opponent will get an advantage by simply picking them up. So it seems the game should now have a natural tendency to lead to the balanced and interesting positions. At the same time it removes the random element, since there is now a natural starting position.

From now on, this is the game that we call Shed, and the final phase when (if) the talon is exhausted is the endgame.

Perhaps surprisingly, Shed seems to be quite sharp and complicated. One might a priori expect there to be a simple strategy that draws or that wins for a particular player (first or second), or perhaps that the outcome would depend on the parity of the number of cards. But there seems to be no such simple pattern. Shed with an N-card deck is a first-player win for N = 1, 4, 7, 10, 11, 13, 14 and 15, and a second-player win for N = 2, 3, 5, 6, 8, 9 and 12. The number of moves that it takes (counting the moves of the winning player) to force a win for N = 1,...,15 is 1, 2, 6, 8, 14, 16, 20, 30, 36, 45, 49, 58, 74, 68 and 91.

By the way, with a talon there are drawn positions already for N = 6 (though the starting position is not one of them). The second player wins under optimal play, but if the game starts 1 - 4 (correct), X - 2?, we reach a drawn position (the second player should have played the 3 instead in the second move). With optimal play from this point on, the game continues 5 - X, 1 - 6, X - 2, 4 - X, 1 - 4, X - 2, 4 - X, and so on. All those moves are unique, anything else loses, and the 3 stays in the talon forever.

There are some patterns in the data: It seems at first that when N is divisible by 3, the game is a second-player win. Moreover it appears that the second player should pick up the first card if the first player starts with a card in the top two thirds, but play something higher if the first card is in the lower third. This is true for N = 3, 6, 9 and 12. But the pattern is broken for N = 15, when the first player wins by starting with the 5 and then picking up even if the second player just plays the 6.

When N = 3k+1, the game seems to be a first-player win with the unique winning first move of playing the card k+1 from the talon (so that there remain exactly twice as many higher cards as lower cards). But it's hard to see a clear pattern in the following strategy, so maybe this too is just a red herring.

An intriguing observation is that there always seems to be a well-defined threshold at around N/3 with at most one card that wins as a first move for the first player, and all cards above the threshold losing because the second player picks them up. It seems clear that if the second player wins by picking up a certain card in the first move, they would also have won by picking up any higher card. It also seems very reasonable that the first player should never start with a card on the upper half, say, because the second player immediately gets an advantage by picking it up. But I don't see a reason why the first player could never win by playing a small card like the 1.

Some more questions: Are there infinitely many N for which Shed is a first-player win, and infinitely many for which it's a second-player win? Is there some N for which it's a draw? Is there a simple pattern to this, say periodic, after all? And is there some constant c between 0 and 1 such that the "threshold" for picking up in the first move is asymptotically at c times N? Is c = 1/3?

Multi-suit Shed, strict and relaxed

Can we play Shed with an ordinary deck with several different suits? Let's say that, just as in traditional card games, a card only counts as higher than another if it's higher in rank and in the same suit. So if you play first to the pile you can play any card you like, but all the following cards in that pile will have to be in the same suit as the first one.

Again the game seems to be complicated, and sometimes a first-player win, sometimes a second-player win, sometimes drawn. It seems to become drawish if there are many suits and few cards in each suit, and otherwise most often a first-player win. For instance 4-3-3-2 seems easy to draw, but 5-3-2-2 is a first-player win in 99 moves.

The game also seems to make sense with a joker or excuse which is lower than all other cards and may only be played first in a pile. For instance, 5-3-3 with a low joker is a tricky second-player win, but would be drawn without the joker. If on the other hand the joker is regarded as higher than the other cards (a wild-card that can be played anytime forcing the opponent to pick up the pile), the game seems to become completely drawish.

We can even play under the rule that one of the suits, say spades, is lower than all the others, so that a spade can be beaten by a higher spade or by a card of any other suit, while cards other than spades can only be beaten by higher cards in their own suit. For instance, the game 5-3-1-(3), where the ordinary suits have 5, 3, and 1 card respectively and the spade suit has 3 cards, is drawn under optimal play. You can draw by starting with the spade 1 or spade 3 (or with the smallest card in the suit of length 5), but if you start with the spade 2, your opponent has a forced win in 106 moves.

In principle, Shed can be played with an arbitrary partial order imposed on the cards. The rule that each card must beat the previous one might be called the Strict rule. But there is also another natural generalization of Shed to partially ordered decks: The rule can just as well be that a card can be played to the pile as long as it's not lower than any card already in the pile. This might be called the Relaxed rule. In Relaxed Shed, if you play the 9 of hearts, I can still play the 4 of clubs. You can then play any spade or diamond, and any heart higher than the 9 or club higher than the 4. Under the relaxed rule it seems that we must have a high wild-card or a trump-suit for the game to make sense, because unless there's a card that's higher than all others, the game is normally an easy draw.

Relaxed Shed might be played with cards from a tarot deck. In a tarot deck there is a designated trump suit of cards numbered from 1 to 21 (that's what I used in the second picture above) as well as the ordinary four suits.

Yet another example: In Relaxed Shed with suits of 4-3-2 and a trump suit of 3, first hand has a forced win in 66 moves. The unique winning first move is to play the 3 of the suit of 4, and in the line I looked at, the first 13 moves, and 45 out of the first 50, were unique in the sense that any other move would have lost. Amazing stuff.

Why should we care?

I guess for now, Shed will have to be just another artificial card game that you probably won't play unless you just want to see how weirdly difficult a game can be with just ten or so cards face up. But I might explain in some later blog post that it has certain features that make it interesting to experiment with. In particular it seems to be a complex and razor-sharp game that has "board-feel" and scales easily. There are difficult positions, but also easy ones. The level of difficulty might be varied in a rather controlled way. Maybe someone can see where this is going.





Thursday, January 3, 2019

Sluta gnälla om "oseriösa" bud!

I serien Bloggposter jag skrev och sedan glömde att publicera har vi nu kommit till den om "oseriösa" bud på bostadsmarknaden. Artikeln jag länkar till är från augusti 2017, och det var nog ungefär då jag skrev den. Sedan dess har jag för övrigt blivit bostadsrättsinnehavare, efter en väldigt lugn och enkel budgivning som inte alls bekräftar den bild som målades upp i debatten.

Real-estate agents want "tougher rules". They have discovered that it's annoying when the other party places bids that aren't binding and then wants to wait and see, just when you yourself are ready to close the deal.

Which is exactly what buyers have to put up with all the time.

Therefore, according to this article, they propose that bids should be binding for 24-48 hours.

Or the agents could calm down and stop phoning around, stressing people who are about to make the biggest financial decision of their lives.

Because what are they supposed to do if someone says "I am prepared to buy the apartment for X kronor, but I'm not going to place a binding bid"? The agent can't reasonably withhold this from the seller, so in practice it isn't possible to demand that all bids be binding.

"Binding bid before contract" is one of those oxymorons. A self-contradiction. It's the buyer's money, so the bid can't become binding short of signing a contract saying that it is binding.

A system of binding bids is therefore unlikely to "reduce anxiety in the market"; it will just lead to a jungle of contracts left and right, binding over various periods of time. At that point the brokerage firm might as well buy the apartment and then sell it, the way car dealers do.

If you want to reduce the anxiety, it would be more reasonable for the seller's side's bid to be binding. That is, once a price has been stated, accepting it should settle the deal. After all, the seller knows what home they're selling, and has access to it in order to get a valuation in advance and calmly think through how much to try to get.

This business of the other party wanting to wait and see is only a problem for those who themselves want to wait and see. When someone wins a bidding round and then doesn't want to sign a contract right away, but first wait and see whether they win some other bidding round, that too happens because the seller didn't sell when the bid came in, but wanted to wait and see whether a higher bid would arrive.

But that's the whole point of a bidding war. If you join the game, you have to play the game…

We don't need legislation, codes of ethics, or even a proclamation that we shall jointly adopt some new norm. It's enough to note that anyone who complains about "non-serious" bidders is a whiner. Decide how much you want for your home and sell when you get it, and you have no problem.

But that's not what the agents want. They want a bidding war, preferably starting at a low level, so that there are many bids and it looks as if the brokering itself is what drives up the price. Nor does the agent have any direct interest in sellers knowing in advance how much they can get for their home (and wondering whether they gain anything at all from hiring an agent).

Now, I haven't been to all that many viewings, but one incident I remember, and which is hardly unique, was when a potential buyer asked whether it was OK to place a bid right there at the viewing.
The agent: Yes, that's fine, the asking price is 1.8 million, do you want to bid that?
Potential buyer: No, we want to bid two million.

The agent would rather have seen a low bid, to lure many people into the bidding, while the bidder was so sure the price would end up at least two million anyway that she didn't bother placing a lower bid. Perhaps she also wanted to signal to the others that there was no point in getting involved. Reasonably, the seller's side should welcome a higher bid, but it becomes awkward for the agent, because it reveals that at least two million could have been had even without his help.

Let's stop whining about "non-serious" bidders. The chaos, the stress, the uncertainty, and this business of people bidding on several homes at once, all come from buyers not knowing what the seller wants for their home, and therefore not being able to judge the prospects of their own bid. Of course you then have to bid on several.

If the seller were clearer about what price applies (right now; you can of course change your mind), the buyer could also act more steadily. If you know fairly well what each home is going to cost, you don't need to place bids left and right.

A few more unsorted thoughts:

It's not certain that you as a seller get more through a bidding war than with a fixed price. The bidding essentially only goes as far as the second most eager bidder is willing to follow.

According to the article, several brokerage firms claim that "non-serious bidders" contribute to rising prices. In that case, shouldn't the agents, who after all represent the seller's side, welcome them? Or?

The seller could in principle have asked from the start for what the "non-serious" bidders eventually bid. Have these "non-serious" bids pushed prices up from the level where they would have landed with so-called "fair" bidding, up to parity with where they would have been if the seller had set a proper price from the beginning?

In that case, doesn't that suggest the agents aren't doing a particularly good job? I mean, if something can be pushed up from level A to level B, that indicates that level B is higher than level A.

Could it be that the agents are irritated by the "non-serious" bids because those bids reveal that many sellers could have gotten at least as much for their home if they had just found out roughly what they can get, and simply asked for it?

The agent is not one hundred percent on the seller's side. The agent is a third party, with interests of their own.

Binding bids are in practice a kind of right of withdrawal for the seller. A comparison: if I buy a d**n vacuum cleaner, there are supposed to be guarantees here, insurance there, and I should of course be able to return it and get my money back. Me, the buyer, that is. But now the agents want to be able to call people up like the worst telemarketers and talk them into binding million-kronor deals, and THEY are supposed to get the right of withdrawal!?

Perhaps the fundamental problem is that the agents meddle too much. Of course the process has to become stressful if handling every detail on their working hours is supposed to pay off for them.

The more bidders there are supposed to be on each apartment, the more apartments each bidder will bid on. On average, that is.

End of unsorted thoughts.