The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
These are my answers and I definitely could be wrong.
Andreas Mertens wrote:
Suppose you start rolling as described. If the first or second roll isn't a 20, do you finish through to the third roll?
1) There is no need to roll the die two more times, because each die roll is independent of the others (the outcome of one roll tells you nothing about the next). However, you would simply increment your count of tries and failures that have occurred.
Andreas Mertens wrote:
What if instead of one 20-sided dice, you have 3 of these dice and roll all three at once? Would that change the odds?
2) Effectively, no. Each 20-sided die roll is independent of all the others, whether you roll them at the same time or individually. However, maybe there are some physics involved that affect the way the dice bounce against each other?? That isn't really counted in probability math, though. Instead it is the pure math of just the theoretical values (no physics involved).
Independent meaning that if you roll a 20-sided die twice, the second roll is in no way affected by the first roll. A dependent event might be something like selecting one of three doors for a prize: once you select a door, it is removed from the choices, so the next choice is only 1 out of 2.
Won't you need an initial estimate for Bayes, and then improve that estimate on each result?
You can already guess the 1:20 for the initial roll, so you don't need to do it... then look to the next roll to get two in a row (19 chances of failure), then a final 50:50 hope for the last '20' with a 1:10 chance.
It needs a few more unknowns to be determined by measurement and Bayes updates. Maybe the die is weighted? If it were, how much would you pay per throw to gain confidence about the weighting?
For my brother the accountant this Christmas, I got a big bag of receipts. I told him it was OK if he didn't like them, I'd kept all the presents…
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
This isn't even really enough for a tip, but if you ever can't implement a real hashtable for some reason, there's a pretty decent fake you can do for strings:
Store the zero-terminated strings contiguously, each prepended by a size_t length (which includes the trailing zero char): [len][string][len][string][len][string], like that.
When you go to check for a match, start at the beginning.
Get the size_t len from the current position.
See if it matches your passed-in string's length (don't forget to account for the trailing zero). If it doesn't, increment the current table pointer by len and continue.
If it does, then in a loop check each character to see if they match. Once you find one that doesn't match, you can increment the current table pointer by the remaining characters (computed from len and your current position).
This way you early-out on non-matches, and you've got a remedial hash on top of that (the length).
All of that without requiring an actual hashtable, and it's not bad in terms of space/time in my tests (currently on JSON fields in a document).
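The steps above can be sketched in C roughly like this (the function names, the fixed buffer size, and the len = 0 end sentinel are my own choices; memcmp stands in for the character-by-character loop, since it early-outs the same way):

```c
#include <stddef.h>
#include <string.h>

/* A "fake hashtable" for strings: entries stored contiguously as
   [size_t len][chars...\0][size_t len][chars...\0]...
   where len counts the trailing '\0'. A len of 0 marks the end of the
   table (a zero-initialized buffer supplies this sentinel for free).
   The length acts as a cheap hash: entries with a different length are
   skipped without comparing a single character. */

/* Append s to the table; returns the new end-of-used offset.
   Capacity checking is omitted for brevity in this sketch. */
static size_t table_add(unsigned char *table, size_t used, const char *s)
{
    size_t len = strlen(s) + 1;              /* include the trailing zero */
    memcpy(table + used, &len, sizeof len);
    memcpy(table + used + sizeof len, s, len);
    return used + sizeof len + len;
}

/* Return a pointer to the stored copy of s, or NULL if it isn't there. */
static const char *table_find(const unsigned char *table, const char *s)
{
    size_t want = strlen(s) + 1;
    const unsigned char *p = table;
    for (;;) {
        size_t len;
        memcpy(&len, p, sizeof len);
        if (len == 0)
            return NULL;                     /* end-of-table sentinel */
        const char *entry = (const char *)(p + sizeof len);
        if (len == want && memcmp(entry, s, want) == 0)
            return entry;                    /* length and bytes both match */
        p += sizeof len + len;               /* skip to the next entry */
    }
}
```

Usage is just a zeroed buffer plus a running offset: `unsigned char table[256] = {0}; size_t used = 0; used = table_add(table, used, "name");` and so on.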
Hmmm... on a big-endian system -- if you limit each string's length to ensure that the high byte of the next string's length is always zero, you can avoid adding "trailing nulls" -- just terminate the array with a zero. Profit.
I once implemented a hash table almost exactly that way. The only difference was I put the hash value first, then the length. I did this because I had a lot of long symbols with very similar names, plus it all sat in shared memory. This ended up being considerably faster than not having the hash values. In my case, I was parsing a script language I wrote, and this was for a table of imported DLL libraries and functions.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
Isn't that how any language implemented variable length strings before C arrived and "invented" NUL termination? (In conflict with long established international standards... If you really need a string terminator character, there is one in the ASCII / ISO646 character set; it is not NUL!)
For something like forty years, the common way of serializing mixed type data has been to take it one step further, adding a (binary) tag prefix, for a TLV (tag - length - value) format. Skipping through the fields of an unordered record is simple and quick.
One disadvantage is that if you run on a 64 bit CPU, and insist on 64 bit tag and length, then you have a 16 byte overhead per value. There are standards for packing both tag and length, not unlike UTF-8 encoding, so that a "small" tag and short length take up only one byte each - at the expense of more complex code when the small tag or length range is exceeded.
And, talking about NUL terminators: one standard way to terminate a list of a variable number of fields (including a variable-length array) is with a sentinel element of tag = 0, length = 0. If you want super-efficient code and use a packed format, one byte for each, you can test both as a single short to see if it is zero, in a single instruction.
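A minimal sketch of that packed TLV walk, including the single-short sentinel test (function name and one-byte tag/length restriction are my own; the escape encoding for larger tags and lengths mentioned above is omitted):

```c
#include <stdint.h>
#include <string.h>

/* Walk a packed TLV buffer: one-byte tag, one-byte length, then the value.
   The record ends with tag = 0, length = 0, which can be tested as a
   single zero 16-bit load -- both bytes zero means zero on any endianness.
   Returns a pointer to the value for the requested tag, or NULL. */
static const uint8_t *tlv_find(const uint8_t *p, uint8_t want,
                               uint8_t *out_len)
{
    for (;;) {
        uint16_t head;
        memcpy(&head, p, sizeof head);   /* tag and length in one read */
        if (head == 0)
            return NULL;                 /* sentinel: tag = 0, len = 0 */
        uint8_t tag = p[0], len = p[1];
        if (tag == want) {
            *out_len = len;
            return p + 2;                /* value starts after tag + len */
        }
        p += 2 + len;                    /* skip this field's value */
    }
}
```

Because each field carries its own length, skipping through an unordered record is one pointer bump per field, as the post says.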
Through the years, there have been a few T-shirts I regret never buying. One had a great pi symbol on the front and the first 2000 digits of pi on the back. If I ever see anything like that again, I will buy it without hesitating even a second.
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle