Bostrom never defines "posthuman" (so claiming there is "no wiggle room" is ridiculous), which makes these conclusions about what posthumans do and do not understand unjustified and moot. But let me continue:
Bostrom assumes all simulated minds will be "of a similar sort." This is yet another assumption that "minds" are well understood.
Bostrom assumes we will never have the means to glean any information about whether or not we are in a simulation. Yet next to the unprecedented assumptions needed to admit posthumanity (computers the size of planets, for one!), the assumption that we could someday determine whether we are in a simulation seems paltry.
Bostrom's indifference argument is unfounded. In particular, say I grant his argument only a 1/N chance of being correct, for a very large N. Because a sound argument implies that simulated minds outnumber unsimulated ones by an astronomical factor, his indifference principle _still_ forces me to assign an overwhelming probability to being in a simulation. Nor can I escape by being 100% certain his argument is wrong: he does not define his terms rigorously enough to admit a proof, and he cannot possibly do so, given our lack of understanding of human minds and of the future of computation. So to be a "rational" being (a term I hate, because it too is undefined), I can't disagree with his argument and conclude the opposite of what he concludes.
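To make that arithmetic concrete, here is a minimal sketch in Python. The observer-count weighting, and the particular values of N and M, are my own illustrative assumptions, not anything taken from Bostrom's paper.

```python
# Sketch: even a tiny credence in Bostrom's argument gets swamped if, conditional
# on the argument being sound, simulated minds vastly outnumber unsimulated ones.
# N and M below are illustrative assumptions, as is the observer-count weighting
# (a self-indication-style step; this is my gloss, not Bostrom's own formulation).

def p_simulated(N: float, M: float) -> float:
    """Posterior probability of being simulated.

    N: I grant the argument a 1/N credence of being sound.
    M: if sound, the number of simulated minds per unsimulated mind.
    """
    credence_sound = 1.0 / N
    weight_sound = credence_sound * M              # observer weight if sound
    weight_unsound = (1.0 - credence_sound) * 1.0  # observer weight otherwise
    return weight_sound / (weight_sound + weight_unsound)

# A one-in-a-billion credence is still swamped by an astronomical M:
print(p_simulated(N=1e9, M=1e30))  # ~= 1.0
# ...but not by a modest M, which is exactly why M's size matters:
print(p_simulated(N=1e9, M=1e6))   # ~= 0.001
```

The upshot: no astronomically large N rescues disbelief once M is allowed to grow without bound, and nothing in the argument bounds M.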