Subject : Re: Programming exercise - choose_k_of_n_then_select()
From : c186282 (at) *nospam* nnada.net (c186282)
Groups : comp.programming
Date : 14. Jun 2025, 04:18:46
Message-ID : <596dnTFp__C1d9H1nZ2dnZfqnPidnZ2d@giganews.com>
References : 1 2
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0
On 1/30/23 9:01 AM, Paul N wrote:
On Monday, January 30, 2023 at 4:00:45 AM UTC, Tim Rentsch wrote:
I offer below a programming exercise, more in the spirit of fun than
being really challenging. The effort needed isn't trivial but it
shouldn't be huge either. The exercise was inspired by some recent
discussion in comp.lang.{c,c++}.
>
Exercise: write code to give a definition for the interface below
(the interface is written for C, but feel free to write a solution,
along with the corresponding interface, in a different language):
>
typedef unsigned long UL;
typedef UL RNG( void );
>
UL choose_k_of_n_then_select(
RNG rng, UL rng_max, UL n, UL k, UL j
);
>
The parameters may be assumed to obey the following constraints
(i.e., the constraints may be asserted at the start of the function
definition)
>
rng != 0
j <= k
k <= n
n < rng_max
>
Problem: rng is a random number generator function that returns
values uniformly distributed between 0 and rng_max, inclusive (so
rng_max+1 possible values). Choose k+1 distinct random values (using
the supplied function rng) in the range between 0 and n, inclusive
(so n+1 possible values). Of these k+1 distinct values, return the
j'th value in ascending order (so for j=0 return the least value,
for j=k return the largest value, etc).
>
It's important that the random selection be unbiased, with all of
the (n+1) choose (k+1) possible sets being equally likely (of
course under the assumption that rng is a "good" random number
generator). However it is also important that the code work
even if rng is "poor", as for example it first returns all the
even numbers and then returns all the odd numbers. It is safe
to assume that rng is not pathologically bad: it might be
really awful, but it will not be malicious.
>
For purposes of testing, if k is set equal to n, the result of
any j <= k should be equal to j, so
>
choose_k_of_n_then_select( rng, -1, 100, 100, 0 ) == 0
choose_k_of_n_then_select( rng, -1, 100, 100, 1 ) == 1
...
choose_k_of_n_then_select( rng, -1, 100, 100, 99 ) == 99
choose_k_of_n_then_select( rng, -1, 100, 100, 100 ) == 100
>
(with 'rng' being any suitable rng, even a poor one).
>
Note that rng_max might be close to n, which means it's important to
take that possibility into account in producing random numbers, so
that there is no bias.
>
Good solutions should not impose any artificial limitations on the
values of j, k, n, and rng_max.
>
I have written code to do this but will not be posting it for at
least a week. Have fun!
Just to check, we're free to "use" rng any way we want to, as long as the results are unbiased? For example, a naïve approach might be to try again if we get a value bigger than n, but if rng_max is between 2n+2 and 3n then we could have 0 or n+1 mean 0, 1 or n+2 mean 1, etc, and only have to reject values bigger than 2n+2. Also, do we have to select numbers in the range 0 to n and reject any duplicates, or can we rig things so we are selecting randomly only from those numbers not yet selected?
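Just to sketch where that line of thought leads (this is NOT
Tim's solution, and names like draw() below are simply mine):
Paul's folding idea generalizes to rejecting only the rng()
values above the largest multiple of n+1 that fits, then
reducing modulo n+1. Bolting that onto classic selection
sampling (scan candidates 0..n in order, keep each with
probability needed/remaining) gives something like this.
Caveat: it assumes rng is reasonably uniform, so it does not
really address the "poor rng" requirement, which is the
interesting part of the exercise:

#include <assert.h>

typedef unsigned long UL;
typedef UL RNG( void );

/* Uniform value in 0..m, with m < rng_max.  Reject rng()
   results above the largest multiple of m+1 that fits; note
   that ((rng_max % span) + 1) % span equals (rng_max+1) % span
   without overflowing when rng_max is ULONG_MAX.            */
static UL draw( RNG rng, UL rng_max, UL m )
{
    UL span     = m + 1;
    UL leftover = ( rng_max % span + 1 ) % span;
    UL cutoff   = rng_max - leftover;    /* accept r <= cutoff */
    UL r;
    do r = rng(); while ( r > cutoff );
    return r % span;
}

UL choose_k_of_n_then_select( RNG rng, UL rng_max, UL n, UL k, UL j )
{
    UL needed = k + 1;      /* values still to be picked       */
    UL picked = 0;          /* values picked so far            */
    UL i;

    assert( rng != 0 );  assert( j <= k );
    assert( k <= n );    assert( n < rng_max );

    for ( i = 0;  i <= n;  i++ ){
        UL remaining = n + 1 - i;        /* candidates left    */
        /* keep candidate i with probability needed/remaining  */
        if ( draw( rng, rng_max, remaining - 1 ) < needed ){
            if ( picked == j ) return i; /* j'th in ascending  */
            picked++;
            needed--;
        }
    }
    return n; /* never reached: exactly k+1 candidates picked  */
}

Since candidates are visited in ascending order, the j'th one
kept is the answer, and when k == n every candidate is kept,
so the result is simply j, matching Tim's test cases.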
Note that 'randomness' is impressively difficult to
achieve using CPUs. And yes, in theory, you could get
a lot of the SAME number. CPU approaches tend to use
a fixed rule to compute the NEXT 'random' number from
the previous one - a rule that CAN be exploited by
evil people with tera/peta-FLOPS, and now 'AI', to
throw at the problem and find the pattern in the
supposed 'randomness'.
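The classic linear congruential generator shows just how
simple that 'rule' can be (the constants are the well-known
Numerical Recipes pair, shown purely as an illustration):

/* the entire state is one word; the next "random" number is
   just a fixed multiply-and-add on the previous one, so anyone
   who sees one output can predict every later one             */
static unsigned long lcg_state = 12345;

unsigned long lcg_next( void )
{
    lcg_state = lcg_state * 1664525UL + 1013904223UL;
    return lcg_state;
}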
Using some kind of quantum/quantum-like 'noise' as an
element in the equation is usually the best approach.
Radio "static", for example, or combined thermal
readings or delay stats from several system chips.
There are also dedicated 'randomness generator' chips.
None of these methods is 'perfect', but often 'better'
is good enough.
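On an ordinary Linux box the easy stand-in for that kind of
noise is the kernel's entropy pool, which already mixes in
hardware event timings. A minimal sketch (plain stdio, nothing
exotic) of pulling seed material from it:

#include <stdio.h>

/* pull seed bytes from the kernel's entropy pool; returns 0 if
   the pool can't be read, so callers should check for that    */
unsigned long seed_from_urandom( void )
{
    unsigned long seed = 0;
    FILE *f = fopen( "/dev/urandom", "rb" );
    if ( f ){
        if ( fread( &seed, sizeof seed, 1, f ) != 1 ) seed = 0;
        fclose( f );
    }
    return seed;
}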
Note the use of 'hashing' for things like database
files. This was especially relevant in the days of
slow CPUs and mechanical storage media. The 'random'
hash code would let you zero in on a record quickly,
as opposed to searching through one huge - maybe
too huge - file. It also made search times more
predictable. More than one record might get the same
hash code, but it got you CLOSE. Those hashes weren't
really 'random', but they were 'good enough'.
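A toy version of the idea (bucket count and hash picked out of
thin air here, just to show the shape of it): hash the key,
and only the one small bucket it lands in has to be searched,
instead of the whole file.

#define NBUCKETS 1024UL  /* buckets the "file" is split into   */

/* djb2-style string hash: not random at all, but it scatters
   keys evenly enough to keep the buckets small                */
static unsigned long hash_key( const char *key )
{
    unsigned long h = 5381;
    while ( *key )
        h = h * 33 + (unsigned char)*key++;
    return h;
}

/* which bucket a record with this key lives in; a lookup only
   has to scan that one bucket, not the whole file             */
unsigned long bucket_of( const char *key )
{
    return hash_key( key ) % NBUCKETS;
}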