On 6/5/2024 10:55 AM, Mike Terry wrote:
> What happens now is that there is one single trace array in global
> memory, and all simulations append simulated instructions to that one
> array and can read array entries written by other simulation levels.
> That fundamentally breaks the concept of a simulation exactly matching
> the behaviour of the outer (unsimulated) computation. More details
> below...
>
> On 05/06/2024 10:38, Ben Bacarisse wrote:
>> John Smith <news2@immibis.com> writes:
>>> Then increase the stack space until it doesn't run out. Turing
>>> machines can't run out of stack space unless you programmed them
>>> wrong.
>>
>> A Turing machine can't run out of stack space because there is no
>> stack. That's like saying a polynomial has limited precision if you
>> evaluate it badly. It's the evaluation that's wrong, not the
>> polynomial. I know what you mean, but having talked to maths cranks
>> on Usenet for years, one thing I would caution against is being
>> slowly sucked into the cranks' bad use of technical terms.
>
> Thank you very much for being the voice of correct reasoning here.
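Ben's point is that the Turing machine model simply has no stack: the entire machine state is a tape (unbounded by definition), a head position, and a finite control state. Any "stack" an implementation uses lives on the tape. A minimal illustrative sketch of the model (not code from this thread):

```python
# Minimal Turing machine step loop: the whole machine state is a tape
# (unbounded, here a sparse dict), a head position, and a control state.
# There is no call stack anywhere in the model itself.
def run(delta, state, tape, head=0, halt_states=frozenset({"halt"})):
    tape = dict(enumerate(tape))          # sparse, effectively unbounded tape
    while state not in halt_states:
        symbol = tape.get(head, "_")      # "_" is the blank symbol
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A toy machine that flips bits until it reaches a blank, then halts.
delta = {
    ("flip", "0"): ("flip", "1", "R"),
    ("flip", "1"): ("flip", "0", "R"),
    ("flip", "_"): ("halt", "_", "R"),
}
print(run(delta, "flip", "0110"))  # -> 1001_
```

Memory exhaustion can only be a property of a concrete implementation (such as x86utm's fixed 32-bit address space), never of the abstract machine.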
>
> Wandering slightly: also, PO's H/HH/etc. (running under x86utm)
> requires minimal stack space to run - probably just a few KB would
> suffice, /regardless of recursion depth/. Given that PO allocates 64KB
> for the stack, this is not going to be a problem.
>
> The reason recursion depth is not a factor is that H /simulates/ D
> rather than calling it. The simulation does not consume H's stack
> space, and neither do nested simulations - they all have their own
> separately allocated stacks.
>
> PO's design uses a single 32-bit address space which must hold ALL
> levels of nested simulation, so obviously something has to fail as
> nesting levels grow. That would be an "out of memory" failure when
> trying to acquire resources to create a new simulation level, i.e. a
> /heap/ error rather than "out of stack".
>
> In practice his system would become unusable long before then due to
> the CPU cost of simulating instructions at each recursion level - that
> grows exponentially, with a factor of (something like) 200 between
> levels. So at a recursive simulation depth of just 10, a single
> instruction would take something like 100,000,000,000,000,000,000,000
> outer-level instructions to simulate, which is just impractical.
>
> Mike.
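Mike's cost estimate can be reproduced with a short sketch. Assuming, as he does, a rough factor of about 200 simulated instructions per level of nesting (an illustrative figure, not a measured constant), the outer-level cost of one instruction at depth d is 200^d:

```python
# Cost model for nested simulation, assuming (per Mike's rough figure)
# that each nesting level multiplies the instruction count by ~200.
def outer_instructions(depth, factor=200):
    """Outer-level instructions needed to simulate a single
    instruction at the given simulation nesting depth."""
    return factor ** depth

print(outer_instructions(1))   # 200
print(outer_instructions(10))  # 102400000000000000000000, i.e. ~10**23
```

At depth 10 this is about 10^23 outer-level instructions, matching the figure quoted above.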
>> I just figured out how to handle your objection to my HH code.
>> My idea was to have the executed HH pass a portion of what is
>> essentially its own Turing Machine tape down to the simulated
>> instances of HH. It does do this now.
>> The key objection that you seemed to have is that it can't pass
>> any information to its simulated instances that they can use in
>> their own halt status decision.
>
> Partly right, but woefully incomplete. If you added "...or to modify
> its logical behaviour in any way" that would be a good summing up.
>
>> None of the simulated instances ever did this,
>
> The inner simulations examined a succession of trace entries written
> by other simulation levels. That is wrong. I think you've convinced
> yourself they didn't do anything wrong, because you've decided to
> focus only on "affect their own halt status decision", but that's not
> the requirement.
>
>> yet I can make this more clear. As soon as they are initialized they
>> can store their own first location of this tape and never look at any
>> location before their own first location. In this case they would
>> never get a chance to look at any data from the outer simulations
>> that they could use to change their own behavior.
>> I will implement this in code sometime later today and publish
>> this code to my repository.
>
> I think I understand what you're thinking, and that's wrong thinking.
> Each simulation level must be exactly the same as the outer one - not
> just in terms of "their halt status decision" but in terms of code
> paths and data values accessed [* qualified as explained above due to
> data relocation issues in your environment].
>
>> The only issue left, which seems not to matter, is that each
>> simulated HH needs to see if it must initialize its own tape. Since
>> this has no effect on its halt status decision I don't think it
>> makes any difference.
>> I will double-check everything to make sure there is no data passed
>> from the outer simulations to the inner simulations that can possibly
>> be used for any halt status decision by these inner simulated
>> instances of HH.
>
> ...OR affect the code path, blah blah. Your focus on /just/ the halt
> status decision is not enough. And anyhow you /know/ that the /only/
> data passed to inner simulations must be the code to be simulated and
> the input data arguments (simulating DD(DD), that is the code of DD
> and the DD argument). NOTHING ELSE, regardless of what it affects...
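The scheme described above (each simulated HH records its own first trace location and never reads earlier entries) can be sketched roughly as follows. All names here are hypothetical illustrations, not olcott's actual code. The sketch also makes Mike's objection visible: every level's writes land in the one shared array, so outer levels can still read entries written by inner levels, which a set of genuinely independent simulations could not do.

```python
# Hypothetical sketch of the proposed scheme: one global trace array
# shared by every simulation level, where each level notes the index
# at which it started and only ever reads entries from that index on.
trace = []  # the single shared "tape" of simulated instructions

class SimLevel:
    def __init__(self):
        # Record the first location belonging to this level; the
        # proposal is to never read any entry before this offset.
        self.start = len(trace)

    def record(self, instruction):
        trace.append(instruction)

    def own_entries(self):
        # Entries at or after this level's start offset.
        return trace[self.start:]

outer = SimLevel()
outer.record("push ebp")
inner = SimLevel()        # a nested simulation starts later in the array
inner.record("push ebp")

print(inner.own_entries())  # ['push ebp']
print(outer.own_entries())  # ['push ebp', 'push ebp'] <- includes inner's writes
```

Even with the start-offset discipline, `outer.own_entries()` contains the inner level's entries, so the levels remain coupled through the shared array rather than each level exactly reproducing the outer computation in isolation.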