On 6/5/2024 3:28 PM, Mike Terry wrote:
tldr;
On 05/06/2024 17:49, olcott wrote:
On 6/5/2024 10:55 AM, Mike Terry wrote:
On 05/06/2024 10:38, Ben Bacarisse wrote:
John Smith <news2@immibis.com> writes:
No it's not legit.
YES
Then increase the stack space until it doesn't run out. Turing machines
can't run out of stack space unless you programmed them wrong.
A Turing machine can't run out of stack space because there is no stack.
That's like saying a polynomial has limited precision if you evaluate it
badly. It's the evaluation that's wrong, not the polynomial. I know
what you mean, but having talked to maths cranks on Usenet for years, one
thing I would caution against is being slowly sucked into the cranks'
bad use of technical terms.
>
Wandering slightly: also, PO's H/HH/etc. (running under x86utm) requires minimal stack space to run - probably just a few KB would suffice, /regardless of recursion depth/. Given that PO allocates 64KB for the stack, this is not going to be a problem.
>
The reason recursion depth is not a factor is that H /simulates/ D rather than calling it. The simulation does not consume H's stack space, and neither do nested simulations - they all have their own separately allocated stacks.
>
PO's design uses a single 32-bit address space which must hold ALL levels of nested recursion, so obviously something has to fail as nesting levels grow. That would be an "out of memory" failure when trying to acquire resource to create a new simulation level. I.e. a /heap/ error rather than "out of stack".
>
In practice his system would become unusable long before then due to CPU requirements in simulating instructions for each recursion level - that grows exponentially with a factor of (something like) 200 between each level. So at a recursive simulation depth of just 10, a single instruction would take something like 100,000,000,000,000,000,000,000 outer level instructions to simulate, which is just impractical.
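A quick back-of-envelope check of that growth rate (the factor of 200 per level is Mike's rough figure; the function name is illustrative, not part of x86utm):

```c
/* Rough cost model: each nesting level multiplies the number of
   outer-level instructions needed per simulated instruction by
   roughly 200 (Mike's ballpark figure).  Returns per_level^depth
   as a double, since 200^10 overflows 64-bit integers. */
double sim_cost(double per_level, unsigned depth)
{
    double cost = 1.0;
    for (unsigned i = 0; i < depth; i++)
        cost *= per_level;
    return cost;
}
```

sim_cost(200.0, 10) is about 1.02e23, which matches Mike's "100,000,000,000,000,000,000,000 outer level instructions" figure.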
>
>
Mike.
>
Thank you very much for being the voice of correct reasoning here.
I just figured out how to handle your objection to my HH code.
>
My idea was to have the executed HH pass a portion of what is
essentially its own Turing Machine tape down to the simulated
instances of HH. It does do this now.
What happens now is that there is one single trace array in global memory, and all simulations append simulated instructions to that one array and can read array entries written by other simulation levels.

NO they have not been doing this and I will encode it so that this is
impossible. I knew about this issue about two years before you recently
raised it. I only found out about this issue from comp.theory
respondents.
That fundamentally breaks the concept of a simulation exactly matching the behaviour of the outer (unsimulated) computation. More details below...

I have known this for at least two years.
>
[but the problem is I don't believe you really understand what the requirements of simulation are, because if you did you simply wouldn't have written code like that in the first place...]

I was emulating a UTM passing a portion of its own tape down to its simulated instances of itself. That aspect always seemed legit.
>
The only legit way of doing this (without breaking simulation-compatibility behaviour rules somewhere below from an earlier post) is for each simulation to have its own trace table. Obviously all levels of simulated HH must initialise and maintain that table themselves, and code paths etc. are exactly the same for outer execution and all inner simulations. And calling it "a portion of what is essentially its own TM tape" is just obfuscation to try to hide the obvious breaking of rules.

Not if the simulated instances never access the data from the
outer simulations. I will encode this to make it impossible.
I will encode u32 start_tape_address to the position in
execution_trace where the simulated HH begins so that it
will not ever look further back.

That's exactly what I imagined. But OUTER traces will write their trace data into the vector BEYOND your start_tape_address. In fact, 99% (or more) of entries beyond your start_tape_address will be for outer simulation instructions.
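Mike's objection can be sketched in a few lines of C (the table layout and level tags are illustrative, not PO's actual code): even if an inner simulation records its own start index, entries appended after that index can come from any level that is still running.

```c
/* Illustrative sketch, not the actual x86utm code: one shared trace
   table that every simulation level appends to.  Each entry is tagged
   with the level that wrote it so "foreign" entries can be counted. */
enum { TABLE_MAX = 64 };

static int shared_table[TABLE_MAX];  /* entry = level that wrote it */
static int shared_count = 0;

void trace_write(int level)
{
    if (shared_count < TABLE_MAX)
        shared_table[shared_count++] = level;
}

/* Entries at or after `start` written by some OTHER level: under a
   start_tape_address scheme these are exactly what the inner level
   was not supposed to see, yet they are still visible to it. */
int foreign_entries(int start, int level)
{
    int n = 0;
    for (int i = start; i < shared_count; i++)
        if (shared_table[i] != level)
            n++;
    return n;
}
```

Recording a start index hides only what was written *before* the inner level began; it does nothing about what the outer level writes afterwards.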
typedef struct Decoded
{
u32 Address;
u32 ESP; // Current value of ESP
u32 TOS; // Current value of Top of Stack
u32 NumBytes;
u32 Simplified_Opcode;
u32 Decode_Target;
} Decoded_Line_Of_Code;
execution_trace is essentially a std::vector of Decoded_Line_Of_Code
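For readers following along in C, "essentially a std::vector of Decoded_Line_Of_Code" amounts to a growable array like the following sketch (the doubling growth policy and the helper name are assumptions, not PO's code):

```c
#include <stdlib.h>

typedef unsigned int u32;

typedef struct Decoded
{
    u32 Address;
    u32 ESP;               /* current value of ESP          */
    u32 TOS;               /* current value of Top of Stack */
    u32 NumBytes;
    u32 Simplified_Opcode;
    u32 Decode_Target;
} Decoded_Line_Of_Code;

/* A minimal growable array of trace entries; capacity doubling is an
   assumption, the real x86utm table may grow differently. */
typedef struct
{
    Decoded_Line_Of_Code *entries;
    size_t count;
    size_t capacity;
} TraceTable;

void trace_append(TraceTable *t, Decoded_Line_Of_Code e)
{
    if (t->count == t->capacity)
    {
        size_t cap = t->capacity ? t->capacity * 2 : 16;
        Decoded_Line_Of_Code *p = realloc(t->entries, cap * sizeof *p);
        if (p == NULL)
            return;  /* out of memory: drop the entry in this sketch */
        t->entries = p;
        t->capacity = cap;
    }
    t->entries[t->count++] = e;
}
```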
By now you must have realised the problem is the "master UTM tape", which is not part of any computation model you're using. Even TMs, which do have a (personal) tape, cannot have a shared read/write portion of tape that is global to all simulations.

The key objection that you seemed to have is that it can't pass
any information to its simulated instances that they can use in
their own halt status decision.

Partly right but woefully incomplete. If you added "...or to modify its logical behaviour in any way" that would be a good summing up.

Either the simulated instances must somehow know not to
allocate memory for what is essentially a portion of the
master UTM tape that is passed down to them, or the master
UTM must somehow find the UTM tape of these simulated instances.
They are definitely not the same. Part of the issue here is that you routinely filter out all the logging entries from H/HH, so perhaps it's not obvious to you that they are different. The "remain stuck" is OT.

Your specific focus on just "their own halt status decision" is only one part of the picture - my guess would be you know "the rules" but for whatever reason the only solution you can come up with breaks the rules, so you try to convince yourself that that's ok by changing the rules.

I am having you analyze the details. I was basically gobsmacked
that a TM cannot do any damn thing that any C program can do.
It has been at least three years since then.

They are exactly the same except that the inner ones remain stuck
More specifically, the behaviour of a simulated HH must exactly[*] match the behaviour of the outer (unsimulated) HH.
>
Behaviour means all of the following:
>
- the instruction path of simulated HH exactly matches the instruction path
of the outer HH. [Currently HH has different code paths for simulated/
outer execution! It is not enough that you /believe/ this will not affect "the result".]
>
in recursive simulation until the outer one stops simulating its DD.
*This unequivocally proves the behavior of DD correctly simulated by HH*

It doesn't, but that's off topic right now.
https://liarparadox.org/DD_correctly_simulated_by_HH_is_Proven.pdf
I don't believe that can be legitimately achieved with your approach.

- registers and all data and data locations used by the simulation must match the registers
  and data and data addresses used by the outer HH. [*]
>
- no backdoor channels where a nested simulation feeds data to a nesting simulation or vice versa. Obviously an outer simulation can "scrape" data from its directly simulated computation.
>
>
[*] exactly match?? A big headache is that you've chosen a model where all the simulations share one 32-bit address space. So simulation working data will be at different locations to the outer working data. IF your code acts "responsibly" this can be catered for. E.g. simulated data on stacks will be at different addresses for each simulation - but still the location of the data within the stack and their values should /exactly/ [*] match across all simulations. Similarly a call to Allocate() will return different results in each simulation, but the location of all data used by the simulation relative to that allocation address should match, and the value of the data should match [* allowing for pointer mismatches etc.]. Near the end of this post I give some rules for code that acts "responsibly" - those rules try to ensure these address relocation problems don't affect arguments you make.
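The "[*] allowing for pointer mismatches" qualification can be made concrete with a small sketch (types and names here are illustrative): two simulation levels "exactly match" when offsets from each level's own allocation base, and the stored values, agree, even though the absolute addresses differ.

```c
#include <stdint.h>

/* One datum as seen inside one simulation level: the base its
   Allocate() returned, the absolute address of the datum, and the
   value stored there.  Illustrative types, not the x86utm ones. */
typedef struct
{
    uintptr_t base;   /* this level's allocation base  */
    uintptr_t addr;   /* absolute address of the datum */
    uint32_t  value;  /* the stored value itself       */
} Datum;

/* "Exactly match" across relocated address spaces: same offset from
   the level's own base, and same value. */
int matches_up_to_relocation(Datum a, Datum b)
{
    return (a.addr - a.base) == (b.addr - b.base)
        && a.value == b.value;
}
```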
>>>
None of the simulated instances ever did this,

The inner simulations examined a succession of trace entries written by other simulation levels.

I don't think that they ever did this and they certainly do not need
to do this so I will encode HH so this is impossible.
You can prevent inner simulations from looking at trace entries that were written before they started. That seems to be what you're imagining. But the outer simulations are still running!!!! They will continue generating trace entries and writing to the table which will be picked up by the inner simulations.

That is wrong. I think you've convinced yourself they didn't do anything wrong, because you've decided to focus only on "affect their own halt status decision", but that's not the requirement.

I knew that this was wrong for three years.
>
I will make the minor change to my code to make it
more obvious that they are not doing this.
You say that simulated HH looks back at outer-simulations

yet I can make this more clear. As soon as they are initialized they
can store their own first location of this tape and never look at any
location before their own first location. In this case they
would never get a chance to look at any data from the outer
simulations that they can use to change their own behavior.
I think I understand what you're thinking, and
a) you are just trying to replace one Wrong with another Wrong that you
hope will be given a tick by someone, rather than looking at what
a simulation needs to be, and implementing it logically
b) it won't work
>
I say no they don't. I will encode it so that it is more
obvious that they don't look back.
UTMs just DO NOT DO THAT. Where do you get the idea someone has "allowed" them to do that?

As for (a): What I think you're proposing is to still have the global trace table, initialised by the outer HH, and still have different code paths followed by inner and outer simulations. That's all No Good.

The outer UTM is allowed to pass a portion of its own tape
to its simulated instances. I may not have done this in the
cleanest way, yet *this <is> my architectural design intention*
UTMs just DO NOT DO THAT.

As for (b): I think you intend each inner simulation to be told where it is starting in the global trace, so that when it inspects it, it can start from its own starting point. The problem is that all simulations append to the table beyond that point, so a given simulation will /still/ be seeing entries directly added by other simulation levels.

That is OK as long as they are lower inner levels and not higher
outer levels, according to what I figured out on my own and you
confirmed:
Message-ID: <rLmcnQQ3-N_tvH_4nZ2dnZfqnPGdnZ2d@brightview.co.uk>
On 3/1/2024 12:41 PM, Mike Terry wrote:
>
> Obviously a simulator has access to the internal state
> (tape contents etc.) of the simulated machine. No problem there.
In effect you're still trying to keep the shared mutable static data idea that breaks all 3 of the "proper simulation" requirements I laid out above.

The outer UTM is allowed to pass a portion of its own tape
to its simulated instances. I may not have done this in the
cleanest way, yet *this <is> my architectural design intention*
Rather than HH having to know its own machine address, it is more the case that HH has to recognise when its simulated computation is itself executing a DebugStep to simulate something. Since in your x86utm operations like DebugStep are primitive ops (provided atomically by x86utm rather than the "user code" in H/HH/..) it is easy for HH to see that it is simulating a nested DebugStep() operation. Also it can see what that nested DebugStep is debugstepping and so on.

Why would you want to do that rather than fix the obvious design flaw itself? I even explained some time ago how to do that properly, but you're presenting yourself as some kind of master "Software Engineer" and yet you can't work that out for yourself? I'd guess it should be perhaps a day's programming effort...

I must not have understood what you were saying.
>
Anyway, the correct approach should obviously be for each simulation to maintain its own trace table which is only written to and read by itself. [Obviously, outer simulations could work out where that virtual table was implemented and /scrape/ data out of it if they wanted to.]

I knew that was one option yet could not figure out any way that HH
can reliably scrape the data from its inner simulated HH instances
without knowing that these inner HH instances are a copy of itself.
If HH merely knows its own machine address then it can do this. If HH
knows its own machine address then there is no need for data scraping;
it rejects the next simulation of itself before it begins. The newer
H already does that.
They are not currently exactly the same, as explained above. HH simulated code paths are quite different to outer HH.

But those outer simulations would have their own trace tables with the data relevant for their processing. What's more, the code paths and behaviours of HH in each simulation level would be identical, which is a basic simulation requirement.

Yet they themselves cannot ever stop running unless and until the
outer HH stops simulating them. If every HH stops simulating after
three recursive simulations then the outer one sees this first.

They are exactly the same except that the inner ones remain stuck
I will implement this in code sometime later today and publish
this code to my repository.
>
The only issue left that seems to not matter is that each simulated
HH needs to see if it must initialize its own tape. Since this
has no effect on its halt status decision I don't think it makes
any difference.
That's wrong thinking. Each simulation level must be exactly the same as the outer one. Not just in terms of "their halt status decision" but in terms of code paths, data values accessed [* qualified as explained above due to data relocation issues in your environment].
>
in recursive simulation until the outer one stops simulating its DD.
*This unequivocally proves the behavior of DD correctly simulated by HH*

It simply has to recognise that a recursive DebugStep is occurring and scrape what it needs at that point. Or perhaps it can just use what is returned directly by the outer DebugStep. It depends on exactly what data HH needs in that situation, and what DebugStep returns. [more below...]
https://liarparadox.org/DD_correctly_simulated_by_HH_is_Proven.pdf
Look - if the outer HH has to initialise a variable, the inner HHs have to initialise that same variable, because it is THEIR implementation of that variable, within THEIR virtual address space provided by their simulator. Your problem comes because you insist on trying to SHARE DATA ACROSS SIMULATIONS.

I can do all of this when HH can know its own machine address. H(D,D)
>
Just follow the simple and rather obvious rules like:
>
- no mutable static data. All mutable data should be anchored within the
  simulation's stack.
- when pointers are involved, the pointer values should only be used
within code to be dereferenced and access the "actual" data stored
at that location. [and apply this rule recursively if required!]
>
is already implemented this way.
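Mike's rules can be sketched as follows (names and the fixed table size are illustrative): the trace table lives as a local in the simulator's own frame, so every nested level automatically gets a private table in its own address space, with no mutable static data shared between levels.

```c
#include <stddef.h>

enum { LOCAL_TRACE_MAX = 256 };

/* A trace table anchored in the simulating frame's stack. */
typedef struct
{
    unsigned entries[LOCAL_TRACE_MAX];
    size_t count;
} LocalTrace;

static void record(LocalTrace *t, unsigned address)
{
    if (t->count < LOCAL_TRACE_MAX)
        t->entries[t->count++] = address;
}

/* Each level records one entry in ITS OWN table, then runs an inner
   level.  The inner level's writes cannot land in the outer table,
   so every level's count stays 1 regardless of nesting depth. */
size_t simulate_level(int depth)
{
    LocalTrace trace = { {0}, 0 };   /* no mutable static data */
    record(&trace, 0x1000u + (unsigned)depth);
    if (depth > 0)
        simulate_level(depth - 1);   /* inner level, private table */
    return trace.count;
}
```

This is only a structural sketch of "all mutable data anchored within the simulation's stack", not the x86utm implementation itself.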
To follow your advice HH still needs to know its own machine address
so that it knows that the simulated copies of itself are copies
of itself. If it does not know this then it has no idea where to
look inside the simulated copies to find their execution traces.
I doubt HH needs to find the execution trace table for the simulated computation. What it needs is just what it needs to put in its own trace table, but I don't know what you expect that to be.

static const data should be ok I think, as it's akin to how a TM would implement a "literal value" in its program.

Try and show how HH can find the execution trace data of the
>>>
I will double check everything to make sure there is no data passed
from the outer simulations to the inner simulations that can possibly
be used for any halt status decision by these inner simulated
instances of HH.
..OR affect the code path blah blah. Your focus on /just/ the halt status decision is not enough. And anyhow you /know/ that the /only/ data passed to inner simulations must be the code to be simulated, and the input data arguments (simulating DD(DD) that is the code of DD and the DD argument). NOTHING ELSE, regardless of what it affects...
>
simulated copies of itself without knowing that they are simulated
copies of itself. This seems to require HH to know its own machine
address and H(D,D) already fully implements that to reject the
invocation of the next simulation.