On 6/6/2024 4:55 PM, Mike Terry wrote:
Nope, they create a "virtual" tape for the machine they are emulating (which gets encoded onto the tape, but in a way that the emulated machine can't tell the difference between the emulated tape and a real tape).
On 05/06/2024 23:07, olcott wrote:
UTM's must share a portion of their tape with their simulated Turing
On 6/5/2024 3:28 PM, Mike Terry wrote:
On 05/06/2024 17:49, olcott wrote:
On 6/5/2024 10:55 AM, Mike Terry wrote:
On 05/06/2024 10:38, Ben Bacarisse wrote:
John Smith <news2@immibis.com> writes:
tldr;
1) UTMs /aren't/ allowed to share a portion of their input tape with what they're simulating. (In the sense you mean.)
Machine description. They should also be able to read from this portion
of their own tape, yet not write to this portion.
But the traces you provide are not part of any properly produced tracing of the execution of the input, and HIDING the actual trace just indicates that you think there may be some evidence there you want to hide, if we can actually assume they exist at all.
2) At some point below I realised you were probably thinking that because you suppress trace entries for everything but D/DD/.. code, that means that inner simulations won't see trace entries from outer simulations
Not at all. These traces are simply not displayed; they still exist.
For three years no one has been able to correctly understand 1/2
of one page of execution trace, thus mixing in 251 more pages
CANNOT POSSIBLY HELP.
Then you can prove it so.
[i.e. because they've been suppressed from the table]. That would be correct, but I didn't go back and re-edit earlier comments I'd made.
Based on your false assumption, yes. It is a false assumption though.
3) Your proposal will still be breaking basic simulation rules.
Except that it isn't infinite recursive simulation, as EVERY layer of that simulation, if fully simulated, would also abort the simulation it is doing and return to its caller.
4) I started to answer your question on how HH would do it properly, but I would need more info on what exactly is needed. So I'll wait for your reply on that.
HH needs to see exactly what it does see: the inner execution trace of
>
DD derived by one single recursive simulation. I just checked: HH simply
recognizes this as an instance of infinite recursion. It has no idea
that DD is calling itself.
If so, you don't have a legacy, except one of shame.
Most of all, I'd question whether it's worthwhile for you to expend much of your remaining time on this issue.
It is my ONLY life's legacy and despite the fact that you very
persistently continue to simply ignore key details the notion
of a simulating halt decider IS VALID.
*This is my most concise proof of that*
Which is a meaningless statement if your goal is to say something about halting.
On 6/6/2024 1:35 PM, olcott wrote:
Subject: [DD correctly simulated by HH ---
never stops running without aborting its simulation]
The problem is that your alternate definition of "correct simulation" just fails to provide the needed information to show non-halting behavior.
The whole x86utm thing does not really add to your argument, and certainly does not "prove" the things you think it proves. That won't change if you spend your time fixing it. Instead you could just forget the traces etc. and argue your case in words.
Once the x86utm version is correctly understood this forces people
>
to understand that my notion of a correct simulation is correct or
lie about it. Richard seems to prefer to lie about it. You refuse
to look at my proof.
Disagreeing with the execution trace of this code is like disagreeing with arithmetic. LIARS ARE EXPOSED.
But the execution trace doesn't actually prove anything, except that HH aborts its simulation and returns to its caller; since it is defined to be a pure function, we know that DD *MUST* in its actual execution get that same answer, which shows that H was wrong.
But it is YOU that has an incorrect notion, if the goal is to show that the machine being simulated is non-halting.
_DD()
[00001e12] 55 push ebp
[00001e13] 8bec mov ebp,esp
[00001e15] 51 push ecx
[00001e16] 8b4508 mov eax,[ebp+08]
[00001e19] 50 push eax ; push DD
[00001e1a] 8b4d08 mov ecx,[ebp+08]
[00001e1d] 51 push ecx ; push DD
[00001e1e] e85ff5ffff call 00001382 ; call HH
When they maintain an incorrect notion of what a correct simulation is
then they cannot possibly understand that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated
by embedded_H is this exact same thing.
Nope, and that just shows you really don't understand how a real simulator would work, which also explains why you started with such a broken model in the first place.
If a UTM cannot pass a portion of its own tape to its simulated
YES
Then increase the stack space until it doesn't run out. Turing machines
can't run out of stack space unless you programmed them wrong.
A Turing machine can't run out of stack space because there is no stack.
That's like saying a polynomial has limited precision if you evaluate it
badly. It's the evaluation that's wrong, not the polynomial. I know
what you mean, but having talked to maths cranks on Usenet for years, one
thing I would caution against is being slowly sucked into the cranks' bad
use of technical terms.
>
Wandering slightly : also, PO's H/HH/etc. (running under x86utm) requires minimal stack space to run - probably just a few KB would suffice, /regardless of recursion depth/. Given that PO allocates 64KB for the stack, this is not going to be a problem.
>
The reason recursion depth is not a factor is that H /simulates/ D rather than calling it. The simulation does not consume H's stack space, and neither do nested simulations - they all have their own separately allocated stacks.
>
PO's design uses a single 32-bit address space which must hold ALL levels of nested recursion, so obviously something has to fail as nesting levels grow. That would be an "out of memory" failure when trying to acquire resource to create a new simulation level. I.e. a /heap/ error rather than "out of stack".
>
In practice his system would become unusable long before then due to CPU requirements in simulating instructions for each recursion level - that grows exponentially with a factor of (something like) 200 between each level. So at a recursive simulation depth of just 10, a single instruction would take something like 100,000,000,000,000,000,000,000 outer level instructions to simulate, which is just impractical.
>
>
Mike.
>
Thank you very much for being the voice of correct reasoning here.
I just figured out how to handle your objection to my HH code.
>
My idea was to have the executed HH pass a portion of what is
essentially its own Turing Machine tape down to the simulated
instances of HH. It does do this now.
What happens now is that there is one single trace array in global memory, and all simulations append simulated instructions to that one array and can read array entries written by other simulation levels.
NO they have not been doing this and I will encode it so that this is
impossible. I knew about this issue about two years before you recently
raised it. I only found out about this issue from comp.theory
respondents.
That fundamentally breaks the concept of a simulation exactly matching the behaviour of the outer (unsimulated) computation. More details below...
I have known this for at least two years.
>
[but the problem is I don't believe you really understand what the requirements of simulation are, because if you did you simply wouldn't have written code like that in the first place...]
I was emulating a UTM passing a portion of its own tape down to its simulated instances of itself. That aspect always seemed legit.
>
No it's not legit.
>
Turing machine description (TMD) then UTMs cannot exist because
this is the ONLY way for the simulated TMD to get any tape space.
Certainly a UTM can read from its own tape. The objections that
you have seem to be based on false assumptions about HH.
But you didn't, but are too stupid to understand.
There is no UTM tape in your x86utm and you've just picked on this wording because in your mind it seems to validate something people point out is wrong in your code.
The final result must be valid at the Turing Machine level so I
create the analog at the C level.
Have you actually looked at and understood how an actual UTM works?
Actually I've spelled this out in much more detail before and I'm not going through it again. If you didn't understand before I'd be wasting my time.
You still seem to have some false assumptions and you still have not
>
looked at my proof that DD <is> correctly simulated by HH. For three
years it has been like I claim that 2 + 3 = 5 and everyone says:
"I don't believe in numbers thus there is no arithmetic."
That is not the way that actual UTMs work.
And calling it "a portion of what is essentially its own TM tape" is just obfuscation to try to hide the obvious breaking of rules.
>
Not if the simulated instances never access the data from the
outer-simulations. I will encode this to make it impossible.
The only legit way of doing this (without breaking simulation-compatibility behaviour rules somewhere below from an earlier post) is for each simulation to have its own trace table.
It must work with actual UTMs.
In other words, you are admitting that the simulation that you claim to be done by the outer HH never actually proceeds past that call to HH, like you claim is needed for a correct simulation.
Obviously all levels of simulated HH must initialise and maintain that table themselves, and code paths etc. are exactly the same for outer execution and all inner simulations.
This never happens. The outer HH called from main() writes its
>>>
I will encode u32 start_tape_address to the position in
execution_trace where the simulated HH begins so that it
will not ever look further back.
>
typedef struct Decoded
{
u32 Address;
u32 ESP; // Current value of ESP
u32 TOS; // Current value of Top of Stack
u32 NumBytes;
u32 Simplified_Opcode;
u32 Decode_Target;
} Decoded_Line_Of_Code;
>
execution_trace is essentially a std::vector of Decoded_Line_Of_Code
>
That's exactly what I imagined. But OUTER traces will write their trace data into the vector BEYOND your start_tape_address. In fact, 99% (or more) of entries beyond your start_tape_address will be for outer simulation instructions.
>
trace to execution_trace and never writes anything ever again.
The last thing that it writes is the call to the next inner HH.
But the "inner" simulations aren't actually real, just an interpretation of the outer simulation. This is what we have been telling you: your H doesn't even meet your own definition of what a correct simulation should be.
The outer HH instance can see the inner simulations the inner
By now you must have realised the problem is the "master UTM tape", which is not part of any computation model you're using. Even TMs, which do have a (personal) tape, cannot have a shared read/write portion of tape that is global to all simulations.
The key objection that you seemed to have is that it can't pass
any information to its simulated instance that they can use in
their own halt status decision.
Partly right but woefully incomplete. If you added "...or to modify its logical behaviour in any way" that would be a good summing up.
>
Either the simulated instances must some how know not to
allocate memory for what is essentially a portion of the
master UTM tape that is passed down to them, or the master
UTM must some how find the UTM tape of these simulated instances.
>
>
simulations do not need to see the outer ones. It is the outer
simulation that first sees its abort criteria. I can code this
better.
I see Joes has summed it up succinctly: The TM /uses/ its tape to /implement/ the virtual machine where simulation is occurring. That's not a read/write backchannel between the simulator and simulated.
No one here has had much of any actual clue besides you.
>
It is the exact same freaking machine code forcing it to be
Your specific focus on just "their own halt status decision" is only one part of the picture - my guess would be you know "the rules" but for whatever reason the only solution you can come up with breaks the rules, so you try to convince yourself that that's ok by changing the rules.
>
I am having you analyze the details. I was basically gobsmacked
that a TM cannot do any damn thing that any C program can do.
It has been at least three years since then.>>
More specifically, the behaviour of a simulated HH must exactly[*] match the behaviour of the outer (unsimulated) HH.
>
Behaviour means all of the following:
>
- the instruction path of simulated HH exactly matches the instruction path
of the outer HH. [Currently HH has different code paths for simulated/
outer execution! It is not enough that you /believe/ this will not affect "the result".]
>
They are exactly the same except that the inner ones remain stuck
in recursive simulation until the outer one stops simulating its DD.
They are definitely not the same.
exactly the same.
Part of the issue here is that you routinely filter out all the logging entries from H/HH, so perhaps it's not obvious to you that they are different. The "remain stuck" is OT.
HH never looks at any code that is part of the x86utm operating system.
>
HH cannot handle conditional branch instructions at all.
https://link.springer.com/content/pdf/10.1007/978-3-030-99527-0_21.pdf
It can handle conditional branch instructions only after decades of work
and millions of dollars.
Whether the whole idea of simulating halt deciders is valid or
It doesn't, but that's off topic right now.
*This unequivocally proves the behavior of DD correctly simulated by HH*
https://liarparadox.org/DD_correctly_simulated_by_HH_is_Proven.pdf
>
>
useless depends on that proof.
*This is my most concise proof of that*
On 6/6/2024 1:35 PM, olcott wrote:
Subject: [DD correctly simulated by HH ---
never stops running without aborting its simulation]
I think that I showed that it can with actual UTMs.
I don't believe that can be legitimately achieved with your approach.
- registers and all data and data locations used by the simulation must match the registers
and data and data addresses used by the outer HH. [*]
>
- no backdoor channels where a nested simulation feeds data to a nesting simulation or vice versa. Obviously an outer simulation can "scrape" data from its directly simulated computation.
>
>
[*] exactly match?? A big headache is that you've chosen a model where all the simulations share one 32-bit address space. So simulation working data will be at different locations to the outer working data. IF your code acts "responsibly" this can be catered for. E.g. simulated data on stacks will be at different addresses for each simulation - but still the location of the data within the stack and their values should /exactly/ [*] match across all simulations. Similarly a call to Allocate() will return different results in each simulation, but the location of all data used by the simulation relative to that allocation address should match, and the value of the data should match [* allowing for pointer mismatches etc.]. Near the end of this post I give some rules for code that acts "responsibly" - those rules try to ensure these address relocation problems don't affect arguments you make.
>>>
None of the simulated instances ever did this,
The inner simulations examined a succession of trace entries written by other simulation levels.
I don't think that they ever did this and they certainly do not need
to do this so I will encode HH so this is impossible.
>
>
Incidentally I've just checked the HH source code, and nested HH's actually SKIP all the inspection of trace tables! - it's skipped due to the "Am I being simulated" logic, so the code path is completely different to the outer HH. (Perhaps because of this they don't see outer trace data, but that code is an even worse problem.) Simulations (must) have no way of detecting they're being simulated.
It might look like that. HH always ignores all of the x86utm
>
operating system code.
Sure I can. I give them their own first address and encode
You can prevent inner simulations from looking at trace entries that were written before they started.
That is wrong. I think you've convinced yourself they didn't do anything wrong, because you've decided to focus only on "affect their own halt status decision", but that's not the requirement.
>
I knew that this was wrong for three years.
I will make the minor change to my code to make it
more obvious that they are not doing this.
>>yet I can make>
this more clear. As soon as they are initialized they can store
their own first location of this tape and never look at any
location before their own first location. In this case they
would never get a chance to look any data from the outer
simulations that they can use to change their own behavior.
I think I understand what you're thinking, and
a) you are just trying to replace one Wrong with another Wrong that you
hope will be given a tick by someone, rather than looking at what
a simulation needs to be, and implementing it logically
b) it won't work
>
You say that simulated HH looks back at outer-simulations
I say no they don't. I will encode it so that it is more
obvious that they don't look back.
>
them so that they never look back beyond that.
That seems to be what you're imagining. But the outer simulations are still running!!!! They will continue generating trace entries
Nope, they completely finished before the inner one begins.
HH ignores all of the x86utm operating system code so
when DD calls another instance of HH(DD,DD) that is the
last thing that the outer HH ever writes to its execution_trace.
and writing to the table which will be picked up by the inner simulations.
Yes, just like a UTM shares a portion of its own tape.
>
Hmmmmmmmmmmmm, I've just had a realisation - you're thinking that you /suppress/ all those outer trace entries because you only capture instructions from within D/DD. So although you /are/ sharing one memory area for output by all levels of simulation, you've arranged your code
so that as the code stands, the outer simulations in fact /won't/ write to the shared area (until the inner simulations terminate).
Not even then, because the inner ones only terminate when
>
the outer one aborts its simulation of DD.
Nonetheless, simulators simply do not share such read/write backchannels between simulator and simulated code. Posters here can and I'm sure WILL simply point out that "this is not a valid simulation".
Posters here besides you seem to be happy with disagreeing with
>
verified facts. I thought that Ben was lying too, the real issue
is that he was hiding his ignorance of some of these technical
details. He may have memorized theory of computation material
quite well. He may have been mostly clueless about actual programming.
Another point - The simple restrictions (like no mutable static data, identical code paths, use of pointer values and so on) are actually principles intended to /help/ you to justify the correctness of your code. As soon as you start deliberately breaking them but claiming it doesn't matter, you're going to have problems convincing people the code logic does what you claim. Don't think you can just tell people "Anyone with enough skill can see that blah blah" and all will be ok - you should realise through experience that doesn't turn out as you'd hope.
They only need to look at things at a high level of abstraction to
>
see that I totally proved that DD is correctly simulated by HH
with no need to ever look at any of my code.
https://liarparadox.org/DD_correctly_simulated_by_HH_is_Proven.pdf
When HH simulates each instruction of the x86 machine code of DD
correctly and in the correct order THEN THIS CONCLUSIVELY PROVES
THAT THE SIMULATION IS CORRECT.
2 + 3 = 5 EVEN IF YOU DON'T BELIEVE IN NUMBERS
(1) UTMs either share a portion of their own tape with their
As for (a): What I think you're proposing is to still have the global trace table, initialised by the outer HH, and still have different code paths followed by inner and outer simulations. That's all No Good.
>
The outer UTM is allowed to pass a portion of its own tape
to its simulated instances. I may not have done this in the
cleanest way, yet *this <is> my architectural design intention*
UTMs just DO NOT DO THAT. Where do you get the idea someone has "allowed" them to do that?
>
simulated Turing machine Description or UTMs cannot exist.
(2) UTMs can read from their own tape.
(3) UTMs cannot write to the portion of this tape that their
simulated TMDs are using or the notion of computable function
is trashed.
As easy as 1,2,3...
Before the notion of simulating halt decider that I invented
As for (b): I think you intend each inner simulation to be told where it is starting in the global trace, so that when it inspects it, it can start from its own starting point. The problem is that all simulations append to the table beyond that point, so a given simulation will /still/ be seeing entries directly added by other simulation levels.
>
That is OK as long as they are lower inner levels and not higher
outer levels, according to what I figured out on my own and you
confirmed:
>
Message-ID: <rLmcnQQ3-N_tvH_4nZ2dnZfqnPGdnZ2d@brightview.co.uk>
On 3/1/2024 12:41 PM, Mike Terry wrote:
>
> Obviously a simulator has access to the internal state
> (tape contents etc.) of the simulated machine. No problem there.
>In effect you're still trying to keep the shared mutable static data idea that breaks all 3 of the "proper simulation" requirements I laid out above.>
>
The outer UTM is allowed to pass a portion of its own tape
to its simulated instances. I may not have done this in the
cleanest way, yet *this <is> my architectural design intention*
UTMs just DO NOT DO THAT.
>
they never needed to do that.
I think you're confused by my comment somewhere else about the outer TM using its own tape resources to /implement/ the virtual machine it creates, like a TM that calculates Pi /uses/ its tape as a resource during its calculation. That doesn't count as "sharing a portion of its tape" with the calculation.
Obviously a simulator has access to the internal state
>
(tape contents etc.) of the simulated machine. No problem there.
DebugStep() is an x86utm operating system function. HH must ignore
Why would you want to do that rather than fix the obvious design flaw itself? I even explained some time ago how to do that properly, but you're presenting yourself as some kind of master "Software Engineer" and yet you can't work that out for yourself? I'd guess it should be perhaps a day's programming effort...
>
I must not have understood what you were saying.
>Anyway, the correct approach should obviously be for each simulation to maintain its own trace table which is only written to and read by itself. [Obviously, outer simulations could work out where that virtual table was implemented and /scrape/ data out of it if they wanted to.>
I knew that was one option yet could not figure out any way that HH
can reliably scrape the data from its inner simulated HH instances
without knowing that these inner HH instances are a copy of itself.
>
If HH merely knows its own machine address then it can do this. If HH
knows its own machine address then there is no need for data scraping
it rejects the next simulation of itself before it begins. The newer
H already does that.
>
Rather than HH having to know its own machine address, it is more the case that HH has to recognise when its simulated computation is itself executing a DebugStep to simulate something.
all OS functions because they contain conditional branch instructions
that ruin its non-halting criteria.
Since in your x86utm operations like DebugStep are primitive ops (provided atomically by x86utm rather than the "user code" in H/HH/..) it is easy for HH to see that it is simulating a nested DebugStep() operation. Also it can see what that nested DebugStep is debug-stepping, and so on.
It must ignore its own code or it fails.
>
I haven't got the 20 years and million dollars that it took for
AProVE: Non-Termination Witnesses for C Programs
to handle these.
Not at all as a verified fact.
But those outer simulations would have their own trace tables with the data relevant for their processing. What's more, the code paths and behaviours of HH in each simulation level would be identical, which is a basic simulation requirement.
>
Yet they themselves cannot ever stop running unless and until the
outer HH stops simulating them. If every HH stops simulating after
three recursive simulations then the outer one sees this first.
>>>>
I will implement this in code sometime later today and publish
this code to my repository.
>
The only issue left that seems to not matter is that each simulated
HH needs to see if it must initialize its own tape. Since this
has no effect on its halt status decision I don't think it makes
any difference.
That's wrong thinking. Each simulation level must be exactly the same as the outer one. Not just in terms of "their halt status decision" but in terms of code paths, data values accessed [* qualified as explained above due to data relocation issues in your environment].
>
They are exactly the same except that the inner ones remain stuck
in recursive simulation until the outer one stops simulating its DD.
They are not currently exactly the same, as explained above. HH simulated code paths are quite different to outer HH.
>
Can't possibly do this. This would take 20 more years and one>>
*This unequivocally proves the behavior of DD correctly simulated by HH*
https://liarparadox.org/DD_correctly_simulated_by_HH_is_Proven.pdf
>Look - if the outer HH has to initialise a variable, the inner HHs have to initialise that same variable, because it is THEIR implementation of that variable, within THEIR virtual address space provided by their simulator. Your problem comes because you insist on trying to SHARE DATA ACROSS SIMULATIONS.>
>
Just follow the simple and rather obvious rules like:
>
- no mutable static data. All mutable data should be anchored within the
simulations stack.
- when pointers are involved, the pointer values should only be used
within code to be dereferenced and access the "actual" data stored
at that location. [and apply this rule recursively if required!]
>
I can do all of this when HH can know its own machine address. H(D,D)
is already implemented this way.
>
To follow your advice HH still needs to know its own machine address
so that it knows that the simulated copies of itself are copies
of itself. If it does not know this then it has no idea where to
look inside the simulated copies to find their execution traces.
It simply has to recognise that a recursive DebugStep is occurring and scrape what it needs at that point. Or perhaps it can just use what is returned directly by the outer DebugStep. It depends on exactly what data HH needs in that situation, and what DebugStep returns. [more below...]
>
million dollars.
So far EVERYONE has rejected the basic notion of a simulating
halt decider out-of-hand without sufficient review for three
years.
*This is my most concise proof of that*
On 6/6/2024 1:35 PM, olcott wrote:
Subject: [DD correctly simulated by HH ---
never stops running without aborting its simulation]
One can continue to essentially insist that 2 + 3 = 5 is wrong
because they simply do not believe in numbers. That is not
helpful. The machine code of DD correctly simulated by HH
proves that this simulation is correct and a counter-example
is impossible.
*This unequivocally proves the behavior of DD correctly simulated by HH*
https://liarparadox.org/DD_correctly_simulated_by_HH_is_Proven.pdf
I doubt HH needs to find the execution trace table for the simulated computation. What it needs is just what it needs to put in its own trace table, but I don't know what you expect that to be.
static const data should be ok I think, as it's akin to how a TM would implement a "literal value" in its program.
>>>
I will double check everything to make sure there is no data passed
from the outer simulations to the inner simulations that can possibly
be used for any halt status decision by these inner simulated
instances of HH.
..OR affect the code path blah blah. Your focus on /just/ the halt status decision is not enough. And anyhow you /know/ that the /only/ data passed to inner simulations must be the code to be simulated, and the input data arguments (simulating DD(DD) that is the code of DD and the DD argument). NOTHING ELSE, regardless of what it affects...
>
Try and show how HH can find the execution trace data of the
simulated copies of itself without knowing that they are simulated
copies of itself. This seems to require HH to know its own machine
address and H(D,D) already fully implements that to reject the
invocation of the next simulation.
>
>
Here is a typical scenario:
>
Outer L0 Sim L1 Sim L2 Sim L3
--------- --------- --------- ---------
DebugStep ------> DebugStep ------> DebugStep ------> push ecx
>
Outer HH is about to step its simulation (Sim L1) using DebugStep.
The simulation in L1 (whatever the program) is about to step its own (nested) simulation Sim L2. In turn L2 is about to step Sim L3, which is about to execute push ecx.
>
So what do you want to be appended to L0's instruction trace when the above has all been executed? Here are some possibilities:
>
A) ... // prev entries
DebugStep
DebugStep
push ecx
>
B) ... // prev entries
push ecx
DebugStep
DebugStep
>
C) ... // prev entries
push ecx
>
(C) is simple. (A) and (B) would be more complicated so I'll wait for your answer before proceeding. I kind of doubt that you'd need to inspect DebugStep traces for your abort pattern matching...
>
>
Mike.