On 10/1/24 7:26 PM, olcott wrote:
In the case of their evaluation of my work they are correct.
On 10/1/2024 12:58 PM, joes wrote:
No, it will tell you something that matches the words you told it.
On Tue, 01 Oct 2024 12:31:41 -0500, olcott wrote:
> On 10/1/2024 8:09 AM, joes wrote:
On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
On 10/1/2024 7:19 AM, olcott wrote:
> https://www.google.com/search?q=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&sca_esv=889093c5cb21af9e&sca_upv=1&source=hp&ei=Muf7ZpOyMZHfwN4PwYL2gAc&iflsig=AL9hbdgAAAAAZvv1Qg04jNg2ze170z3a8BSGu8pA29Fj&ved=0ahUKEwiTk7zkk-2IAxWRL9AFHUGBHXAQ4dUDCBg&uact=5&oq=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&gs_lp=Egdnd3Mtd2l6IjxUZXJtaW5hdGlvbiBBbmFseXplciBIIGlzIE5vdCBGb29sZWQgYnkgUGF0aG9sb2dpY2FsIElucHV0IERIAFAAWABwAHgAkAEAmAEAoAEAqgEAuAEDyAEA-AEC-AEBmAIAoAIAmAMAkgcAoAcA&sclient=gws-wiz
> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
It sounds like it’s trained on your spam. LLMs don’t know anything anyway.
> I asked: "What happens when HHH tries to simulate itself?"
ChatGPT: [my comments in brackets]
### 1. **Self-Invocation in HHH**
If `HHH` calls itself (i.e., `HHH(HHH)`), it creates an infinite loop
of calls unless there is some form of termination condition built into
`HHH`.
Without such a condition, `HHH` would never return, resulting in non-
termination.
[So not a decider.]
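To make that concrete, here is a minimal C sketch of the unguarded case being described. It is purely illustrative, not olcott's HHH; the simulation step is modeled as a plain function call:

```c
#include <stdio.h>

/* An "input" wraps the function under analysis, so the analyzer can be
 * handed itself as its own input without awkward pointer casts.         */
struct Input;
typedef int (*Func)(struct Input *);
struct Input { Func f; };

/* Stand-in for a step-by-step simulator: modeled here as a direct call. */
static int Simulate(Func f, struct Input *x) { return f(x); }

/* HHH with no termination condition: given itself as input, it
 * "simulates" HHH(HHH), which simulates HHH(HHH), and so on forever.    */
static int HHH(struct Input *x)
{
    return Simulate(x->f, x);
}

int main(void)
{
    struct Input self = { HHH };
    HHH(&self);            /* never returns; in practice the stack overflows */
    puts("unreachable");
    return 0;
}
```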
>
### 2. **Halting Criteria**
In a well-designed termination analyzer, `HHH` would need to have
specific logic to prevent endless recursion:
I’ll leave that to you.
[Not present.]
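One minimal form that "specific logic" could take is a cap on simulation depth. This is only a sketch under assumed conventions (0 meaning "does not halt"), not olcott's code:

```c
#include <stdio.h>

struct Input;
typedef int (*Func)(struct Input *);
struct Input { Func f; };

enum { MAX_DEPTH = 100 };
static int depth = 0;          /* how many nested simulations are in flight */

static int HHH(struct Input *x)
{
    if (depth >= MAX_DEPTH)    /* termination condition: stop re-entering   */
        return 0;              /* assumed convention: 0 = "does not halt"   */
    ++depth;
    int result = x->f(x);      /* "simulate" the input (modeled as a call)  */
    --depth;
    return result;
}

int main(void)
{
    struct Input self = { HHH };
    /* Bottoms out after MAX_DEPTH nested calls and reports 0. */
    printf("HHH(HHH) reports: %d\n", HHH(&self));
    return 0;
}
```

Note that when the cap fires, the verdict comes from the guard rather than from the input actually finishing or looping.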
>
### 3. **Practical Consequences**
If `HHH` lacks the ability to handle self-referential calls correctly:
- **Stack Overflow**: Continuous self-invocation will eventually
exhaust the stack memory, leading to a stack overflow error.
[Not considered in theory.]
- **Incorrect Results**: If not designed carefully, it may lead to an
incorrect conclusion about halting, misrepresenting itself as either
halting or non-halting.
>
### 4. **Best Practices for Termination Analyzers**
To avoid these pitfalls, a termination analyzer should implement:
- **Guard Clauses**: Checks at the beginning of the function to prevent
unnecessary self-simulation.
- **State Management**: A way to track the functions being analyzed,
allowing `HHH` to recognize if it's being invoked with itself and act
accordingly.
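As a sketch of how those two ideas could combine, the analyzer below keeps a table of the inputs it is already simulating and bails out when the same input re-enters. This is a hypothetical illustration, not olcott's implementation; the return convention (0 = "does not halt") is assumed:

```c
#include <stdio.h>
#include <stdbool.h>

struct Input;
typedef int (*Func)(struct Input *);
struct Input { Func f; };

enum { MAX_ACTIVE = 64 };
static const struct Input *active[MAX_ACTIVE];  /* inputs currently being simulated */
static int n_active = 0;

/* State management: remember which inputs are already under analysis. */
static bool already_simulating(const struct Input *x)
{
    for (int i = 0; i < n_active; ++i)
        if (active[i] == x)
            return true;
    return false;
}

static int HHH(struct Input *x)
{
    /* Guard clause: the same input re-entering the simulation is treated
     * as a self-referential, potentially endless pattern.                 */
    if (already_simulating(x))
        return 0;              /* assumed convention: 0 = "does not halt"  */

    if (n_active == MAX_ACTIVE)
        return 0;              /* table full: give up (kept trivial here)  */
    active[n_active++] = x;    /* record x before simulating it            */
    int result = x->f(x);      /* "simulate" the input (modeled as a call) */
    --n_active;
    return result;
}

int main(void)
{
    struct Input self = { HHH };
    /* The second entry into HHH with the same input trips the guard. */
    printf("HHH(HHH) reports: %d\n", HHH(&self));
    return 0;
}
```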
>
### Conclusion
In summary, if `HHH` tries to simulate itself without proper safeguards,
it risks falling into an infinite recursion or causing a stack
overflow. A robust design would include checks to prevent
self-invocation, ensuring the termination analyzer can handle all
cases, including its own function,
gracefully.
> Click on the link and see how it answers this question:
> Is H a Halt Decider for D?
You should feed it our objections.
If you believe in it only when you prompt it, it is not suited as an
authority (fallacious anyway).
>
You feed it your objections.
It will tell you how and why you are wrong.
>
You don't seem to understand what Large Language Models are.
You seem to forget that LLMs know nothing of the "truth", only what matches their training data.
They are known to be liars, just like you.