On 2024-10-12 22:52:30 +0000, olcott said:
> On 10/12/2024 5:12 PM, joes wrote:
>> On Sat, 12 Oct 2024 14:03:01 -0500, olcott wrote:
>>> On 10/12/2024 9:43 AM, Richard Damon wrote:
>>>> On 10/12/24 6:17 AM, olcott wrote:
>>>>> On 10/12/2024 3:13 AM, Mikko wrote:
>>>>>> On 2024-10-11 21:13:18 +0000, joes said:
>>>>>>> On Fri, 11 Oct 2024 12:22:50 -0500, olcott wrote:
>>>>>>>> On 10/11/2024 12:11 PM, Richard Damon wrote:
>>>>>>>>> On 10/11/24 11:06 AM, olcott wrote:
>>>>>>>>>> On 10/11/2024 9:54 AM, Richard Damon wrote:
>>>>>>>>>>> On 10/11/24 10:26 AM, olcott wrote:
>>>>>>>>>>>> On 10/11/2024 8:05 AM, Richard Damon wrote:
>>>>>>>>>>>>> On 10/11/24 8:19 AM, olcott wrote:
>>>>>>>>>>>>>> On 10/11/2024 6:04 AM, Richard Damon wrote:
>>>>>>>>>>>>>>> On 10/10/24 9:57 PM, olcott wrote:
>>>>>>>>>>>>>>>> On 10/10/2024 8:39 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>> On 10/10/24 6:19 PM, olcott wrote:
>>>>>>>>>>>>>>>>>> On 10/10/2024 2:26 PM, wij wrote:
>>>>>>>>>>>>>>>>>>> On Thu, 2024-10-10 at 17:05 +0000, Alan Mackenzie wrote:
>>>>>>>>>>>>>>>>>>>> Mikko <mikko.levanto@iki.fi> wrote:
>>>>>>>>>>>>>>>>>>>>> Yes. DDD reaches it, so a purported simulator
>>>>>>>>>>>>>>>>>>>>> should as well.
>>>>>>>>>>>>>>>>>> So if we ask the exact question, "can DDD emulated by
>>>>>>>>>>>>>>>>>> any HHH reach its own return statement", they would
>>>>>>>>>>>>>>>>>> answer the counter-factual "yes"?
>>>>>>>>>>>>>>>> When HHH is an x86 emulation based termination analyzer,
>>>>>>>>>>>>>>>> then each DDD emulated by any HHH that it calls never
>>>>>>>>>>>>>>>> returns.
>>>>>>>>>>>>>>> Nope. Even software engineering treats the function HHH
>>>>>>>>>>>>>>> as part of the program DDD, and termination analysis as
>>>>>>>>>>>>>>> looking at properties of the whole program, not a partial
>>>>>>>>>>>>>>> emulation of it.
>>>>>>> Therefore HHH is not a simulator.
>>> I tried to tell ChatGPT the same thing several times and it would
>>> not accept this.
>>> https://chatgpt.com/share/6709e046-4794-8011-98b7-27066fb49f3e
>>> Although LLM systems are prone to lying: if it told a lie there
>>> would be an error that could be found in its reasoning.
>> Not necessarily in the reasoning. The error could also be in the
>> input material.

Right, like you claim that HHH can correctly answer based on its
limited knowledge even if that answer is wrong.
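
For readers outside the thread, the construction being argued about is a
C function DDD that calls the very analyzer, HHH, that is asked to decide
whether DDD terminates. The names DDD and HHH and the description of HHH
as an x86-emulation-based termination analyzer come from the posts above;
everything else in the sketch below (the stub body of HHH, its 0/1 return
convention, and the main driver) is an assumption added purely for
illustration, not the code actually under discussion.

#include <stdio.h>

/* Minimal sketch of the DDD/HHH shape debated above -- NOT the thread's
 * actual code.  In the discussion HHH is an x86-emulation-based
 * termination analyzer; here it is reduced to a stub that merely reports
 * the verdict the thread attributes to it, so the structure is visible. */

int HHH(void (*p)(void))
{
    /* The real HHH is described as emulating p instruction by instruction
     * and aborting once it sees p call HHH(p) again ("recursive
     * emulation").  Because it aborts, the emulated DDD never reaches its
     * return statement, and HHH reports non-termination. */
    (void)p;
    return 0;  /* 0 = "does not terminate" (the disputed verdict) */
}

void DDD(void)
{
    HHH(DDD);  /* DDD calls the very analyzer that is analyzing it */
    return;    /* the "own return statement" both sides argue about */
}

int main(void)
{
    DDD();  /* direct execution: HHH(DDD) returns, so DDD returns too */
    printf("HHH(DDD) = %d\n", HHH(DDD));
    return 0;
}

The quoted disagreement is then about which behavior the verdict must
describe: the partial emulation carried out inside HHH, which is aborted
before the emulated DDD reaches its return statement, or the directly
executed DDD(), which does return once HHH gives up and reports its
verdict.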