On 5/18/25 1:08 PM, Mr Flibble wrote:
Analysis of Richard Damon’s Responses to Flibble
=================================================
Overview:
---------
Richard Damon's critiques of Flibble's arguments regarding the Halting
Problem and pathological inputs are based on a classical Turing model.
However, his rebuttals fundamentally misunderstand or misrepresent the
core of Flibble’s alternative framework. Below is a breakdown of the key
errors in Damon’s reasoning.
1. Misconstruing the Category Error
-----------------------------------
Damon claims:
"No, they are not [category errors]."
Flibble’s argument is not that the input is syntactically invalid, but
that it is semantically ill-formed due to conflating two distinct roles:
- The decider
- The program being analyzed
Those are not "Categories" but "Roles", so it seems you have made a category error in defining your categories.
This is a semantic collapse, akin to Russell’s paradox or typing errors in
logical frameworks. Damon's rebuttal fails to address the **semantic**
level, focusing only on syntactic permissibility.
Nope. The DEFINITION of a Halt Decider is to decide on the representation of *ANY* program. Thus, since a decider is itself a program, a decider, or a program built from one, is in the category of things suitable to be decided on.
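To make that concrete, here is a minimal toy sketch in C (my own example; the names H and D are hypothetical, and this H, which always answers "halts", is just a stand-in for any concrete decider). D is built from H, is a complete ordinary program, and so sits squarely in the category that H must decide on:

    #include <stdio.h>

    typedef void (*prog_t)(void);

    /* A stand-in "halt decider": this toy version always answers
       "halts" (1). The diagonal argument shows that EVERY concrete
       choice of H must be wrong on some such input. */
    int H(prog_t p) { (void)p; return 1; }

    /* D is an ordinary, complete program built from H, and is
       therefore a legitimate member of H's input category. */
    void D(void)
    {
        if (H(D))       /* ask H whether D halts             */
            for (;;) ;  /* H said "halts": do the opposite   */
                        /* H said "loops": halt immediately  */
    }

    int main(void)
    {
        printf("H(D) = %d\n", H(D));  /* prints H's (wrong) verdict */
        /* actually running D() here would loop forever,
           contradicting that verdict */
        return 0;
    }

Whatever concrete H you substitute, the D built from it defeats that H in the same way; that is the diagonal argument, not a category error.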
2. Ignoring the Model Shift
----------------------------
Damon critiques Flibble’s ideas *from within the classical Turing
framework*, missing the fact that:
- Flibble is proposing a **different semantic foundation**.
But to propose a "different foundation" you actually have to define one. The Flibble framework doesn't define a new framework; it merely asks for some ill-defined changes to the existing one.
- In this new foundation, certain forms of self-reference are *disallowed*
as a matter of type safety.
Damon’s refusal to recognize the change in assumptions renders his
critique logically misaligned.
IF you intend for your statements to be outside the field of Computation Theory, you need to make this clear. This is part of the same error that Olcott makes.
The problem is that Computation Theory fundamentally allows the formation of this self-reference (the ability to give a machine, as an input, an input based on a machine derived from that machine).
There is no way in Computation Theory to disallow this.
Thus, to actually implement your ideas, you need a totally new method of defining what a program is and how to build an input.
3. Misreading Flibble’s Law
----------------------------
Damon dismisses Flibble’s Law:
"Which just isn't true..."
But Flibble’s Law isn’t about infinite computation—it’s about *recognizing
the structure* of potentially infinite behavior. It’s a meta-level
principle, not an operational mandate.
Which isn't true. Flibble's Law tries to establish a "right" for a given program, but a program can have no rights not established by the theory it is defined under. Since the DEFINITION of a decider is to ALWAYS answer in finite time, it is definitionally impossible to "allow" a machine that is a decider to take unbounded time, as Flibble's Law tries to do.
That is like saying that 1 equals 2, at least for big values of 1.
Damon misinterprets it as a proposal for unbounded simulation time, rather
than an assertion about the necessity of structural analysis.
Because your wording doesn't say that. The problem is that computational logic just doesn't allow restricting valid questions based on whether the question is computable.
This shows a fundamental misunderstanding of what the field is.
4. Stack Overflow as a Semantic Signal
--------------------------------------
Damon argues that stack overflow represents a failed computation:
"...it just got the wrong answer."
Flibble’s view is different:
- A stack overflow (or crash) isn’t failure.
Sure it is. A program that fails to complete has just failed to give an answer at all.
If you want to define "stack overflow" as an "I don't know" result, fine, but first you have to define that this is a "valid" result.
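As a sketch of what such a definition would minimally require (my own illustration, nothing Flibble has actually specified), the result type of the decider itself would need a third value, and the new theory would then have to say when that value counts as a correct answer:

    /* Hypothetical three-valued result type. Nothing like this exists
       in the classical definition of a decider, which permits only
       accept or reject, delivered after finitely many steps. */
    enum verdict {
        HALTS,        /* the input halts                              */
        NON_HALTING,  /* the input runs forever                       */
        NO_ANSWER     /* e.g. stack overflow on "pathological" input; */
                      /* a new theory must define when this counts    */
                      /* as correct                                   */
    };

Until that third case is given a definition and a correctness condition, "stack overflow as a semantic signal" is just a label, not a result.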
- It is the **semantic manifestation** of an ill-formed input—an expected
behavior when the category boundaries are violated.
But there is nothing "ill-formed" about the input.
Again, that is a presupposition you make that isn't true.
This is a reinterpretation of what “failure” means, not a bug.
And "redefinition" isn't an allowed operation.
You can start over and make a new system, but that means you NEED to start over.
This is just a repetition of Olcott's misunderstanding of how logic works.
5. Misidentifying Recursion Criticism
-------------------------------------
Damon says:
"Just the presence of recursion isn't the problem."
This is a straw man. Flibble does *not* claim recursion is inherently
invalid—only that **self-referential decider/input recursion** creates
type-theoretic inconsistency. Damon’s rebuttal avoids the specific case
Flibble addresses.
But Flibble can't define where the error is. Note that in the actual theory the program given as the input has definite behavior, because it is built from a specific decider; there is no ambiguity about the correct answer.
Only by adopting Olcott's error of making an input that isn't actually a program, and claiming that doing so is correct, does the pathology in the form addressed by Flibble appear. The original theory already calls the Olcott input a category error, because it isn't an actual program but a template that can't ever actually be run (as it has missing code); it can only be simulated/decided by a decider with a specific name (an attribute that isn't supposed to have import in the theory).
Thus, the Flibble theory rests on allowing one category error in order to define something else as a category error.
Yes, the Olcott DDD is a category error, not because it calls the decider HHH, but because it assumes it can call HHH as an external, a concept that doesn't exist in the theory. When the input is made into a full program, it still foils the decider it was built from, but only that one decider; any other decider can answer it correctly.
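For reference, the template in question has roughly this shape (a paraphrase of the code from the comp.theory threads, not a verbatim quote):

    /* Paraphrase of the "DDD" template: it names a specific decider,
       HHH, but does not contain HHH's code. */
    int HHH(void (*p)(void));   /* declaration only; the body is missing */

    void DDD(void)
    {
        HHH(DDD);   /* calls the decider by name, as an external */
        return;
    }

    /* This compiles but cannot link or run until some particular HHH
       is supplied, which is exactly what makes it a template rather
       than a program. */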
6. Downplaying the Role of SHDs
-------------------------------
Damon concedes:
"Yes, partial deciders have some uses..."
But Flibble’s point is stronger:
- SHDs are useful not as general solvers, but as **structural recognizers
of malformed input**.
- Their “failure” (e.g., crashing on pathological input) is
**informative**, not defective.
Damon reduces SHDs to weak approximators, missing Flibble’s proposed
semantic role.
No, I do not. I point out that they have *SOME* uses.
They do not fulfill the original desire for a Halt Decider, which was to make all logic computable with mathematics, allowing any problem to be answered. Partial deciders just don't meet this requirement.
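A partial decider in that limited, legitimate sense is easy to exhibit; here is a minimal toy sketch (my own example, not Flibble's SHD): simulate for a bounded number of steps, answer definitely when you can, and say "don't know" otherwise:

    #include <stdio.h>

    /* Toy "program": repeatedly apply f to x; it halts when x == 0.
       The partial decider runs at most 'budget' steps and returns
       1 ("halts") or -1 ("don't know"). It is never wrong; it just
       cannot answer every question. */
    int partial_decider(long (*f)(long), long x, long budget)
    {
        while (budget-- > 0) {
            if (x == 0) return 1;  /* observed halting: definite answer */
            x = f(x);
        }
        return -1;                 /* budget exhausted: no answer */
    }

    long dec(long x)  { return x - 1; }  /* reaches 0 from x = 5 */
    long same(long x) { return x; }      /* never reaches 0 from x = 5 */

    int main(void)
    {
        printf("%d\n", partial_decider(dec, 5, 100));   /* prints  1 */
        printf("%d\n", partial_decider(same, 5, 100));  /* prints -1 */
        return 0;
    }

Useful, yes; but the original goal was a decider with no "don't know" case, and that is what cannot exist.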
7. Failing to Address Reframing
-------------------------------
Damon says:
"The halting problem isn't malformed..."
In Turing’s model, it isn’t. But Flibble isn’t working in Turing’s model—
he is reframing the domain entirely to exclude semantically ill-formed
constructs. Damon critiques within one framework while Flibble builds
another.
You say you are not working in Turing's model, but you haven't actually created any other model, so either you are talking about something totally undefined, or you are just lying.
Conclusion:
-----------
Richard Damon's responses fail not because of flawed logic, but due to a
**category error of interpretation**: he assumes Flibble is arguing within
the classical Turing framework when Flibble is explicitly rejecting it in
favor of a type- and semantics-based system.
No, Flibble's logic is the one that failed on numerous category errors.
Damon's responses miss the mark because they measure Flibble’s ideas by
the wrong metric. The disagreement isn’t about whether the Halting Problem
is undecidable—it’s about *how the problem should be framed* in the first
place.
Since Flibble hasn't created an alternate metric, there is nothing else to measure his ideas by.
*IF* Flibble would phrase his ideas as a preliminary exploration of an alternate system, and thus not be making claims about the classic system, he might get a better reception.
For instance, Flibble claims to only need to define the extra case for "pathological input", but has failed to deal with the questions about defining what this means. Since the only defined theory of computation in view defines program composition as (at least logically) copying the code of one machine into the other, the "pathology" of the Olcott system doesn't actually occur. (This has been pointed out to Peter Olcott, and he has just ignored it.)
Since that copying can have semantically neutral modifications performed on it (like adding no-ops), "comparing" the programs isn't actually a viable measure of pathology.
It is, in fact, a known property that it is in general uncomputable to determine whether two algorithms compute the same results for all inputs, but we CAN generate a "clone" of a machine that would need that uncomputable comparison to detect that it is one.
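A toy illustration of that last point (my own example): the two functions below compute the same result for every input, yet no textual comparison will say so, and deciding such equivalence in general is undecidable (a standard consequence of Rice's theorem):

    #include <stdio.h>

    int f(int x)
    {
        return x + x;
    }

    /* A behavioral "clone" of f, padded with semantically neutral
       no-ops; any byte-for-byte or structural comparison sees a
       different program. */
    int g(int x)
    {
        int noise = 0;
        noise += 0;      /* no-op */
        (void)noise;     /* no-op */
        return 2 * x;
    }

    int main(void)
    {
        printf("%d %d\n", f(21), g(21));  /* prints: 42 42 */
        return 0;
    }

So any "pathological input detector" that relies on comparing program text can be defeated by exactly this kind of padding.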