Subject : Re: Why "not.first.false"
From : ross.a.finlayson (at) *nospam* gmail.com (Ross Finlayson)
Groups : sci.math
Date : 26. Dec 2024, 20:22:20
Message-ID : <XJ2cnWG2BdB3MfD6nZ2dnZfqnPadnZ2d@giganews.com>
References : 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0
On 03/05/2024 09:24 PM, Ross Finlayson wrote:
On 03/05/2024 07:42 PM, Ross Finlayson wrote:
On 03/05/2024 12:33 PM, Jim Burns wrote:
On 3/4/2024 6:16 PM, Ross Finlayson wrote:
On 03/04/2024 01:20 PM, Jim Burns wrote:
On 3/4/2024 1:52 PM, Ross Finlayson wrote:
On 03/04/2024 10:31 AM, Jim Burns wrote:
On 3/3/2024 6:19 PM, Ross Finlayson wrote:
>
And Leibniz is like, "thanks, I got this".
>
Leibniz has this.
We have this.
Nobody enters Cantor's Paradise.
>
n = 1, 2, 3, ...
>
0^1/n = 0, 0, 0, ..., 1
>
What, not first first, not ultimate untrue?
>
Perhaps you'd like some sort of response to that?
>
Perhaps you'd be interested to know that
I don't know what you mean by
not.ultimately.untrue.
>
Here this was
>
0 ^ 1/1 = 0, not.first.false
0 ^ 1/2 = 0, not.first.false
0 ^ 1/3 = 0, not.first.false
>
When I started to use "not.first.false"
I intended it as a short, a very.very.short,
explanation why we all should trust
the conclusions of a correct logical argument
no less than we trust its premises.
>
>
Our goal is to distinguish true ⊤ claims
from false ⊥ claims about points.in.a.line or
widgets or flying.rainbow.sparkle.ponies or Bob.
>
Consider a finite sequence of truth.values, ⊤ and ⊥
any finite sequence of truth.values.
⟨ ⊤ … ⊤ ⟩
>
It's a finite sequence, therefore,
if the value 'false' exists in ⟨ ⊤ … ⊤ ⟩
then the value 'false' exists a first time.
∃⊥ ⇒ ∃₁⊥ in ⟨ ⊤ … ⊤ ⟩
>
That's what I wanted to say:
∃⊥ ⇒ ∃₁⊥ in ⟨ ⊤ … ⊤ ⟩
>
We know that's true because
it's true in general of anything
in a finite sequence.
>
In a finite sequence of playing cards,
If one card is a club
then one of them is the first club.
∃♣ ⇒ ∃₁♣ in ⟨ ♠ … ♥ ⟩
>
And so on.
>
∃⊥ ⇒ ∃₁⊥ in ⟨ ⊤ … ⊤ ⟩
¬∃₁⊥ ⇒ ¬∃⊥ in ⟨ ⊤ … ⊤ ⟩
∀¬₁⊥ ⇒ ∀¬⊥ in ⟨ ⊤ … ⊤ ⟩
∀¬₁⊥ ⇒ ∀⊤ in ⟨ ⊤ … ⊤ ⟩
>
∀⊤ in ⟨ ⊤ … ⊤ ⟩ is our Holy Grail,
wherein all the truth values are ⊤
>
The lemma ∀¬₁⊥ ⇒ ∀⊤
reduces the problem to finding finite ⟨ ⊤ … ⊤ ⟩
such that
each claim is not.first.false in ⟨ ⊤ … ⊤ ⟩
>
That is why "not.first.false"
>
>
First.false is false, thus
true is not.first.false.
₁⊥ ⇒ ⊥
⊤ ⇒ ¬₁⊥
>
Some claims in some sequences must be
not.first.false in their sequences.
>
In ⟨ P P⇒Q Q ⟩ Q is ¬₁⊥
⊥ ⊤ ⊥
⊤ ⊥ ⊥
⊥ ⊤ ⊤
⊤ ⊤ ⊤
>
In ⟨ P Q P⇒Q ⟩ Q might not be ¬₁⊥
⊥ ⊥ ⊤
⊤ ⊥ ⊥ !
⊥ ⊤ ⊤
⊤ ⊤ ⊤
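That difference between the two orderings can be checked mechanically; a minimal Python sketch (names are just illustrative), brute-forcing the truth tables above:

```python
from itertools import product

def first_false_index(seq):
    """Index of the first False truth value in seq, or None if all True."""
    for i, v in enumerate(seq):
        if not v:
            return i
    return None

# In < P, P=>Q, Q >, the claim Q (index 2) is never the first false.
for P, Q in product([False, True], repeat=2):
    assert first_false_index([P, (not P) or Q, Q]) != 2

# In < P, Q, P=>Q >, Q (index 1) CAN be first false: take P true, Q false.
assert first_false_index([True, False, (not True) or False]) == 1
```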
>
Our goal, the Holy Grail, is to find/construct
a sequence of claims such that
each claim in that sequence is not.first.false,
and we can see it is, like Q in ⟨ P P⇒Q Q ⟩
or we already know that claim is true, ⊤ ⇒ ¬₁⊥
>
0 ^ 1/1 = 0, not.first.false
0 ^ 1/2 = 0, not.first.false
0 ^ 1/3 = 0, not.first.false
>
0 ^ 0 = 1, not.ultimately.untrue
>
I don't see how 'not.first.false' and
'not.ultimately.untrue' have
anything to do with each other.
>
The most important use of 'not.first.false'
is in finite sequences of not.first.false claims,
some of which we wouldn't know are true
except for being located in that sequence.
>
The claim.sequence
⟨ 0¹ᐟ¹=0 0¹ᐟ²=0 0¹ᐟ³=0 ... 0⁰=1 ⟩
isn't a finite sequence,
and
we know those claims for reasons other than
being located in that sequence.
>
You may be familiar with this as a definition
in fractional powers with respect to zero
the radix and zero the power, just showing that
as a sort of example that = 0 is not.first.false,
but, that not.ultimately.untrue, is different.
>
Setting aside
'not.first.false' and 'not.ultimately.untrue',
limit(0¹ᐟⁿ) ≠ 0ˡⁱᵐⁱᵗ⁽¹ᐟⁿ⁾
>
Which is to say 0ˣ is discontinuous at 0
>
That could be overlooked, because
arithmetic is continuous _almost_ everywhere.
But that is an exception.
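Python happens to follow the same convention (0**0 taken as 1), so the discontinuity being described can be seen numerically; a quick sketch:

```python
# 0**(1/n) is 0 for every finite n >= 1, yet 0**0 is defined as 1,
# so x |-> 0**x jumps at x = 0: limit(0**(1/n)) = 0, but 0**limit(1/n) = 1.
values = [0 ** (1 / n) for n in range(1, 6)]
assert all(v == 0.0 for v in values)
assert 0 ** 0 == 1
```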
>
In these cases, it's not _jumping_ cases
so much as, _spanning_ cases.
>
The Intermediate Value Theorem works quite well
when each of:
>
extent <- you allow this [0,1], or where LUB = 1
density <- you allow this
completeness <- you don't allow this
>
The Intermediate Value Theorem implies
Dedekind completeness.
Dedekind completeness implies
the Intermediate Value Theorem.
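One concrete way to see the "completeness implies IVT" direction is bisection: the nested intervals pin down an actual point only because the line has no gaps. A minimal Python sketch (the function and interval are just examples):

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Locate a root of f on [lo, hi], given f(lo) <= 0 <= f(hi).
    The shrinking nested intervals converge to an actual point
    only because the reals are complete (no gaps to fall into)."""
    assert f(lo) <= 0 <= f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

root = bisect_root(lambda x: x * x - 2, 0.0, 2.0)
assert abs(root - 2 ** 0.5) < 1e-9  # converges to sqrt(2)
```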
>
That's why I have been so free in
switching between the two when describing ℝ
>
measure <- it would so follow
holds up,
>
That in this case it also exactly is that
dom(EF) is discrete and ran(EF) is continuous,
a continuous domain,
>
s/continuous/connected
>
Then, about completeness as above,
"iota-completeness" if you will,
the LUB of a subset of ran(EF) is in ran(EF)
quite trivially, so, that's the usual definition.
>
That's the usual definition, Jim.
>
It is inconsistent for positive.iota to exist
which equal.spaces infinitely.many points
from 0 to 1
>
0 is the greatest.lower.bound of
finite.ordinal.reciprocals
>
If iota is positive,
a finite.ordinal.reciprocal exists
between 0 and iota,
and
some finite ordinal is larger than
the set of iota.spaced points from 0 to 1
>
Yet, in "Pre-Calculus", then of course
there was the notion of limit, and
it was about mentioned in passing exactly that
the course-of-passage of numbers zero through one,
"constant monotone strictly increasing",
was just being put aside,
as the later work has to be all stood up,
and that it has its own way,
and it's a pretty good way,
and it's standard,
and it's a linear curriculum,
and it's the way everyone would know.
>
Limits are poorly characterized as
infinitieth elements of infinite series.
>
Here is an unnecessary "paradox":
What is limit sin(x)/x as x -> 0 ?
>
There is no sin(0)/0
There is no sin(0)/0 anywhere,
and it's also not at some putative infinitieth entry.
>
There is a point 1
1 is near almost all of
⟨ sin(⅟1)/⅟1 sin(⅟2)/⅟2 sin(⅟3)/⅟3 ... ⟩
>
1 is a synecdoche for
⟨ sin(⅟1)/⅟1 sin(⅟2)/⅟2 sin(⅟3)/⅟3 ... ⟩
>
1 can't be in
⟨ sin(⅟1)/⅟1 sin(⅟2)/⅟2 sin(⅟3)/⅟3 ... ⟩
but it doesn't need to be.
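Numerically that reads as: the terms are never 1, but almost all of them are within any given tolerance of 1. A quick Python check:

```python
import math

# sin(1/n) / (1/n) for growing n: no term equals 1, yet the tail is
# eventually within any tolerance of 1 -- the limit point 1 stands
# for the sequence without belonging to it.
terms = [math.sin(1 / n) / (1 / n) for n in (1, 10, 100, 1000)]
assert all(t != 1 for t in terms)
assert abs(terms[-1] - 1) < 1e-6
```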
>
>
>
>
>
>
That's a pretty good post, Mr. Burns; it's interesting
to watch your developments in style. First there's your
employing the wider symbolry of math symbols, which
usually come from one or two upper blocks in Unicode.
Another refreshing point of style is your adoption of
the concatenation of terms with ".", which, after the
usual course, has the usual meaning of concatenation,
here indicating direct attachment among terms, to help
improve the accidence of words.
>
>
https://en.wikipedia.org/wiki/Mathematical_operators_and_symbols_in_Unicode
>
>
I don't have very good input methods besides "the keyboard keys";
I'm just noting that it's good form. Of course, the usual sort
of "ASCII math" written here is a sort of "sub-LaTeX", as compared
with MathML, TeX, or usually enough LaTeX and its "math mode",
that being the usual format for typesetting mathematical symbols.
>
Beyond style, then, not.first.false evokes the spirit of inductive
inference itself. It's the very idea that each case is true and
none is first.false, thus together all true; there's no doubt
about that.
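The finite version of that lemma can even be checked exhaustively; a small Python sketch of "if every claim is not.first.false, then all are true":

```python
from itertools import product

def no_first_false(seq):
    """True iff no entry of seq is the first False in seq."""
    for v in seq:
        if not v:
            return False  # this entry would be the first false
    return True

# The lemma, brute-forced over all finite truth-value sequences
# up to length 5: no first false implies all true.
for n in range(1, 6):
    for seq in product([False, True], repeat=n):
        if no_first_false(seq):
            assert all(seq)
```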
>
Imre Lakatos calls it "the miseries of induction". Lakatos
describes the era of the formalization of the Fourier integral,
and the establishment of the various properties that go into
formalizing it sufficiently.
>
https://en.wikipedia.org/wiki/Proofs_and_Refutations
>
https://www.youtube.com/watch?v=jUjiY4WlP3U
>
Now, it seems that currently the only hit in Google for
"Lakatos 'miseries of induction'" is a link to my podcasts,
and "miseries of induction" itself gets nothing at all.
That's bad, because it reflects a lot: Cauchy and Seidel
and Dirichlet and Prinzing got into it, and there was a lot of
consternation because the criteria of convergence were
based on induction that, while not.first.false, was not
not.ultimately.untrue. That resulted in some extra conditions,
to make it so that what was considered valid, the particular
criteria of convergence, got into the whole reason why after
convergence there is uniform convergence and these kinds
of things. This basically introduced "isn't not", then "is", to follow.
>
I.e., something that was according to induction didn't arrive.
Lakatos deconstructs that either their axioms weren't independent,
or that otherwise a deconstructionist account resulted; then
it got improved because of "false axioms", in this
case cr.
>
For all its grasp of things, I don't know why search doesn't
bring that up, because it's text in Lakatos, whose "Proofs
and Refutations" is a great little book about rigor and
the creative process.
>
>
So, when I say "inductive impasse", it means that there are two
courses of inductive inference and they contradict each other
in the middle. So, this was "not.ultimately.untrue", as about
the _completions_ of things: that _completions_ are the great
things, that provide the validity of the infinite and the recursive
in mathematics.
>
(Consulting Google for "inductive impasse" doesn't turn
up much, either.)
>
The idea is that in mathematics, as they are completions
of two things that must meet, they must meet in the middle,
they must meet in the middle of "nowhere", as neither's induction
arrives; deductively, they can't not, "not.ultimately.untrue".
>
So, the "Holy Grail" then, sirrah. Here it is as a sort of state
of grace, "not being wrong". So, it's certainly agreeable:
induction and infinite induction is justified and sound for
itself, while it can only talk about its own terms of definition;
it's only so relevant, and the irrelevant is independent.
>
>
About iota-values: you must not bring in the rules of
constant finite quantities to constant infinitesimal
quantities, any more than you would the other way,
except saying nothing at all. Each has its own
constructive justification. It is that both range
across [0,1], in the span of the vector space, as
it were, that makes it quite profound that they are
entirely different and contradistinct models, though
each is a continuous domain.
>
"Completeness" has it that extent and density and
completeness and measure suffice to build the
IVT, upon which directly the FTCs are built. When
studying this I found that "essentially the IVT
is the essential result, after the laws of arithmetic,
to result the fundamental theorems of calculus".
Now, whether that's "Dedekind" completeness or not,
here is that there are three "definitions of continuity",
and that "Dedekind completeness" basically is the standard
"definition of continuity". So, there is line-continuity,
there is field-continuity, and there is signal-continuity,
and each definition of continuity has its own definition
of "continuity completeness". I first wrote this at least
ten years ago, or more.
>
continuity definition <- it's not just Dedekind's
continuity completeness <- each has its own
continuous domain <- any suffices
continuous topology <- a specialization we were discussing
>
So, yes, these are things required, of this kind of concept,
for formalist rigor, and full definition, besides intuition.
>
>
>
Then, you close with a description of a linearisation
called the small-angle approximation, which is that as
x -> 0, sin(x)/x -> 1. Linearisations are examples of
numerical methods, i.e., numerical methods whose infinite
forms, of which some few terms provide much of the result,
are approximations. The small-angle approximation is a bit
of, usually, physics; when the small-angle approximation
was introduced by the instructor, certainly I thought to
myself "no it don't".
>
>
https://en.wikipedia.org/wiki/Small-angle_approximation
>
As you can read there on the Wiki, it's a truncation:
the higher-order terms are all left out. While it
_greatly_ simplifies some things, just because "1"
is very simple to deal with algebraically, it must be
considered in terms of all its contributions, of the
infinite series, about when, where, whether, why,
and whence "non-linear terms dominate", resulting in
tipping it over.
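The truncation error is easy to bound from the series itself: dropping everything after x leaves an error of at most x^3/6, which is exactly what the dominating non-linear terms eventually overrun. A quick Python check:

```python
import math

# sin(x) = x - x**3/6 + x**5/120 - ...; the small-angle rule keeps
# only the leading term x, so the error is bounded by the first
# dropped term, x**3/6 (alternating series with shrinking terms).
for x in (0.01, 0.1, 0.5, 1.0):
    assert abs(math.sin(x) - x) <= x ** 3 / 6
```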
>
Another example of a numerical method with which
you'll be familiar is "e = mc^2", which is just the
first term of a series expansion, all the higher-order
terms small and truncated, yet also not matching
the dimensional analysis, about the attachment
of dimensional analysis and higher-order dimensions:
what results are terms back in the same quantity as the
dimensional analysis, as with regards to "dimensionless quantities".
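Presumably the expansion in view is the relativistic energy in powers of the dimensionless ratio v/c, of which mc^2 is the leading term and the classical kinetic energy is the next:

```latex
E \;=\; \frac{m c^2}{\sqrt{1 - v^2/c^2}}
  \;=\; m c^2 \;+\; \tfrac{1}{2} m v^2 \;+\; \tfrac{3}{8}\,\frac{m v^4}{c^2} \;+\; \cdots
```

Each higher-order term carries extra factors of (v/c)^2, which is where the "dimensionless quantities" enter.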
>
It's all about "quantities", algebraically: by their definition,
by their derivation, by their dependence, their denotation
is within an overall derivation, and inputs and parameters
provide for implicits and essentially the non-linear analysis.
>
>
>
>
So, "not.first.false" needs no defense here: it has its
own very strong justification itself, but it's only as
valid as its attachment, and commitment, and
the validity of that is dictated by the space.
>
>
Here then "Zeno's bridge" is a neat first example,
showing that something's got to give either way,
then showing a "meeting in the middle, middle of
nowhere", of the greater perspective afforded by
"complementary duals and a full-enough dialectic".
>
>
Here in Imre Lakatos' "Proofs and Refutations"
is an example where he quotes Abel characterizing
"miserable Eulerian induction".
>
https://books.google.com/books?id=zb8qDgAAQBAJ&pg=PA142
>
>