On Tue, 9 Apr 2024 10:11:52 -0500, DB Cates <cates_db@hotmail.com>
wrote:

On 2024-04-09 4:09 AM, Martin Harran wrote:

ISTM that your Occam's razor is getting a bit blunt.

On Mon, 8 Apr 2024 10:19:01 +0200, Arkalen <arkalen@proton.me> wrote:
>On 07/04/2024 17:01, Martin Harran wrote:
>On Sat, 6 Apr 2024 10:22:18 +0000, j.nobel.daggett@gmail.com (LDagget)
wrote:
>Martin Harran wrote:
>On Fri, 5 Apr 2024 16:29:20 -0500, DB Cates <cates_db@hotmail.com>
wrote:
>On 2024-04-05 11:05 AM, Martin Harran wrote:

There was quite an interesting discussion a few weeks ago on Free Will
vs Determinism but it died a death, at least in part due to the
departure of some contributors to the Land Beyond GG. I'd like to take
up some of the issues again if anyone is interested.
>
One point made by Hemidactylus that didn't get developed any further
was the way that we sometimes give a lot of time and effort into
making a decision - he gave the example of buying a car. It's also
common for someone to want to "sleep on it" before making a
decision where the decision is important but it is not clear what
decision is best. If a decision is essentially predetermined then what
is the point of that time and effort or sleeping on it?
Do you not see that this argument depends on the belief that there was
an *option* to make the decision earlier under different conditions
(lack of 'thinking it over' and/or 'sleeping on it'). IOW that free will
exists. You are 'begging the question'.

It's actually the complete opposite, I am starting with the assumption
that there is no free will and asking what then is the point in
deliberating over the various options. You seem to be taking things a
bit further and saying that if determinism exists then there aren't
any options to begin with, but that is just a variation in emphasis; it
doesn't address the question of why we spend so much time pondering
those options when they don't even exist.
You missed his point.
Consider writing an algorithm controlling a robot walking down a path.
The robot comes to a fork in the road. Does it take the left fork or
the right fork?
>
The robot has no free will. It can, however, process data.
>
The algorithm can have layered complexity. Scan left, scan right,
process data. Simple-minded algorithm scans 1 sec each way, sums up
some score of positives and negatives and picks the best. If it's a
tie, it might kick the random number generator into gear.
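>
Purely to make that concrete, a toy Python sketch along these lines
(the function names and the stubbed sensor scores are my own
illustrative assumptions, nothing specified above):

import random

def scan(direction, duration_s=1.0):
    # Stand-in for a 1-second sensor sweep in the given direction.
    # A real robot would integrate camera/lidar readings here; for
    # illustration we just return a coarse positives-minus-negatives
    # score (rounded so that exact ties are actually possible).
    return round(random.uniform(-1.0, 1.0), 1)

def choose_fork_simple():
    left_score = scan("left")
    right_score = scan("right")
    if left_score > right_score:
        return "left"
    if right_score > left_score:
        return "right"
    # Dead tie: kick the random number generator into gear.
    return random.choice(["left", "right"])

print(choose_fork_simple())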
>
Alternatively, it can get into a loop where it keeps scanning left
and right until one "choice" passes a threshold for "better" that
is not just a greater than sign, maybe 10% better or such. From
the outside, this is "pause to think". With a little imagination,
one can add much more complexity and sophistication into how the
robot chooses. It can be dynamically adjusting the thresholds. It
can use its wifi connection to seek external data. It can find that
its wifi signal is poor at the fork in the road, so it backs up to where
it was better.
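>
Hedged the same way - the 10% margin, the margin decay and the scan
stub below are guesses of mine for illustration, not anything
prescribed above - the looping variant might look like:

import random

def scan(direction):
    # Stub sensor reading; a real robot would sense here.
    return random.uniform(0.0, 1.0)

def choose_fork_deliberate(margin=0.10, max_scans=1000):
    left_total = right_total = 0.0
    for n in range(1, max_scans + 1):
        left_total += scan("left")
        right_total += scan("right")
        left_avg, right_avg = left_total / n, right_total / n
        # "Better" is not just a greater-than sign: one side has to
        # beat the other by at least `margin` (e.g. 10%).
        if left_avg > right_avg * (1 + margin):
            return "left", n
        if right_avg > left_avg * (1 + margin):
            return "right", n
        # Dynamically adjust the threshold: relax the margin a little
        # each pass so the loop eventually terminates on its own.
        margin *= 0.99
    # Still no clear winner: fall back to a coin flip.
    return random.choice(["left", "right"]), max_scans

choice, scans_needed = choose_fork_deliberate()
print(f"chose {choice} after {scans_needed} scan passes")

The number of passes it takes before the threshold is crossed is what
an outside observer would see as the robot "pausing to think".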
>
Map "go home and sleep on it" to some of that or to variants.
Map it into Don's words. The robot could not "choose" left or
right until its algorithm met the decision threshold, i.e. it
didn't have a legitimate option yet. (hopefully he'll correct
me if I have abused his intent too far)
>
To an outside observer lacking full knowledge of the algorithm,
it looked like it had a choice but inexplicably hesitated.
It is *you* who have missed the point. What you have described above
is an algorithm to process data and arrive at a decision; what I was
asking about is why we delay once all the information that is
available or likely to be available *has been processed*. Once all the
information has been input in your algorithm there is no reason for
the processor to continue analysing unless you add in some sort of
rather pointless "just hang about for a while" function; no matter how
many times your algorithm runs with a given set of inputs, it will
reach the same decision. One exception to that is your suggestion of a
random number generator when the two options look more or less equal
but your problem is that that randomness is the very antithesis of
determinism.
>
I think that makes some big assumptions on what information is being
processed and how the processing actually works (and what that implies
about how long it can take & what conditions cause it to terminate).
>
Consider the common decision-making advice of "flip a coin to make the
decision; how do you feel about the result? You have your decision". It
doesn't always work but I think most would agree that it can. It's also
very analogous to the case of your wife changing her mind after having
made the choice.
>
It also seems clear that this method *does* generate new information, to
the conscious self at least. The reason to do this is that a critical
component of a decision is *how we feel* about something, and this isn't
something we have full conscious clarity on. New events like the coin
flip might not add information about external aspects of the decision
but they can add information about *us* and that can impact the decision.
>
Or more analytically if you imagine decision-making as a back-and-forth
between two different information-processing mechanisms, the one we
consciously experience as thoughts and the one we consciously experience
as feelings, then ISTM that accounts for the phenomenon neatly enough.
Decisions where "feelings" provide a strong answer but "thoughts" don't,
or agree with "feelings", are easy and quickly made. Decisions where
"feelings" give a weak answer but "thoughts" give a strong one are
slightly slower & harder because "thoughts" are a slower & more
effortful process, but still quick enough at conscious scales.
>
>
The really long-winded or difficult decisions are those where both
"thoughts" and "feelings" give weak or ambiguous answers, or they give
answers that are at odds with each other (and it's possible that second
is just a case of ambiguous "feelings" - that "feelings" always carry
the day & situations where "thought" seems to override "feelings" are
actually a case of "thought" identifying a contradiction between
different feelings & resolving it). What goes on with those isn't just
"information processing", or at least the processing is a lot more
involved than that bloodless term suggests. It's a lengthy exchange
between the thinking brain coming up with scenarios, submitting them to
the feeling brain for evaluation, incorporating the result into new
scenarios & repeat until it's kicked the feeling brain into a distinct
coherent preference.
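>
To make that picture concrete - purely as a toy model under my own
assumptions; the two functions, their numbers and the stopping rule
are invented for illustration, not a claim about neural mechanisms -
the back-and-forth could be sketched like this:

import random

def feelings(option):
    # Fast, noisy affective evaluation: a weak, ambiguous preference
    # (the means are close together and the noise is large).
    return random.gauss({"A": 0.55, "B": 0.50}[option], 0.3)

def thoughts(history):
    # Slow deliberative process: frames the next scenario to submit,
    # here simply alternating between the two options.
    return "A" if len(history) % 2 == 0 else "B"

def deliberate(confidence=0.15, max_rounds=200):
    history = []
    totals = {"A": [0.0, 0], "B": [0.0, 0]}
    for round_no in range(1, max_rounds + 1):
        option = thoughts(history)   # propose a scenario
        score = feelings(option)     # submit it for affective evaluation
        history.append((option, score))
        totals[option][0] += score
        totals[option][1] += 1
        if totals["A"][1] and totals["B"][1]:
            avg_a = totals["A"][0] / totals["A"][1]
            avg_b = totals["B"][0] / totals["B"][1]
            # Stop once one option *feels* clearly better.
            if abs(avg_a - avg_b) > confidence:
                return ("A" if avg_a > avg_b else "B"), round_no
    return "undecided", max_rounds

print(deliberate())

The closer and noisier the "feelings" scores are, the more rounds the
loop takes before it settles, which is the long-winded kind of
decision described above.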
>
>
If we collapse all of this into "an information-processing robot" then
all it means is there never was a point of "all information has been
processed". The sleeping on it is information processing; the choosing
curtains and then thinking better of it is information processing;
information is being processed the whole time.
What is going on in our brain whilst we are sleeping still seems to be
one of the most poorly understood aspects of human behaviour but it
seems to me that there is a hell of a lot of brain activity involved
and part of the reason for sleep is probably to allow the brain to
focus more or less exclusively on processing everything we have
experienced that day without being distracted by what is happening
now.
>
Again, that takes me back to the point that I have been making to Don
- where is the benefit from loading the brain with additional activity
just to process information where the decision has been
pre-determined?
Let's say that the conditions at time A pre-determine the action B at
time B. That only works if time A conditions also pre-determine all the
*changes* in conditions up to time B that provide the time B conditions
that determine action B. You can't skip to the head of the line.
>
And in fact could keep
being processed forever, with different cognitive processes being
required to make the processing stop (for example I don't recall the
name of the phenomenon but I'm pretty sure it's a thing that picking an
option causes us to prefer that option more than we did before.
Presumably it says something that this phenomenon wasn't enough to make
your wife feel good about her choice, thus justifying her change of mind).
>
>>>>
The same general retort will apply to most all of your retorts.
In that case, it would have been useful for you to retort to the
example I gave about my wife in the second half of my post which you
ignored.
>>>
An added thing to consider is where "consciousness" comes into play.
All the data the robot is scanning can be processed by sub-processors
that generate most of the information needed to produce a choice
before the central processing algorithm distributes instructions
to the subroutines that activate whatever it is the robot needs
to do to locomote down a path. Fill in the blanks.
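>
One toy way to picture that division of labour (everything here - the
function names, the threshold, the fake sensor data - is my own
illustrative scaffolding, filling in the blanks one possible way):

import random

def vision_subprocessor(raw_frame):
    # Pre-digests raw pixel data into a single "path quality" score;
    # the central algorithm never sees the raw frame itself.
    return sum(raw_frame) / len(raw_frame)

def terrain_subprocessor(raw_imu):
    # Condenses accelerometer readings into a worst-case stability score.
    return min(raw_imu)

def central_algorithm(summaries):
    # Works only on the condensed summaries handed up by the
    # sub-processors, then dispatches an instruction downward.
    return "advance" if min(summaries) > 0.4 else "stop"

def motor_subroutine(command):
    # Stand-in for whatever the robot needs to do to locomote.
    print(f"motor command: {command}")

raw_frame = [random.random() for _ in range(100)]   # fake camera data
raw_imu = [random.random() for _ in range(10)]      # fake IMU data
summaries = [vision_subprocessor(raw_frame), terrain_subprocessor(raw_imu)]
motor_subroutine(central_algorithm(summaries))

Most of the work is done before anything reaches the "central" step,
which is where the question of what, if anything, consciousness adds
would come in.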