Subject: Re: Making your mind up
From: arkalen (at) *nospam* proton.me (Arkalen)
Newsgroups: talk.origins
Date: 02. May 2024, 13:12:48
Organization: A noiseless patient Spider
Message-ID: <v10003$3rdjq$1@dont-email.me>
References: 1 2 3 4 5 6 7 8 9 10 11 12 13 14
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.14.0
On 01/05/2024 01:30, Mark Isaak wrote:
> On 4/30/24 2:08 AM, Martin Harran wrote:
>> On Mon, 29 Apr 2024 09:43:03 -0700, Mark Isaak
>> <specimenNOSPAM@curioustaxon.omy.net> wrote:
>>
>> [...]
>>
>>> As it happens, I have been reading Yuval Noah Harari's _Homo Deus_ and
>>> yesterday read his take on free will. He considers it a modern myth
>>> disproved by science. One example he gives is "robo-rats", rats in a
>>> laboratory which have electrodes implanted in the pleasure centers of
>>> their brain, which scientists can stimulate to make the rats do what the
>>> scientists want them to do. The rats turn this way and that not of their
>>> own choice, but according to the choices of the people pressing buttons.
>>> Now, imagine you are one of those rats. You turn left. Why? Because you
>>> *chose* to turn left. "What does it matter whether the neurons are
>>> firing because they are stimulated by other neurons or by transplanted
>>> electrodes connected to Professor Talwar's remote control? If you ask
>>> the rat about it, she might well tell you, 'Sure I have free will! Look,
>>> I want to turn left -- and I turn left. I want to climb a ladder -- and
>>> I climb a ladder. Doesn't that prove I have free will?'" [pp. 333-334]
>>
>> Most brain research that I'm aware of - including the Libet experiments
>> - shows a considerable difference in brain activity between trivial
>> decisions and important decisions. I think it's safe to say that 'turn
>> left or turn right' is well into the trivial category.
>>
>> You and he also seem to be making the assumption that the decision
>> process in rats can be directly transposed into humans, which is not
>> necessarily the case - there are distinct differences between rats and
>> primates, including humans. See my response to Arkalen below.
>
> I took the rat illustration as an illustration, not as proof of final
> concept. If a rat controlled by a human can be thinking, "I made that
> decision on my own", so can a human controlled by fate.
I think it's very unhelpful to conflate "human controlled by fate" and "mammal controlled by a different mammal" this way. It kind of ignores what "control" or "decision" even mean.
The fact is, core to the idea of a "decision" is the notion of some agent *making* the decision. If you make a robo-rat turn left or right you can argue that it's "really making the decision" because "it thinks it does" (does it?), but in any real-life situation everyone immediately knows that the decision is being made by you.
Basically we're able to classify behaviors by whether they're a decision or not, and by who made the decision. Say you're driving a mattress truck across an intersection near some cliff or ramp and see your friend barrelling down the road on their bicycle. You know your friend likes doing dangerous bike tricks and that this is a typical place for them to do it, so if they scream "get out of the way!" you will immediately infer that the bicycle is under your friend's control, and that its speed and direction are the product of your friend's decisions. If, on the other hand, your friend screams "The brakes don't work! Help me!" you will infer that the bicycle *isn't* under your friend's control, and that its speed and direction are *not* the product of your friend's decisions - in fact your friend is asking you to dramatically change those because they can't.
Now say you save your friend using the mattress truck and examine the bike and find the brake lines were cut and an ingenious remote control mechanism was controlling the bicycle. Now the inference changes again: the bicycle's speed and direction were indeed the product of a decision and the bicycle was indeed under someone's control - the someone just wasn't your friend.
The "humans controlling a rat" and "fate controlling a human" cases cannot be conflated because the first involves an agent making decisions instead of another, and the second involves no decision being made at all. Even if an analogy between the two might technically work, the intuitions we have about both notions are too strong to make it work cleanly.
More specifically when we talk about "fate making decisions for us" that's metaphorical, poetic language personifying fate that doesn't apply to discussing determinism. Under determinism, "fate" is the universe or the overall web of causality. It's not an entity that can make decisions, and if it were it's not the one involved in the decisions attributed to it. Saying "fate decided this for me" is like looking at a boulder rolling down a mountain and saying "fate rolled down the mountain". No it didn't; it's not a thing that can roll down mountains and if it were it's not the thing that happens to be rolling down that specific mountain we're talking about. That would be the boulder.
And if the notion of "decision" is inseparable from the notion of some agent making that decision then it doesn't matter whether the decision is made deterministically or not - it's still made *by that agent* just like rolling down mountains happens to specific boulders.
Or put another way, "someone else made that decision for me" claims that an external agent caused a given decision, but the "fate" of determinism isn't external to any given agent - it's everything altogether, including the agent itself. We could imagine an entity capable of observing events and inferring which are decisions and who those decisions were made by (we can imagine it because we are such entities); we could also imagine such an entity having a part of its cognition dedicated to making such inferences about itself, making it think for any given behavior "I made this decision" or "this wasn't my decision" (again, easy enough, as our own sense of having made a decision likely comes from such a specific cognitive module). We could further imagine this entity being virtually unhackable in its inference abilities - if they were the rat in the illustration, instead of thinking "I decided to go left" they'd think "I just felt an urge to go left although the right has features that usually appeal to me - also there's an electrode in my brain connecting me to a machine that this human just pressed a button on, and they're looking left, apparently expecting me to go there. Clearly the decision to turn left is being made by them, not by me".
Note I'm not suggesting a perfect ability to identify one's own decisions vs those driven by others - arguably that's not even possible. I'm just suggesting guardrails against the kind of obvious, direct brain-hacking in the illustration, which is meant to illustrate the role of brain processes in decisions. Our experience as humans shows we're able to be manipulated but also to recognize and resist manipulation; if we had evolved in an environment where brains could be directly stimulated via electrodes and such, I see no reason we couldn't have evolved the cognitive reflexes to account for it, or other ways of preventing it (like self-destruction as soon as the skull is breached, or whatever).
All that to say, we can imagine an entity whose brain you could never stimulate in specific spots to prompt a decision that would get them to reliably, incorrectly think "I made this decision" - either you couldn't do it at all, or they'd correctly think "this isn't my decision". However, if such an entity were deterministic it would still be the case that its decisions *were* the outcome of its brain processes. So it would still be true that "fate made the decision" in that poetic sense. And the entity would still think "I made the decision".
But then my question is: *how would that entity be wrong*? They'd have used their decision-inferring skills to infer that the decision was the result of their own cognitive processes acting in accordance with the values and preferences instantiated in their own brain. And it was.