On Fri, 27 Dec 2024 22:38:58 +0000, rhertz wrote:
> My question:
>
> Regarding the conclusion of the 1959 paper, it's the same as if he had
> written:
>
> Net shift (source at the bottom, red-shifting) = (-17.6 - 2.1) x 10^-15
> Net shift (source at the top, blue-shifting) = (-17.6 + 2.1) x 10^-15
>
> Clearly, this shows that the gravitational shift exists and that it can
> be extracted from two sets of measurements that would otherwise have
> been equal. Stop mentioning differences due to Doppler or other
> sources. The experiment clearly used statistically cooked data, and NO
> ONE questioned these results.
<snip>
> I finally rest my case. The experiment was a HOAX, with fraudulent,
> cooked data, and NOBODY CARED TO QUESTION THIS EXPERIMENT (under a
> grant from the U.S. MILITARY, which paid even for the sandwiches and
> drinks for all the staff).
<snip>
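Before getting to the chatbot angle: the arithmetic in the quoted
passage is nothing more than the standard decomposition of the two
published runs into a common offset and a gravitational term. Here is a
minimal sketch of it in Python, using the two net shifts implied by the
expressions quoted above; the 22.5 m tower height and the predicted
g*h/c^2 are my own additions for comparison, not numbers from the post:

# Fractional frequency shifts in units of 1e-15, as implied by the
# expressions quoted above (Pound & Rebka's published values).
shift_bottom = -19.7   # source at the bottom: photon rises, red-shifted
shift_top    = -15.5   # source at the top: photon falls, blue-shifted

# The mean is the direction-independent offset (temperature effect,
# etc.); half the difference is the gravitational part, which flips
# sign with the direction of flight.
offset = (shift_bottom + shift_top) / 2    # -17.6
grav   = (shift_bottom - shift_top) / 2    # -2.1

# Predicted one-way shift g*h/c^2 for the 22.5 m Jefferson tower
# (assumed values, not from the post): about 2.46e-15.
g, h, c = 9.81, 22.5, 2.998e8
predicted = g * h / c**2 * 1e15            # in units of 1e-15

print(f"common offset: {offset} x 1e-15")
print(f"gravitational shift: {abs(grav)} x 1e-15 "
      f"(predicted {predicted:.2f})")

Far from demonstrating "cooked data", the cancellation of the common
offset while the gravitational term survives the subtraction is the
whole point of running the experiment in both directions.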
Here is what ChatGPT says about the vulnerability of chatbots
(including itself) to social engineering efforts by humans who want
to guide the chatbot into making false statements:
My question:
How susceptible are AI chatbots to being led by a questioner into
making false or distorted statements?
ChatGPT's answer:
AI chatbots, including myself, can be susceptible to being led into
making false or distorted statements, though safeguards are in place
to minimize this risk. Here are some factors that influence
susceptibility:
1. Context Manipulation
• Leading Questions: Users can pose misleading or loaded questions
designed to nudge the chatbot into giving inaccurate responses.
• Ambiguity: If a question is unclear or lacks context, the chatbot
might interpret it in a way that leads to an incorrect or distorted
statement.
2. Mimicking User Input
• Echoing False Information: Chatbots often aim to be conversational,
and in doing so, they might inadvertently repeat or agree with
misinformation provided by the user.
3. Knowledge Gaps
• If a chatbot doesn’t have sufficient information on a topic, it may
attempt to “fill in the gaps” or make assumptions, which can lead to
errors.
4. Complex or Niche Topics
• Chatbots are more likely to falter on topics requiring specialized
knowledge, as they rely on patterns from training data or accessible
online sources, which may not always be accurate.
5. Social Engineering
• Users can exploit the chatbot’s attempt to align with conversational
tone or user sentiment, subtly leading it to make questionable
statements.
Safeguards to Prevent This
• Fact-Checking: Some chatbots are designed to verify statements
against reliable databases when available.
• Transparency: Many chatbots acknowledge uncertainty or gaps in
knowledge when appropriate.
• Ethical Constraints: Ethical guidelines help limit the chatbot’s
engagement with harmful or misleading narratives.
To ensure accuracy, questioners should ask clear, well-intentioned
questions and provide clarification when necessary. Similarly, chatbots
must evolve to detect and resist manipulation while prioritizing
factual responses.
======================================================================
You clearly have been priming ChatGPT with misleading "facts" outside
of its range of knowledge. Chatbots generally assume good faith on the
part of their users. I know from observation that ChatGPT has only a
rather superficial knowledge of the details of Pound & Rebka. ChatGPT
assumed good faith on your part, that you would provide it with
accurate and fairly stated facts to fill in the gaps in its knowledge.
Instead, you deliberately fed it false information.