Re: ChatGPT

Subject: Re: ChatGPT
From: Master_Fontaine_is_dishonest (at) *nospam* Strand_in_London.Gov.UK (Nioclásán Caileán de Ghlostéir)
Newsgroups: comp.lang.ada
Date: 17 Oct 2024, 00:22:50
Organization: A noiseless patient Spider
Message-ID : <ff510cc1-6d03-c50e-55d1-9117ee813d67@Strand_in_London.Gov.UK>
On Thu, 6 Apr 2023, Ken Burtch wrote:
"On Saturday, April 1, 2023 at 3:39:51 AM UTC-4, Dmitry A. Kazakov wrote:
On 2023-03-31 23:44, Anatoly Chernyshev wrote:
Data science people swear it's just a matter of the size of the training set used...
They lie. In machine learning, overtraining is as much a problem as
undertraining. The simplest example from mathematics is polynomial
interpolation, which becomes unstable at higher orders.
 
And this does not even touch on contradictory samples requiring
retraining, time-constrained samples, etc.
I also did a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student who memorized the formulas but has no clue how to use them. Specifically, unit conversions (e.g. between mL, L, and m3) are completely off-limits as of now.
One must remember that ChatGPT is nothing but ELIZA on steroids.
 
https://en.wikipedia.org/wiki/ELIZA
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

For what it's worth on the subject of the chatbot, "Produce Ada code for solving a quadratic equation." is a terrible choice for a test of ChatGPT, as one is asking if it can do a Google search.  To test its abilities, you have to pick a challenge that cannot be solved with a Google search.

My short assessment of ChatGPT, along with the history of chatbots, is available in my February blog post.  I gave it a simple programming problem and it failed 3 times out of 4.  That's not surprising, as I've learned since February that the chatbot doesn't actually understand programming: it uses examples off the Internet and tries to predict what you might have typed based on keyword patterns.  It is an imitation of an imitation, smoke and mirrors.  This is why Vint Cerf denounced it.  You can read my thoughts on my blog:

https://www.pegasoft.ca/coder/coder_february_2023.html

Ken Burtch"
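
As for the quadratic-equation prompt itself, the "Google-searchable" answer Burtch alludes to is only a few lines of Ada. A minimal sketch with hard-coded coefficients (the degenerate A = 0 case and complex roots are deliberately left out):

with Ada.Text_IO;                        use Ada.Text_IO;
with Ada.Numerics.Elementary_Functions;  use Ada.Numerics.Elementary_Functions;

procedure Quadratic is
   --  Solve A*X**2 + B*X + C = 0 for real roots.
   A : constant Float := 1.0;
   B : constant Float := -3.0;
   C : constant Float := 2.0;
   D : constant Float := B * B - 4.0 * A * C;  --  discriminant
begin
   if D < 0.0 then
      Put_Line ("No real roots");
   else
      Put_Line ("X1 =" & Float'Image ((-B + Sqrt (D)) / (2.0 * A)));
      Put_Line ("X2 =" & Float'Image ((-B - Sqrt (D)) / (2.0 * A)));
   end if;
end Quadratic;

With these coefficients the polynomial is X**2 - 3*X + 2, whose real roots are 2.0 and 1.0 — exactly the kind of answer a web search turns up verbatim, which is Burtch's point.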
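Kazakov's polynomial-interpolation remark is also easy to demonstrate concretely: interpolating Runge's function 1/(1 + 25*X**2) at equally spaced nodes makes the maximum error grow, not shrink, as the degree rises. A minimal, self-contained Ada sketch (the node counts and the 1000-point evaluation grid are arbitrary choices for illustration):

with Ada.Text_IO; use Ada.Text_IO;

procedure Runge_Demo is
   --  Runge's phenomenon: the Lagrange interpolant of F on [-1, 1]
   --  through equally spaced nodes gets WORSE as the degree increases.
   type Real is digits 15;

   function F (X : Real) return Real is (1.0 / (1.0 + 25.0 * X * X));

   function Interp_Error (N : Positive) return Real is
      --  Max |P(X) - F(X)| over a fine grid, where P is the Lagrange
      --  interpolant through the N+1 equispaced nodes.
      Nodes : array (0 .. N) of Real;
      Max_E : Real := 0.0;
   begin
      for I in Nodes'Range loop
         Nodes (I) := -1.0 + 2.0 * Real (I) / Real (N);
      end loop;
      for K in 0 .. 1000 loop
         declare
            X : constant Real := -1.0 + 2.0 * Real (K) / 1000.0;
            P : Real := 0.0;
         begin
            for I in Nodes'Range loop
               declare
                  L : Real := 1.0;  --  I-th Lagrange basis polynomial at X
               begin
                  for J in Nodes'Range loop
                     if J /= I then
                        L := L * (X - Nodes (J)) / (Nodes (I) - Nodes (J));
                     end if;
                  end loop;
                  P := P + F (Nodes (I)) * L;
               end;
            end loop;
            if abs (P - F (X)) > Max_E then
               Max_E := abs (P - F (X));
            end if;
         end;
      end loop;
      return Max_E;
   end Interp_Error;
begin
   for N in 1 .. 4 loop
      Put_Line ("degree" & Integer'Image (5 * N)
                & ": max error =" & Real'Image (Interp_Error (5 * N)));
   end loop;
end Runge_Demo;

Compiled with GNAT, the reported maximum error grows by orders of magnitude between degree 5 and degree 20 — more data points and a higher-order model producing a worse fit, which is the "overtraining" analogue Kazakov describes.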


As an experiment, I asked WWW.Perplexity.AI "Why is ChatGPT bad at Ada?",
to which Perplexity alleged:
"ChatGPT's performance with the Ada programming language can be perceived
as inadequate due to several factors related to both the language's
characteristics and the limitations of the AI model itself.
Factors Contributing to ChatGPT's Performance Issues with Ada
1. Complexity of Ada:
Ada is known for its strong emphasis on safety and correctness, which
includes features like runtime checks and strong typing. While these
features are beneficial for developing reliable software, they can also
make the language more complex and verbose. This complexity may lead to
misunderstandings or oversights when ChatGPT attempts to generate or
analyze Ada code, especially for more intricate tasks [1][2].
2. Performance Overhead:
The safety features inherent in Ada, such as range checking and bounds
checking, introduce additional overhead that can affect performance. This
overhead is crucial for safety-critical applications but may not be
well-handled by ChatGPT when generating code or discussing performance
optimizations. The model may struggle to provide accurate assessments or
optimizations for Ada code compared to languages like C or C++, which are
often perceived as more straightforward in terms of low-level
optimizations [1].
3. Limitations of AI Understanding:
ChatGPT operates based on patterns in the data it has been trained on. As
a result, it may not fully grasp the nuances of niche programming
languages like Ada, particularly when users ask complex questions or
request sophisticated code examples. This limitation can lead to
incorrect or overly simplistic responses when users seek detailed help
with Ada programming [2][3].
4. User Expectations and Experience:
Users have reported that while ChatGPT can handle basic queries related
to Ada, it often fails with more complex tasks. This inconsistency can
lead to frustration, as users expect a level of expertise that the model
may not be able to provide due to its training limitations and the
inherent complexities of the Ada language [2][5].
5. Hallucination and Inaccuracy:
ChatGPT has a tendency to "hallucinate" or generate incorrect
information, particularly in specialized fields where it lacks extensive
training data. This issue is exacerbated in niche programming languages
like Ada, where accurate information may not be as prevalent in its
training corpus. Consequently, users might receive misleading advice or
code snippets that do not function as intended [3][4].
In summary, ChatGPT's challenges with Ada stem from the language's
complexity, performance overhead from safety features, limitations in AI
understanding, user expectations, and a propensity for inaccuracies.
These factors collectively contribute to a perception that ChatGPT is not
well-suited for generating or analyzing Ada code effectively."
