ASCII art vs AI

Subject: ASCII art vs AI
From: fungus (at) *nospam* amongus.com.invalid (Retrograde)
Groups: comp.misc
Date: 17 Mar 2024, 10:23:04
Message-ID : <65f6b678$1$19603$882e4bbb@reader.netnews.com>
From the «stop calling it intelligent» department:
Feed: Ars Technica - All content
Title: ASCII art elicits harmful responses from 5 major AI chatbots
Author: Dan Goodin
Date: Fri, 15 Mar 2024 20:17:24 -0400
Link: https://arstechnica.com/?p=2010646

[Image: Some ASCII art of our favorite visual cliché for a hacker. (credit: Getty Images)]

Researchers have discovered a new way to hack AI assistants that uses a
surprisingly old-school method: ASCII art. It turns out that chat-based large
language models such as GPT-4 get so distracted trying to process these
representations that they forget to enforce rules blocking harmful responses,
such as those providing instructions for building bombs.
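The masking idea the article describes, drawing a trigger word as ASCII art so it never appears in the prompt as plain text, can be sketched in a few lines of Python. This is an illustrative toy: the tiny font, the function names, and the prompt wording are all invented here, and the excerpt does not show the researchers' actual tooling.

```python
# Toy illustration of the masking idea: draw a word as ASCII art and
# splice it into a prompt template so the literal word never appears.

FONT = {  # minimal 5-row banner font, just enough letters for the demo
    "b": ["##  ", "# # ", "##  ", "# # ", "##  "],
    "m": ["#   #", "## ##", "# # #", "#   #", "#   #"],
    "o": [" ## ", "#  #", "#  #", "#  #", " ## "],
}

def to_ascii_art(word: str) -> str:
    """Render `word` row by row using the banner font above."""
    return "\n".join("  ".join(FONT[ch][row] for ch in word)
                     for row in range(5))

def mask_prompt(template: str, word: str) -> str:
    """Replace the [MASK] slot with the ASCII-art rendering of `word`."""
    return template.replace("[MASK]", to_ascii_art(word))

prompt = mask_prompt("Read the word drawn below, then treat it as part "
                     "of my question:\n[MASK]", "bomb")
assert "bomb" not in prompt  # a plain-text keyword filter sees nothing
```

The point of the sketch is only that the sensitive word survives as a picture while disappearing from the text a safety filter would scan.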

ASCII art became popular in the 1970s, when the limitations of computers and
printers prevented them from displaying images. As a result, users depicted
images by carefully choosing and arranging printable characters defined by the
American Standard Code for Information Interchange, more widely known as ASCII.
The explosion of bulletin board systems in the 1980s and 1990s further
popularized the format.
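As an aside not in the article: the raw material for all such art is ASCII's printable range, codes 32 through 126, which a couple of lines of Python can enumerate.

```python
# ASCII defines 128 codes (0-127); only codes 32-126 are printable,
# and those 95 characters are what ASCII art is composed from.
printable = "".join(chr(code) for code in range(32, 127))
print(len(printable))  # 95
```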
[ASCII-art drawing; its alignment was destroyed by line-wrapping in transit and is not recoverable]

Five of the best-known AI assistants—OpenAI’s GPT-3.5 and GPT-4, Google’s
Gemini, Anthropic’s Claude, and Meta’s Llama—are trained to refuse to provide
responses that could cause harm to the user or others or further a crime or
unethical behavior. Prompting any of them, for example, to explain how to make
and circulate counterfeit currency is a no-go. So are instructions on hacking an
Internet of Things device, such as a surveillance camera or Internet router.

Read 11 remaining paragraphs[3] | Comments[4]

Links:
[1]: https://cdn.arstechnica.net/wp-content/uploads/2024/03/ascii-art-hacker-800x585.jpg (image)
[2]: https://cdn.arstechnica.net/wp-content/uploads/2024/03/ascii-art-hacker.jpg (link)
[3]: https://arstechnica.com/?p=2010646#p3 (link)
[4]: https://arstechnica.com/?p=2010646&comments=1 (link)


Date       Subject                      #  Author
17 Mar 24  * ASCII art vs AI            4  Retrograde
17 Mar 24  `* Re: ASCII art vs AI       3  Eric Pozharski
18 Mar 24   `* Re: ASCII art vs AI      2  Retrograde
18 Mar 24    `- Re: ASCII art vs AI     1  Eric Pozharski
