On 3/11/2024 9:48 AM, legg wrote:
>>When I ask google for "how to add a captcha to your website"
>>I see many solutions, for example this:
>>https://www.oodlestechnologies.com/blogs/create-a-captcha-validation-in-html-and-javascript/
>>
>>Maybe some html guru here knows?
>That looks like it's good for accessing an html page.
>So far the chinese are accessing the top level index, where
>files are offered for download at a click.
>
>Ideally, if they can't access the top level, a direct address
>access to the files might be prevented?
>
>Using barebones (Netscape) Seamonkey Composer, the Oodlestech
>script generates a web page with a 4-figure manually-entered
>human test.
>
>How do I get a correct response to open the protected web page?
Why not visit a page that uses it and inspect the source?
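
FWIW, the guts of that sort of script boil down to something like
this - a from-scratch sketch of the same idea, NOT the actual
Oodlestech code, with "protected.html" standing in for whatever
page you want to gate:

<!-- captcha.html : 4-figure human test, all client-side -->
<form onsubmit="return check()">
  Type the code <b id="code"></b>:
  <input type="text" id="answer">
  <input type="submit" value="Enter">
</form>
<script>
var code = "";
for (var i = 0; i < 4; i++)          // build a random 4-figure code
  code += Math.floor(Math.random() * 10);
document.getElementById("code").textContent = code;

function check() {
  if (document.getElementById("answer").value === code)
    location.href = "protected.html"; // vetted: open the gated page
  else
    alert("Wrong code, try again");
  return false;                       // never do a real form submit
}
</script>

Note it all runs in the browser: a bot that reads the page source
can pull the redirect target straight out of it, and nothing here
stops a direct fetch of the file URLs themselves. Only a check on
the server side can do that.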

>>What I am doing now is using a http://mywebsite/pub/ directory <snip>
>>with lots of files in it that I want to publish in for example
>>this newsgroup; I then just post a direct link to that file.
>>So it has no index file and no links to it from the main site.
>>It has many sub directories too.
>>https://panteltje.nl/pub/GPS_to_USB_module_component_site_IXIMG_1360.JPG
>>https://panteltje.nl/pub/pwfax-0.1/README
>>
>>So you need the exact link to access anything,
>>fine for publishing here...
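
For that "no index file" trick to actually hide the directory's
contents, the server must not auto-generate a listing when the
index is missing. If the box runs Apache (an assumption on my
part), one line in a .htaccess in the directory guarantees it:

# .htaccess in pub/ - never auto-generate a listing for the bare URL
Options -Indexes

# and if direct file fetches must be gated too, basic auth on top:
#   AuthType Basic
#   AuthName "pub"
#   AuthUserFile /home/you/.htpasswd   <- made-up path
#   Require valid-user

The bare directory URL then comes back 403 Forbidden while the
exact file links keep working.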

>The top (~index) web page of my site has lists of direct links
>to subdirectories, for double-click download by user.
You could omit the actual links and just leave the TEXT for a link
present (i.e., highlight text, copy, paste into address bar) to
see if the "clients" are exploring all of your *links* or are
actually parsing the *text*.
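
I.e., something like this in the page - example.com is just a
stand-in - then watch the server log to see which of the two
gets fetched:

<a href="https://example.com/pub/file.zip">file.zip</a>  <- real link
https://example.com/pub/file.zip                         <- text only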

>It also has links to other web pages that, in turn, offer links or
>downloads to on-site and off-site locations.

Whether or not you choose to "protect" those assets is a separate
issue that only you can resolve (what's your "obligation" to a site
that you've referenced on YOUR page?)

>A great number of
>off-site links are invalid, after ~10-20 years of neglect. They'll
>probably stay that way until something or somebody convinces me
>that it's all not just a waste of time.

>At present, I only maintain data links or electronic publications
>that need it. This may not be necessary, as the files are generally
>small enough for the Wayback machine to have scooped up most of the
>databases and spreadsheets. They're also showing up in other places,
>with my blessing. Hell - Wayback even has tube curve pages from the
>'Conductance Curve Design Manual' - they've got to be buried 4 folders
>deep - and each is a hefty image.
You can see if bitsavers has an interest in preserving them in a
more "categorical" framework.

>Somebody, please tell me that the 'Internet Archive' is NOT owned
>by Google?
>
>Some off-site links for large image-bound mfr-logo-ident web pages
>(c/o geek@scorpiorising) seem already to have introduced a
>captcha-type routine. Wouldn't need many bot hits to bump that
>location into a data limit. Those pages take a long time
>simply to load.

There is an art to designing all forms of documentation
(web pages just being one). Too abridged and folks spend forever
chasing links (even if it's as easy as "NEXT"). Too verbose and
the page takes a long time to load.
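
(If the HTML of those image-bound pages can be edited at all, one
cheap fix is the standard loading="lazy" attribute, so images below
the fold aren't fetched until the reader scrolls to them - the
filename here is made up:

  <img src="logos/philips_1965.gif" loading="lazy" alt="Philips logo">

Cuts the initial load time, and the data transferred for anyone who
doesn't scroll the whole page.)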

OTOH, when I'm looking to scrape documentation for <whatever>,
I will always take the "one large document" option, if offered.
It's just too damn difficult to rebuild a site's structure,
off-line, in (e.g.) a PDF. And, load times for large LOCAL
documents are insignificant.

>Anyway - how to get the Oodlestech script to open the appropriate
>page, after vetting the user as being human?
No examples, there?