Subject : Re: Chinese downloads overloading my website
From : legg (at) *nospam* nospam.magma.ca (legg)
Newsgroups : sci.electronics.design
Date : 11 Mar 2024, 18:48:57
Organization : A noiseless patient Spider
Message-ID : <gabuui56k0fn9iovps09um30lhiqhvc61t@4ax.com>
References : 1 2 3 4 5 6 7
User-Agent : Forte Agent 4.2/32.1118
On Mon, 11 Mar 2024 06:05:26 GMT, Jan Panteltje <alien@comet.invalid>
wrote:
On a sunny day (Sun, 10 Mar 2024 13:47:48 -0400) it happened legg
<legg@nospam.magma.ca> wrote in <t7rrui5ohh07vlvn5vnl277eec6bmvo4p9@4ax.com>:
>
On Sun, 10 Mar 2024 06:08:15 GMT, Jan Panteltje <alien@comet.invalid>
wrote:
>
On a sunny day (Sat, 09 Mar 2024 20:59:19 -0500) it happened legg
<legg@nospam.magma.ca> wrote in <u14quid1e74r81n0ajol0quthaumsd65md@4ax.com>:
>
<snip>
When I ask google for "how to add a captcha to your website"
I see many solutions, for example this:
https://www.oodlestechnologies.com/blogs/create-a-captcha-validation-in-html-and-javascript/
>
Maybe some html guru here knows?
>
That looks like it's good for accessing an html page.
So far the Chinese are accessing the top level index, where
files are offered for download at a click.
>
Ideally, if they can't access the top level, a direct address
access to the files might be prevented?
Using barebones (Netscape) SeaMonkey Composer, the Oodlestech
script generates a web page with a 4-figure manually-entered
human test.
How do I get a correct response to open the protected web page?
>
What I am doing now is using a https://mywebsite/pub/ directory
with lots of files in it that I want to publish in for example this newsgroup,
I then just post a direct link to that file.
So it has no index file and no links to it from the main site.
It has many sub directories too.
https://panteltje.nl/pub/GPS_to_USB_module_component_site_IXIMG_1360.JPG
https://panteltje.nl/pub/pwfax-0.1/README
>
So you need the exact link to access anything
fine for publishing here...
<snip>
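A side note on the no-index-file trick above: it only keeps the file
list hidden if the web server is also told not to auto-generate a
directory listing. On an Apache server (an assumption; other servers
have their own equivalent) one line in a `.htaccess` file inside /pub/
does it:

```apache
# Hypothetical /pub/.htaccess -- suppress the auto-generated
# directory index so only exact file URLs work
Options -Indexes
```

With that in place, a request for the bare /pub/ URL returns 403
instead of a clickable file list, while exact direct links such as the
ones above keep working.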
The top (~index) web page of my site has lists of direct links
to subdirectories, for double-click download by user.
It also has links to other web pages that, in turn, offer links or
downloads to on-site and off-site locations. A great number of
off-site links are invalid, after ~10-20 years of neglect. They'll
probably stay that way until something or somebody convinces me
that it's all not just a waste of time.
At present, I only maintain data links or electronic publications
that need it. This may not be necessary, as the files are generally
small enough for the Wayback machine to have scooped up most of the
databases and spreadsheets. They're also showing up in other places,
with my blessing. Hell - Wayback even has tube curve pages from the
'Conductance Curve Design Manual' - they've got to be buried 4 folders
deep - and each is a hefty image.
Somebody, please tell me that the 'Internet Archive' is NOT owned
by Google?
Some off-site links for large image-bound mfr-logo-ident web pages
(c/o geek@scorpiorising) seem already to have introduced a
captcha-type routine. Wouldn't need many bot hits to bump that
location into a data limit. Those pages take a long time
simply to load.
Anyway - how to get the Oodlestech script to open the appropriate
page, after vetting the user as being human?
RL
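[One way to wire up a script like the Oodlestech one -- a sketch, not
their actual code; the element ids and the "protected.html" filename
here are hypothetical: generate the 4-figure code, compare the
visitor's entry, and only redirect to the protected page on a match.
A purely client-side check like this still leaves the protected URL
visible in the page source, so it deters casual bots only.]

```javascript
// Sketch of a client-side 4-figure human test, in the spirit of the
// Oodlestech example. Element ids and "protected.html" are hypothetical.

function makeCode() {
  // random 4-figure code, zero-padded, e.g. "0372"
  return String(Math.floor(Math.random() * 10000)).padStart(4, "0");
}

function checkCode(entered, expected) {
  // trim stray whitespace before comparing
  return entered.trim() === expected;
}

// Browser-side wiring (commented out so the helpers above run anywhere):
// const code = makeCode();
// document.getElementById("captcha-text").textContent = code;
// document.getElementById("captcha-form").onsubmit = function (e) {
//   e.preventDefault();
//   const entry = document.getElementById("captcha-entry").value;
//   if (checkCode(entry, code)) {
//     window.location.href = "protected.html"; // open the vetted page
//   } else {
//     alert("Code did not match, try again.");
//   }
// };
```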