Subject: Re: Command Languages Versus Programming Languages
From: rweikusat (at) *nospam* talktalk.net (Rainer Weikusat)
Newsgroups: comp.unix.shell, comp.unix.programmer, comp.lang.misc
Date: 22 Nov 2024, 19:59:43
Message-ID: <87cyinrt5s.fsf@doppelsaurus.mobileactivedefense.com>
References: 1 2 3 4 5 6 7
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

scott@slp53.sl.home (Scott Lurndal) writes:
> Rainer Weikusat <rweikusat@talktalk.net> writes:
> [...]
>> Something which would match [0-9]+ in its first argument (if any) would
>> be:
>>
>> #include <string.h>
>> #include <stdlib.h>
>>
>> int main(int argc, char **argv)
>> {
>>         char *p;
>>         unsigned c;
>>
>>         p = argv[1];
>>         if (!p) exit(1);
>>         while (c = *p, c && c - '0' > 9) ++p;
>>         if (!c) exit(1);
>>         return 0;
>> }
>>
>> but that's 14 lines of text, 13 of which have absolutely no relation to
>> the problem of recognizing a digit.
>
> Personally, I'd use:
>
> $ cat /tmp/a.c
> #include <stdint.h>
> #include <stdlib.h>
>
> int
> main(int argc, const char **argv)
> {
>     char *cp;
>     uint64_t value;
>
>     if (argc < 2) return 1;
>
>     value = strtoull(argv[1], &cp, 10);
>     if ((cp == argv[1])
>         || (*cp != '\0')) {
>         return 1;
>     }
>     return 0;
> }

This will accept any string of digits (overflow past ULLONG_MAX is not
detected since errno is never checked), plus leading whitespace and an
optional sign in front of the digits, i.e., it's basically ^[0-9]+$
with unobvious extra content allowed.
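To make that concrete, here's a small sketch (mine, not code from the
thread; the accepted() helper and the test strings are made up) which
applies the same acceptance test to inputs that ^[0-9]+$ would reject:

#include <stdio.h>
#include <stdlib.h>

/* same test as above: non-empty subject sequence, nothing left over */
static int accepted(const char *s)
{
        char *cp;

        strtoull(s, &cp, 10);
        return cp != s && *cp == '\0';
}

int main(void)
{
        /* both print 1: strtoull skips leading whitespace and accepts
           an optional sign before the digits */
        printf("%d\n", accepted(" 42"));
        printf("%d\n", accepted("-42"));
        return 0;
}
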
return !strpbrk(argv[1], "0123456789");
would be a better approximation, just a much more complicated algorithm
than necessary. Even in strictly conforming ISO-C, "digitness" of a
character can be determined by a simple calculation instead of some kind
of search loop.
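For illustration (again my own sketch, not code from the thread; the
is_digit() helper is made up), that calculation is one subtraction plus
one unsigned comparison:

static int is_digit(unsigned c)
{
        /* for characters below '0' the subtraction wraps around to a
           large unsigned value, so one comparison covers both bounds */
        return c - '0' <= 9;
}

int main(int argc, char **argv)
{
        char *p;

        p = argv[1];
        if (!p) return 1;
        while (*p && !is_digit((unsigned char)*p)) ++p;
        return *p ? 0 : 1;      /* 0 if the argument contains a digit */
}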