On 29/05/2024 23:08, bart wrote:
> On 28/05/2024 16:34, David Brown wrote:
>> On 28/05/2024 13:41, Michael S wrote:
>>> Let's start another round of private parts' measurements tournament!
>>> 'xxd -i' vs DIY
>>
>> I used 100 MB of random data:
>>
>> dd if=/dev/urandom bs=1M count=100 of=100MB
>>
>> I compiled your code with "gcc-11 -O2 -march=native".
>>
>> I ran everything in a tmpfs filesystem, completely in RAM.
>>
>> xxd took 5.4 seconds - that's the baseline.
>>
>> Your simple C code took 4.35 seconds. Your second program took 0.9
>> seconds - a big improvement.
>>
>> One line of Python code took 8 seconds:
>>
>> print(", ".join([hex(b) for b in open("100MB", "rb").read()]))
>
> That one took 90 seconds on my machine (CPython 3.11).
>
>> A slightly nicer Python program took 14.3 seconds:
>>
>> import sys
>> bs = open(sys.argv[1], "rb").read()
>> xs = "".join([" 0x%02x," % b for b in bs])
>> ln = len(xs)
>> print("\n".join([xs[i : i + 72] for i in range(0, ln, 72)]))
>
> This one was 104 seconds (128 seconds with PyPy).
>
> This can't be blamed on the slowness of my storage devices, or on moans
> about Windows, because I know that amount of data (the output is 65%
> bigger because of the hex format) could be processed in a couple of
> seconds using a fast native-code program.
>
> It's just Python being Python.
I have two systems at work with close to identical hardware, both about 10 years old. The Windows one has a slightly faster disk, the Linux one has more memory, but the processor is the same. The Windows system is Win7 and as old as the machine, while the Linux system was installed about 6 years ago. Both machines have a number of other programs open (the Linux machine has vastly more), but none of these are particularly demanding when not in direct use.
On the Linux machine, that program took 25 seconds (with Python 3.7). On the Windows machine, it took 48 seconds (with Python 3.8). In both cases, the source binary file was recently written and therefore should be in cache, and both the source and destination were on the disk (SSD for Windows, HDD for Linux).
Python throws all this kind of stuff over to the C code - it is pretty good at optimising such list comprehensions. (But they are obviously still slower than carefully written native C code.) If it were running through these loops with the interpreter, it would be orders of magnitude slower.
So what I see from this is that my new Linux PC took 14 seconds while my old Linux PC took 25 seconds - it makes sense that the new processor is something like 80% faster than the old one for a single-threaded calculation. And Windows (noting that this is Windows 7, not a recent version of Windows) doubles that time for some reason.
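For comparison, a variant that keeps the per-byte loop inside CPython's C machinery (a precomputed table of the 256 possible " 0xNN," fragments, driven by map()) ought to beat the %-formatting comprehension quoted above, though timings will vary by machine. This is only a sketch; `to_hex_lines` is a name I've made up, and the 72-character line width matches the quoted program:

```python
import sys

# Sketch: precompute every possible " 0xNN," fragment once, then let
# map() run the per-byte lookup loop in C rather than in Python bytecode.
def to_hex_lines(bs: bytes, width: int = 72) -> str:
    table = [" 0x%02x," % i for i in range(256)]
    body = "".join(map(table.__getitem__, bs))
    # Split into fixed-width lines (72 chars = 12 bytes per line).
    return "\n".join(body[i:i + width] for i in range(0, len(body), width))

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], "rb") as f:
        print(to_hex_lines(f.read()))
```

The table lookup avoids re-formatting each byte, and map() with a bound C method keeps the hot loop out of the interpreter.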
>> (I have had reason to include a 0.5 MB file in a statically linked
>> single binary - I'm not sure when you'd need very fast handling of
>> multi-megabyte embeds.)
>
> I have played with generating custom executable formats (they can be
> portable between OSes, and I believe less visible to AV software), but
> they require a normal small executable to launch them and fix them up.
>
> To give the illusion of a conventional single executable, the program
> needs to be part of that stub file.
>
> There are a few ways of doing it, like simply concatenating the files,
> but extracting is slightly awkward. Embedding as data is one way.
Sure.
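For what it's worth, the "extracting is slightly awkward" part of the concatenation approach can be tamed with a small trailer: append the payload and an 8-byte little-endian length after the stub, so the stub can find its payload by seeking back from the end of its own file. A minimal sketch - the layout and function names here are made up for illustration, not anyone's actual format:

```python
import struct

# Hypothetical single-file layout: [stub][payload][8-byte payload length].
# Because the length sits at a fixed offset from the end, the stub needs
# no patched-in offsets and the stub's own size is irrelevant.

def attach(stub: bytes, payload: bytes) -> bytes:
    return stub + payload + struct.pack("<Q", len(payload))

def extract(combined: bytes) -> bytes:
    (n,) = struct.unpack("<Q", combined[-8:])
    return combined[-8 - n:-8]
```

A real stub would open its own file (e.g. via sys.argv[0] or the platform equivalent) rather than take bytes, but the framing logic is the same.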
The typical use I have is for embedded systems where there is a network with a master card and a collection of slave devices (or perhaps multiple microcontrollers on the same board). A software update will typically involve updating the master board and have that pass on updates to the other devices. So the firmware for the other devices will be built into the executable for the master board.
Another use-case is a small web server built into a program, often for installation, monitoring or fault-finding. There are fixed files such as index.html, perhaps a logo, and maybe jQuery or another JavaScript library file.
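For embeds like those, the output wanted is essentially what 'xxd -i' emits: a C array plus a length constant, ready to compile into the firmware. A rough sketch of generating that in Python - the identifier mangling approximates, but may not exactly match, what xxd does:

```python
import re
import sys

# Sketch of 'xxd -i'-style output: wrap a file's bytes in a C array plus
# a length constant, e.g. for baking index.html into a firmware image.
def c_array(name: str, data: bytes, per_line: int = 12) -> str:
    ident = re.sub(r"\W", "_", name)   # non-identifier chars become '_'
    hexed = [f"0x{b:02x}" for b in data]
    lines = [
        "  " + ", ".join(hexed[i:i + per_line]) + ","
        for i in range(0, len(hexed), per_line)
    ]
    return (
        f"unsigned char {ident}[] = {{\n"
        + "\n".join(lines)
        + f"\n}};\nunsigned int {ident}_len = {len(data)};\n"
    )

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], "rb") as f:
        print(c_array(sys.argv[1], f.read()))
```

The trailing comma on each line is valid in a C initializer, which keeps the line-chunking logic simple.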