Subject : Re: Python (was Re: I did not inhale)
From : david.brown (at) *nospam* hesbynett.no (David Brown)
Newsgroups : comp.unix.shell comp.unix.programmer comp.lang.misc
Date : 19. Aug 2024, 10:40:32
Organisation : A noiseless patient Spider
Message-ID : <v9v0e0$2q822$1@dont-email.me>
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 19/08/2024 09:37, Dmitry A. Kazakov wrote:
On 2024-08-19 01:14, Lawrence D'Oliveiro wrote:
On Sun, 18 Aug 2024 10:10:09 +0200, Dmitry A. Kazakov wrote:
>
On 2024-08-17 23:51, Lawrence D'Oliveiro wrote:
>
On Sat, 17 Aug 2024 12:58:31 +0200, Dmitry A. Kazakov wrote:
>
The Windows inter-process APIs are far more advanced than what UNIX ever
had. It would be enough to mention the famous file locks.
>
Except those file locks are more of a liability than an asset.
>
Like so many things in UNIX...
>
People voluntarily choose to use Unix-type OSes. There’s a reason why
Unix-type OSes are the official de-facto standard in the computing world,
not Windows.
Both OSes contributed to the Dark Ages of computing. The reasons are not technical, because both were the worst on the market.
What sort of time-frame are you thinking of here, what were the alternatives that you think were "better", what markets or uses are you considering, and in what way were other OS's "better"?
There's no doubt that non-technical issues have had a big influence on which OS's or types of OS have succeeded, but you seem to have something specific in mind.
A similar process happened with programming languages, e.g. C, and with hardware architectures, e.g. x86. It is always a race to the bottom...
The success of the x86 was very much a race to the bottom - it was picked specifically to give a cheaper system rather than the technically superior architecture (m68k) preferred by the engineers. Momentum and backwards compatibility have kept it going ever since.
I am not as convinced with respect to C. It certainly has its flaws, and it certainly has been, and continues to be, used in situations where it is not a good choice of language. But I think much of the bad reputation of C is the result of poor C programmers and poor use of the language, rather than the language itself. Good programmers will write good code in any language, bad programmers (or badly managed programmers) will write bad code in any language.
They are what prevent you from continuing to use a Windows system while
it is being updated, for example.
>
A Windows mutex gets collected when the last process using it dies. A UNIX
file lock does not.
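For concreteness, a named mutex on Windows is a kernel object whose lifetime follows its handles. A rough, untested Win32 sketch (the mutex name is made up):

/* Rough Win32 sketch (untested, mutex name made up): a named mutex is a
 * kernel object, so it disappears when the last handle to it is closed,
 * e.g. because the owning process died. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE m = CreateMutexA(NULL, FALSE, "demo_mutex");
    if (m == NULL)
        return 1;

    DWORD r = WaitForSingleObject(m, INFINITE);
    if (r == WAIT_ABANDONED)
        printf("previous owner died while holding the mutex\n");

    /* ... critical section ... */

    ReleaseMutex(m);
    CloseHandle(m);   /* the kernel reclaims the object with its last handle */
    return 0;
}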
>
What happens to a file lock when there is no file for it to lock?
Windows does not use lock files.
Windows has locks on files, which are a different thing. While I can understand the point of them, they can be a real inconvenience (try deleting a directory tree when a file from that tree is in use).
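The delete fails because sharing modes are enforced by the kernel rather than being advisory. Roughly like this (an untested Win32 sketch, file name made up):

/* Rough Win32 sketch (untested, file name made up): while this handle is
 * open without FILE_SHARE_DELETE, deleting the file - or the directory
 * tree containing it - fails with a sharing violation. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("demo.txt", GENERIC_READ,
                           FILE_SHARE_READ,   /* note: no FILE_SHARE_DELETE */
                           NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    if (!DeleteFileA("demo.txt"))
        printf("delete refused, error %lu\n", (unsigned long)GetLastError());

    CloseHandle(h);            /* once the handle is closed... */
    DeleteFileA("demo.txt");   /* ...the delete succeeds */
    return 0;
}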
Under Linux you must log in as root and remove the stray file lock manually. It happens in UNIX administration all the time.
As someone who has administrated Linux servers for decades, and used it as my desktop OS on many machines, I am not sure I can ever remember removing a stray lock file. Certainly needing to do so "all the time" is a very wild exaggeration. Linux, like all systems, undoubtedly has its flaws and weaknesses, but this is not one of them IME.
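To be concrete about the distinction: a kernel advisory lock taken with flock(2) or fcntl(2) is released automatically when the owning process dies; it is only a lock *file* - an ordinary file created as a marker - that can go stale and need removing by hand. A rough, untested sketch (file name made up):

/* Rough sketch (Linux, untested, file name made up): a kernel advisory
 * lock via flock(2).  If this process dies, the kernel releases the lock
 * by itself - unlike a lock *file* created as a marker, which would
 * survive a crash and have to be removed by hand. */
#include <sys/file.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/demo.lock", O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
        perror("flock");        /* already held by another process */
        return 1;
    }

    printf("lock held; sleeping\n");
    sleep(30);                  /* kill -9 this process: the lock vanishes */

    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}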
Remember, the current Windows (aka Windows NT) was masterminded by Dave
Cutler, who came from the nest of Unix-haters at DEC. He carried over many
of the characteristics of his last major brainchild there, VMS. One of
them is that creating multiple processes is expensive, so you try to avoid
it.
A wise decision. The look of the UNIX SysV process list was sheer horror to any user of RSX or VMS. No wonder UNIX was many times slower on the same machines. A 1 MB VMS machine supported 4 users running interactive IDE sessions (in LSE). UNIX users enjoyed Vi and permanent fatal crashes. The early filesystem rewrote the master block, so after a crash you could not boot any more and had to restore the system from tape. Under RSX you could turn the main disk off and on without a reboot.
Times change. Needs and uses change. Hardware changes.
Keeping things separate and modular has advantages in scalability, security and stability. Keeping things monolithic has advantages in efficiency (speed and memory) and consistency. There is no "right" answer.
The reason why Windows NT could not compete with Linux on servers was its unbearable maintenance and its bloat. Linux had a monolithic kernel. I compiled it for each machine to include only the drivers I needed. I did not install the X11 stuff. The result was half the size of Windows NT.
On the other hand you still cannot have decent gaming under Linux.
I do almost all my gaming under Linux. Some games do work better under Windows, but that is primarily because most games developers target Windows as their main platform. It may also be because Linux systems are more varied.
Windows has pipe objects, both named and anonymous. No problem.
>
One problem: you can’t use them with poll/select calls.
You can. See overlapped I/O.
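For anyone unfamiliar with the term, overlapped I/O is roughly the Win32 analogue of waiting in select(): you start the read with an OVERLAPPED structure and then wait on its event handle, possibly together with other handles in WaitForMultipleObjects(). A rough, untested sketch (the pipe name is made up):

/* Rough Win32 sketch (untested, pipe name made up): overlapped I/O on a
 * named pipe.  Start the read, then block on the event handle - several
 * such handles can be multiplexed with WaitForMultipleObjects(), which
 * is the rough analogue of select()/poll(). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE pipe = CreateFileA("\\\\.\\pipe\\demo", GENERIC_READ, 0, NULL,
                              OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;

    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

    char buf[256];
    DWORD n = 0;
    if (!ReadFile(pipe, buf, sizeof buf, &n, &ov) &&
        GetLastError() == ERROR_IO_PENDING) {
        WaitForSingleObject(ov.hEvent, INFINITE);  /* wait for data */
        GetOverlappedResult(pipe, &ov, &n, FALSE);
    }

    printf("read %lu bytes\n", (unsigned long)n);
    CloseHandle(ov.hEvent);
    CloseHandle(pipe);
    return 0;
}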
P.S. It is no wonder that the Windows process API is far beyond UNIX's.
>
Linux has clone(2). This can create regular POSIX-style processes, as well
as regular POSIX-style threads. And quite a few things in-between.
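A rough, untested sketch of that range - the same clone(2) call gives something fork()-like or something much closer to a thread, depending on the flags (the flag choices here are only illustrative):

/* Rough sketch (Linux-specific, untested): the same clone(2) call gives a
 * fork()-like child or something much closer to a thread, depending on
 * the flags passed. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int child(void *arg)
{
    printf("child running: %s\n", (const char *)arg);
    return 0;
}

int main(void)
{
    char *stack = malloc(1024 * 1024);
    if (!stack)
        return 1;
    char *stack_top = stack + 1024 * 1024;   /* stack grows downwards on x86 */

    /* SIGCHLD alone: a separate address space, essentially fork(). */
    pid_t p1 = clone(child, stack_top, SIGCHLD, "fork-like");
    waitpid(p1, NULL, 0);

    /* Sharing memory, file descriptors, filesystem info and signal
     * handlers: much closer to a pthread. */
    pid_t p2 = clone(child, stack_top,
                     CLONE_VM | CLONE_FILES | CLONE_FS | CLONE_SIGHAND | SIGCHLD,
                     "thread-like");
    waitpid(p2, NULL, 0);

    free(stack);
    return 0;
}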
>
On the other hand, Windows NT was developed by people influenced by
the VMS design. VMS had a very elaborate process communication API.
>
And single drive letters?
They are actually dozens of characters long, if you mean the device names.
I thought by "drive letters", he meant "drive letters" - "c:", "d:", etc.