Subject : Re: News : ARM Trying to Buy Ampere Computing
From : rich (at) *nospam* example.invalid (Rich)
Newsgroups : comp.os.linux.misc
Date : 21 Jan 2025, 20:56:36
Organisation : A noiseless patient Spider
Message-ID : <vmou5k$bc8h$1@dont-email.me>
References : 1 2 3 4 5 6 7 8 9 10
User-Agent : tin/2.6.1-20211226 ("Convalmore") (Linux/5.15.139 (x86_64))
D <nospam@example.net> wrote:
On Mon, 20 Jan 2025, 186282@ud0s4.net wrote:
On 1/20/25 3:53 PM, D wrote:
On Mon, 20 Jan 2025, The Natural Philosopher wrote:
On 20/01/2025 09:30, D wrote:
The Pi hat or OMV ?
The pi, with directly connected spinning disks. Does the hat have its own
extra power supply?
I've managed to get a P4 I think to run one spinning rust disk without
extra power.
Strictly it depends on the disk.
The pi hat for 5 drives has an external 60W PSU
Ahh, if it has an external PSU then there is no problem. Ideally, if the pi
hat for 5 drives is intended to accommodate 5 spinning drives, it would be
nice if it did so at full speed.
One review said the WRITEs were a little pokey,
but not TOO bad. READs were apparently snappy.
This is OK ... most stuff on HDDs is "write once /
read more often".
Hmm, do you have a link? What does "a little pokey" mean in terms of
writes? If it is only performance and latency related, then it is ok,
since the software will take care of a lot of that for me.
The nymshift troll was likely referring to two possibilities:
1) SMR mechanical drives
2) SSD's
In both cases, writes have to be done in what amounts to a "two step
process".
For SMR drives, because the magnetic tracks physically overlap, writes
get queued to a non-SMR area, and then get "moved" to the actual disk
sectors as a bigger batch to maintain the proper "overlap" of the
magnetic tracks.
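A toy sketch of that two-step staging (all names and the cache size here
are invented for illustration; real SMR firmware is far more involved):

```python
# Toy model of SMR write staging: fast writes land in a non-shingled
# (CMR) cache region, then get flushed to the shingled zone as a batch.

class SmrDrive:
    def __init__(self, cache_capacity=4):
        self.cache = []                  # staged (sector, data) pairs
        self.cache_capacity = cache_capacity
        self.media = {}                  # sector -> data on shingled tracks

    def write(self, sector, data):
        # Step 1: quick write into the CMR staging area.
        self.cache.append((sector, data))
        if len(self.cache) >= self.cache_capacity:
            self.flush()

    def flush(self):
        # Step 2: rewrite the batch into the shingled zone in sector
        # order, preserving the track overlap; this is the slow part.
        for sector, data in sorted(self.cache):
            self.media[sector] = data
        self.cache.clear()

drive = SmrDrive()
for s in range(6):
    drive.write(s, f"block{s}")
drive.flush()
print(len(drive.media))   # all 6 sectors have reached the media
```

Once the staging area fills faster than it can be flushed, sustained
write speed drops to the batch-rewrite rate, which is the "pokey" writes
reviewers see.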
For SSD's, writes occur to an "erased" flash block (typically much
larger than the "disk sector" size used by the host). Given enough
writes over a short enough timeframe, the SSD controller can run out of
"pre-erased" blocks to use, and when that happens write speed slows
down to the rate achievable when a "block erase" has to occur before
the actual writes can hit the media. Note that this "block erase" can
also involve moving any partially used data sectors out of the block
into another block, creating a "write amplification" situation as
well.
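The same fast-path/slow-path split can be sketched like this (block and
pool sizes are made up, and real controllers erase in the background and
migrate valid pages during garbage collection, which this ignores):

```python
# Toy model of an SSD's pre-erased block pool: writes are fast while a
# pre-erased block is available, and stall on an in-path "block erase"
# once the pool is exhausted.

class ToySsd:
    def __init__(self, pre_erased_blocks=2, pages_per_block=4):
        self.pages_per_block = pages_per_block
        self.pre_erased = pre_erased_blocks
        self.open_pages = pages_per_block   # forces grabbing a block first

    def write_page(self):
        stalled = False
        if self.open_pages == self.pages_per_block:
            # Open block is full: need a fresh erased block.
            if self.pre_erased > 0:
                self.pre_erased -= 1        # fast: already erased
            else:
                stalled = True              # slow: erase-before-write
            self.open_pages = 0
        self.open_pages += 1
        return "slow" if stalled else "fast"

ssd = ToySsd(pre_erased_blocks=2, pages_per_block=4)
results = [ssd.write_page() for _ in range(16)]
print(results.count("fast"), results.count("slow"))   # 14 2
```

The first couple of blocks' worth of writes go at full speed; after
that, every fourth write eats an erase, which is exactly the "runs fast
for a while, then slows down" behaviour benchmarks show on sustained
writes.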