On Tue, 21 Jan 2025, 186282@ud0s4.net wrote:
That's how we do it.
On 1/21/25 2:56 PM, Rich wrote:
This is the truth!
D <nospam@example.net> wrote:
On Mon, 20 Jan 2025, 186282@ud0s4.net wrote:
On 1/20/25 3:53 PM, D wrote:
On Mon, 20 Jan 2025, The Natural Philosopher wrote:
On 20/01/2025 09:30, D wrote:
The Pi hat or OMV?
The Pi, with directly connected spinning disks. Does the hat have its own
extra power supply?
I've managed to get a Pi 4, I think, to run one spinning-rust disk without
extra power.
Strictly it depends on the disk.
The Pi hat for 5 drives has an external 60 W PSU.
Ah, if it has an external PSU then there is no problem. Ideally, if the Pi
hat for 5 drives is intended to accommodate 5 spinning drives, it would be
nice if it did so at full speed.
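As a rough sketch of the power question (the wattages below are illustrative guesses, not datasheet figures for any particular drive or HAT), here is why an external 60 W supply matters and why controllers often stagger spin-up:

```python
# Hypothetical power-budget check for a 5-bay HAT with a 60 W PSU.
# Per-drive wattages are illustrative assumptions, not specs.
PSU_WATTS = 60
DRIVES = 5
SPINUP_W = 20   # assumed worst-case spin-up draw per 3.5" drive
ACTIVE_W = 6    # assumed steady-state draw per drive

def simultaneous_spinup_ok(psu=PSU_WATTS, n=DRIVES, spin=SPINUP_W):
    """Can the PSU spin up all n drives at once?"""
    return n * spin <= psu

def max_staggered_group(psu=PSU_WATTS, n=DRIVES, spin=SPINUP_W, active=ACTIVE_W):
    """Largest group that may spin up together while the rest run steady."""
    for k in range(n, 0, -1):
        if k * spin + (n - k) * active <= psu:
            return k
    return 0
```

With these assumed figures, all five drives cannot spin up at once (100 W against a 60 W budget), but groups of two can while the rest run steady, which is exactly the situation staggered spin-up exists for.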
One review said the WRITEs were a little pokey, but not TOO bad. READs
were apparently snappy.

This is OK ... most stuff on HDDs is "write once / read more often".
Hmm, do you have a link? What does "a little pokey" mean in terms of
writes? If it is only performance- and latency-related, then it is OK,
since the software will take care of a lot of that for me.
The nymshift troll was likely referring to two possibilities:

1) SMR mechanical drives
2) SSDs

In both cases, writes have to be done in what amounts to a "two-step
process".
For SMR drives, because the magnetic tracks physically overlap, writes
get queued to a non-SMR area, and then get "moved" to the actual disk
sectors as a bigger batch to maintain the proper "overlap" of the
magnetic tracks.
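That staging step can be sketched as a toy model (nothing like real firmware, purely to illustrate the two-step path: fast writes into a CMR cache zone, slow batch flushes into the shingled area):

```python
# Toy model of SMR staged writes: sectors land in a small CMR cache
# zone first, then get flushed to the shingled area as one sequential
# batch so the overlapping tracks are rewritten in order.
class SMRDiskModel:
    def __init__(self, cache_capacity=4):
        self.cache = []          # the non-SMR (CMR) staging area
        self.cache_capacity = cache_capacity
        self.shingled = []       # the overlapping-track zone
        self.flushes = 0

    def write(self, sector):
        self.cache.append(sector)            # fast: hits the CMR cache
        if len(self.cache) >= self.cache_capacity:
            self.flush()                     # slow: batch rewrite

    def flush(self):
        # A real drive rewrites whole zones; here we just batch-move.
        self.shingled.extend(self.cache)
        self.cache.clear()
        self.flushes += 1
```

Once the staging area fills under sustained load, throughput drops to the flush rate, which is the "little pokey" write behaviour a review would notice.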
For SSDs, writes occur to an "erased" flash block (typically much
larger than the "disk sector" size used by the host), and given enough
writes over a short enough timeframe the SSD controller can run out of
"pre-erased" blocks to use. When that happens, write speed slows
down to the rate achievable when a "block erase" has to occur
before the actual writes can hit the media. Note that this "block
erase" can also involve moving any partially used data sectors out of
the block into another block, creating a "write amplification"
situation as well.
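The same idea as a toy model (illustrative only, not how any particular controller works): writes are cheap while pre-erased blocks last, then each host write pays for relocating a victim block's live pages too, which is the write amplification just described:

```python
# Toy model of SSD write slowdown and write amplification.
# Behaviour and numbers are illustrative assumptions, not a real FTL.
class SSDModel:
    def __init__(self, pre_erased_blocks=2):
        self.pre_erased = pre_erased_blocks
        self.host_writes = 0     # blocks the host asked to write
        self.media_writes = 0    # blocks actually written to flash

    def write_block(self, live_pages_in_victim=3):
        self.host_writes += 1
        if self.pre_erased > 0:
            self.pre_erased -= 1
            self.media_writes += 1          # fast path: erased block ready
        else:
            # Slow path: move the victim block's live pages elsewhere,
            # erase it, then write. Extra media writes per host write.
            self.media_writes += 1 + live_pages_in_victim

    @property
    def write_amplification(self):
        return self.media_writes / self.host_writes
```

In this sketch, two fast writes followed by two slow ones yields 10 media writes for 4 host writes, a write amplification of 2.5.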
Disks - magnetic or SSD - are kinda messy. Of course
their mission is kinda messy: deal with odd-sized
blobs of data, try to jam them in somewhere, maybe have
to move pre-existing data around, try not to create TOO
many 'gaps', for years and years.
SSDs are quicker regardless and use less power, but
that doesn't mean they're just a petabyte of empty
space; STUFF has to happen. SSDs trend smaller than
HDDs too and are more $$$ per terabyte. Yer basic
WD/Seagate magnetic laptop drives are a pretty good
deal IF you can handle the power req.
Made a "different building" aux backup unit using
a Pi-3 and 2.5" USB mag drive. The idea was to
keep the Most Important Stuff in a separate
building, on a separate leg of the power system. Used
wi-fi ... but had all day to do its thing. This
was protection against lightning/surges/fires
and the dreaded Giant Mug Of Coffee that might
afflict the main NAS. Cheap, worked great, a
Python pgm to do the backups (DO confirm yer
USB and NAS are both mounted). The USB drive
was powered by the Pi, not an external wart.
The drive and Pi were taped together and the
whole mess was velcroed to the underside of a
shelf out in a shop building.

Good stuff! I replicate between two countries for added resilience.
Both rsync and restic work great backing up to a Tor hidden service.
To speed things up, the first backup can be done locally, and after
that only deltas are sent from around the world.
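The "DO confirm yer USB and NAS are both mounted" advice is worth automating; a minimal sketch of such a guard (the paths and the rsync invocation are my assumptions, not the poster's actual program):

```python
# Guard a backup run by verifying both ends are real mount points,
# so we never copy into an empty mount-point directory after a
# failed mount. Paths are hypothetical; adjust to your layout.
import os
import subprocess
import sys

NAS_MOUNT = "/mnt/nas"       # hypothetical NAS mount point
USB_MOUNT = "/mnt/backup"    # hypothetical USB drive mount point

def both_mounted(paths=(NAS_MOUNT, USB_MOUNT)):
    """True only if every path is an actual mount point, not a bare dir."""
    return all(os.path.ismount(p) for p in paths)

def run_backup():
    if not both_mounted():
        sys.exit("refusing to back up: a mount point is missing")
    # -a preserves modes/times/links; --delete mirrors removals
    subprocess.run(["rsync", "-a", "--delete",
                    NAS_MOUNT + "/", USB_MOUNT + "/"], check=True)
```

The key line is `os.path.ismount`: a directory that merely exists is not enough, it must actually have a filesystem mounted on it.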
So far restic is still good. I wonder what its weaknesses are? It would
be a shame to drop my trusted old rsync script + hardlinks in favour of
restic only to discover some hidden bug. On the other hand, it seems as
if thousands of people all over the world are using it and are happy
with it, so maybe it is mature enough.

I've generally used rsync, though rarely as client/server.
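For reference, the rsync-plus-hardlinks scheme can be sketched like this (the directory layout is my assumption; `--link-dest` is the standard rsync option that hard-links unchanged files against a previous snapshot):

```python
# Build the rsync command for one dated, hard-linked snapshot.
# Each run produces what looks like a full tree, but unchanged files
# are hard links into the previous snapshot, so only deltas cost space.
import datetime
import subprocess

def snapshot_cmd(src, dest_root, prev=None, stamp=None):
    """Return the rsync argv for one snapshot; run it with subprocess.run."""
    stamp = stamp or datetime.date.today().isoformat()
    cmd = ["rsync", "-a", "--delete"]
    if prev:
        cmd += ["--link-dest", prev]   # unchanged files become hard links
    cmd += [src + "/", f"{dest_root}/{stamp}/"]
    return cmd

# Example (hypothetical paths):
# subprocess.run(snapshot_cmd("/data", "/backups",
#                             prev="/backups/2025-01-20"), check=True)
```

Restic's pitch over this scheme is chunk-level deduplication and encrypted repositories, at the cost of needing restic itself to read the data back, whereas the hardlink snapshots remain plain browsable directories.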