On 2/21/25 1:57 AM, c186282 wrote:
On 2/19/25 1:27 AM, c186282 wrote:
On 2/18/25 8:49 PM, Lawrence D'Oliveiro wrote:
On Tue, 18 Feb 2025 02:59:35 -0500, c186282 wrote:
>
SAMBA isn't anything NEW - it SHOULDN'T be this difficult.
>
It isn’t. I find when I test things with smbclient, they usually work.
>
It’s invariably the Windows clients that cause problems.
>
In this exact case, it's Linux clients.
>
Tried and tried, checked smb.conf against recent
examples, checked /etc/group, checked permissions
and ownership. Just WON'T work.
>
As said, I used SAMBA in a mixed environment for
many many years. Then, decidedly by bookworm,
SOMETHING nasty changed. This is terrible.
>
Did NFS shares instead. Don't love NFS - SAMBA
offers much more fine-grained options and
more nuanced security. If this was a work
environment I'd be even more upset - but
this is a 'home' system now that I've retired
and Vlad probably ain't looking.
>
Anyway, a root @reboot runs a script that mounts
the /dev/sdx USBs. They DO show up where they
are supposed to be. Liberal permissions. SAMBA
just CAN'T share them properly, nothing but
permission errors on the client no matter how I
set things - and I've used Linux since 'X' was
first introduced, back when it all came on floppies
(no fun setting up mouse/keyboard/monitor back then).
>
Dunno WHAT the hell they changed, but it's not
documented worth a damn.
Followup:
DID have to do it all with NFS, alas.
Root crontab @reboot, after a safe delay, mounts
the USB drives into local subdirs.
@reboot sleep 100 && /root/scripts/mountUSB.sh &
#!/bin/bash
# mount the external USBs onto the local share points
sleep 10
mount -t ext4 -o defaults /dev/sdb1 /home/nas/share/MyShare1
mount -t ext4 -o defaults /dev/sdc1 /home/nas/share/MyShare2
/etc/exports has the proper NFS share defs. Note
that the syntax can be fiddly.
/home/nas/share/MyShare1 192.168.0.0/24(rw)
/home/nas/share/MyShare2 192.168.0.0/24(rw)
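After editing /etc/exports the server has to re-read it.
Assuming the usual nfs-kernel-server tooling, exportfs does
that and shows what's actually being offered :

exportfs -ra   # re-read /etc/exports and re-export everything
exportfs -v    # list the active exports with their effective options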
All clients have the proper /etc/fstab mount defs.
192.168.0.123:/home/nas/share/MyShare1 /mnt/shar1 nfs defaults,timeo=900,retrans=5,_netdev 0 0
192.168.0.123:/home/nas/share/MyShare2 /mnt/shar2 nfs defaults,timeo=900,retrans=5,_netdev 0 0
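Worth a quick sanity check from a client before trusting the
fstab lines - showmount comes with the usual NFS client
packages, and the IP here is just the NAS address from the
entries above :

showmount -e 192.168.0.123    # list what the NAS is exporting
mount -a                      # pull in the fstab entries
df -h /mnt/shar1 /mnt/shar2   # confirm they really mounted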
Root crontab runs a backup bash script twice a day.
The USB drives all have a tag file - "USB-Mounted" -
on them. The script makes sure it's there. If so
then it's "rsync -a --delete <whatever>/ <wherever>".
Pretty quick. Appends the datetime and 'ok' to a
log file. If the tag file is NOT found then it
appends the datetime with "fail" to the log.
5 6,18 * * * /root/scripts/dupSDB1.sh
#!/bin/bash
# use rsync to copy sdb1 (samsung ssd) to sdc1 (wd black)
# when run, it will do rsync and append 'backups.log'
# with either the success datetime OR the datetime
# with 'fail' afterwards.

# our test file - if not there then usb
# is not properly mounted
FILE=/home/nas/share/MyShare1/USB-Mounted

if test -f "$FILE" ; then
    rsync -a --delete /home/nas/share/MyShare1/ /home/nas/share/MyShare2
    echo "backup done"
    dt=$(date)
    dt2="${dt} ok"
    touch /home/nas/backups.log
    echo "$dt2" >> /home/nas/backups.log
else
    dt=$(date)
    dt2="${dt} fail"
    touch /home/nas/backups.log
    echo "$dt2" >> /home/nas/backups.log
fi
Early morning, root crontab reboots the whole box.
0 1 * * * /sbin/shutdown -r +1
It works, it's easy, not even any Python or 'C'.
Quick improvement, test for the "USB-Mounted" on
the proposed mirror drive as well, to be sure.
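Something like this, grafted onto the v0.1 script above -
the SRC/DST names are just mine for the sketch :

# sketch - require the tag file on BOTH source and mirror before rsync
SRC=/home/nas/share/MyShare1/USB-Mounted
DST=/home/nas/share/MyShare2/USB-Mounted
if test -f "$SRC" && test -f "$DST" ; then
    rsync -a --delete /home/nas/share/MyShare1/ /home/nas/share/MyShare2
fi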
Client NFS connections SEEM to hold thru a reboot
of the NAS - at least if it's not TOO lengthy.
Auto-backups MIGHT want to run a 'mount -a' though
just to be sure.
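If a client-side job wanted that extra certainty, a couple
of lines up front would do it - mountpoint ships with
util-linux, and the paths are the fstab mountpoints above :

# sketch - re-mount the NFS shares if they dropped off after a NAS reboot
mountpoint -q /mnt/shar1 || mount /mnt/shar1
mountpoint -q /mnt/shar2 || mount /mnt/shar2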
Set the usb SSD as the prime backup point since
it's a bit faster. For now the mirror is a WD
Black magnetic. Decided against softRAID.
This NAS runs on the latest MX Linux. You'll
rarely go wrong with MX.
Final followup - the revised/improved backup script :
. . .
#!/bin/bash
# v0.2
# we now test for both source and dest in /proc/mounts
# as solid confirmation.
# using rsync to copy sdb1 (samsung ssd) to sdc1 (wd black)
# when run, it will do rsync and append /home/nas/share/MyShare1/backups.log
# with either datetime with 'ok' OR the datetime with 'fail'

mount -a                 # (re)mount all
OKFLG="NO"               # a flag
dt=$(date)               # guess
PM=$(</proc/mounts)      # load all current mounts

# check for mountpoints - if found then proceed
if [[ $PM == *"/dev/sdb1"* ]]; then
    if [[ $PM == *"/dev/sdc1"* ]]; then
        rsync -a --delete /home/nas/share/MyShare1/ /home/nas/share/MyShare2
        echo "backup done"
        dt="${dt} ok"
        touch /home/nas/share/MyShare1/backups.log
        echo "$dt" >> /home/nas/share/MyShare1/backups.log
        OKFLG="YES"
    fi
fi

# mountpoints NOT found - fail
if [ "$OKFLG" != "YES" ] ; then
    echo "backup failed"
    dt="${dt} fail"
    touch /home/nas/share/MyShare1/backups.log
    echo "$dt" >> /home/nas/share/MyShare1/backups.log
fi
. . .
It seemed 'more certain' to actually look in /proc/mounts
to see if the USB drives were still mounted than to rely
on a 'tag file' on them.
Clearly we've loaded the contents of /proc/mounts into a var
and then done pattern searches for the relevant /dev/sdX? in there.
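A slightly tighter variant, if anyone wanted it, anchors the
device name at the start of a /proc/mounts line so "/dev/sdb1"
can't accidentally match inside some longer name - plain grep
handles that :

# sketch - match the devices at the start of a /proc/mounts line
if grep -qs '^/dev/sdb1 ' /proc/mounts && grep -qs '^/dev/sdc1 ' /proc/mounts ; then
    echo "both backup drives are mounted"
fi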
The log file was moved to the primary share so it'd be easy
to view from any unit making use of the NAS capability.
'touch'-ing ensures the log file IS there before we try
to append anything to it. A straight "echo >>" will create
the file if it's not there, but why risk it ?
Decided to go with the success/fail FLAG var because it'll
be easy to mod for more info about issues if needed.
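For instance, a WHY variable (mine, not in the script above)
could ride along with the flag and record which drive went
missing - a few lines dropped into the v0.2 script would do :

# sketch - let the flag carry a reason along with pass/fail
OKFLG="YES" ; WHY=""
PM=$(</proc/mounts)
[[ $PM == *"/dev/sdb1"* ]] || { OKFLG="NO" ; WHY="${WHY} sdb1-missing" ; }
[[ $PM == *"/dev/sdc1"* ]] || { OKFLG="NO" ; WHY="${WHY} sdc1-missing" ; }
echo "$(date) ${OKFLG}${WHY}" >> /home/nas/share/MyShare1/backups.log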
In any case, this is all straight-up bash. Could have done
it in Python, maybe a little more clearly, but I like KISS.
The primary NAS share is duplicated to the other disk twice
a day by root crontab. Any probs, don't do anything but WARN.
COULD add e-mail or whatever, but as this is a small 'home'
system, well, just not needed. Still two drive slots in
my Sabrent external USB unit ... and the structure of
the backup script is easy to add to, so as to deal with
whatever I wanna do with them.
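If e-mail ever did seem worth it, the fail branch could just
pipe a line to 'mail' - this assumes a working local MTA plus
the bsd-mailx/mailutils 'mail' command, and the address is
only a placeholder :

# sketch - warn by e-mail on a failed backup
echo "NAS backup FAILED at $(date)" | mail -s "NAS backup FAIL" admin@example.com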