Subject: Re: Ksplice equivalent for VMS?
From: cross (at) *nospam* spitfire.i.gajendra.net (Dan Cross)
Newsgroups: comp.os.vms
Date: 21 Feb 2025, 14:13:12
Organization: PANIX Public Access Internet and UNIX, NYC
Message-ID: <vp9u58$s14$1@reader2.panix.com>
References: 1 2 3 4
User-Agent: trn 4.0-test77 (Sep 1, 2010)
In article <vp9snk$2arhn$1@paganini.bofh.team>,
Waldek Hebisch <antispam@fricas.org> wrote:
>Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
>> On Wed, 19 Feb 2025 15:05:35 -0500, Arne Vajhøj wrote:
>>> * cluster with node A and B
>>> * critical process P that for whatever reason does not work
>>>   running concurrent on multiple nodes runs on A
>>> * node A needs to be taken down for some reason
>>> * so VMS on node A and B does some magic and migrate P from A to B
>>>   transparent to users (obviously require a cluster IP address or load
>>>   balancer)
>>
>> Linux virtualization migrates entire VMs that way, rather than individual
>> processes.
>
>Migrating a whole VM will also migrate the _current_ kernel. The whole
>point of the original question is updating the kernel.
Indeed.  But if the goal is to update the host, and not the
guest, it's an acceptable method, provided you can meet the
requirements vis-à-vis resources et al., and can handle the
limitations with respect to direct access to devices.  That also
assumes the workloads in question can tolerate the drag on
performance during the migration (no matter how you shake it,
there's a mandatory blackout period where the guest is not
running on either the source or the target system).
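
To make that blackout concrete, here is a rough toy model of the
usual pre-copy approach (a Python sketch; the names, numbers, and
the dirty-page model are made up for illustration and are not the
bhyve/Oxide protocol or any real hypervisor API): copy memory while
the guest keeps running, re-copy whatever it dirtied in the
meantime, and only when the dirty set is small do you pause the
guest, move the remainder plus vCPU/device state, and resume on the
target.  That pause-to-resume window is the blackout.

    # Toy model of pre-copy live migration, showing where the mandatory
    # blackout window comes from.  Every name and number here is invented
    # for illustration; this is not bhyve, Oxide, or any real API.

    def migrate(total_pages=1_000_000, dirty_ratio=0.10,
                stop_copy_threshold=2_000, max_rounds=30):
        """Simulate iterative pre-copy.

        dirty_ratio models how many pages the guest dirties (as a
        fraction of the pages copied) while a copy round is in flight.
        """
        to_copy = total_pages          # round 1: copy all of guest memory
        copied = 0

        for round_no in range(1, max_rounds + 1):
            copied += to_copy
            # While that round was copying, the still-running guest
            # dirtied roughly dirty_ratio * to_copy pages; re-copy them.
            to_copy = int(to_copy * dirty_ratio)
            if to_copy <= stop_copy_threshold:
                break

        # Blackout: pause the guest, move the last dirty pages plus vCPU
        # and device state, then resume on the destination.  The guest
        # runs on neither machine during this window, so we want
        # `to_copy` to be small by the time we get here.
        blackout_pages = to_copy
        copied += blackout_pages
        print(f"rounds: {round_no}, total pages copied: {copied}, "
              f"pages moved during blackout: {blackout_pages}")

    if __name__ == "__main__":
        migrate()

The whole point of the iteration is to push as much state as
possible while the guest is still running, so that the final,
unavoidable stop-and-copy is as small as it can be made.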
I designed the live migration protocol used for Bhyve in the
Oxide architecture. Minimizing that period was an explicit
design goal, but at some point you simply have to bottle up the
last bits of state from the source and move them to the
destination.
- Dan C.