To: German, whom I feel okay to To: now that
https://github.com/kubevirt/kubevirt/pull/13756 is merged, and Hanna,
who knows a thing or two about vhost-user based migration.

This is Stefano from your neighbouring mailing list, passt-dev. David
is wondering:

On Wed, 5 Feb 2025 13:09:42 +1100
David Gibson <david(a)gibson.dropbear.id.au> wrote:

> On Wed, Feb 05, 2025 at 01:39:03AM +0100, Stefano Brivio wrote:

...what we should be doing in the source passt at different stages of
moving our TCP connections (or failing to move them) over to the
target, which might be inspired by what you're doing with your...
filesystem things in virtiofsd.

We're taking for granted that as long as we have a chance to detect
failure (e.g. we can't dump sequence numbers from a TCP socket in the
source), we should use that to abort the whole thing.

Once we're past that point, we have several options. And actually,
that's regardless of failure: because we also have the question of
what to do if we see that nothing went wrong.

We can exit in the source, for example (this is what this patch
implements): wait for VHOST_USER_CHECK_DEVICE_STATE, report that, and
quit. Or we can just clear up all our connections and resume (start
from a blank state). Or do that, only if we didn't smell failure.

Would you have some pointers, general guidelines, ideas? I know that
the topic is a bit broad, but I'm hopeful that you have a lot of clear
answers for us. :)

Thanks.

> > On migration, the source process asks passt-helper to set TCP
> > sockets in repair mode, dumps the information we need to migrate
> > connections, and closes them.
> >
> > At this point, we can't pass them back to passt-helper using
> > SCM_RIGHTS, because they are closed, from that perspective, and
> > sendmsg() will give us EBADF. But if we don't clear repair mode,
> > the port they are bound to will not be available for binding in
> > the target.
> >
> > Terminate once we're done with the migration and we reported the
> > state. This is equivalent to clearing repair mode on the sockets
> > we just closed.
>
> As we've discussed, quitting still makes sense, but the description
> above is not really accurate. Perhaps,
>
> ===
> Once we've passed the migration's "point of no return", there's no
> way to resume the guest on the source side, because we no longer own
> the connections. There's not really anything we can do except exit.
> ===
>
> Except.. thinking about it, I'm not sure that's technically true.
> After migration, the source qemu enters a kind of limbo state. I
> suppose for the case of to-disk migration (savevm) the guest can
> actually be resumed. Which for us is not really compatible with
> completing at least a local migration properly. Not really sure what
> to do about that.
>
> I think it's also technically possible to use monitor commands to
> boot up essentially an entirely new guest instance in the original
> qemu, in which case for us it would make sense to basically reset
> ourselves (flush the flow table).
>
> Hrm.. we really need to know the sequence of events in a bit more
> detail to get this right (not that this stops us improving the guts
> of the logic in the meantime).
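For German and Hanna, a bit of context on the "repair mode" mentioned
in the commit message above: it's the TCP_REPAIR socket option. Just
to give an idea of what "dump sequence numbers" means for one socket,
here is a rough sketch, not the actual passt code (dump_seq() is a
made-up name, and in the real flow passt-helper toggles TCP_REPAIR on
our behalf, because that needs CAP_NET_ADMIN):

  #include <stdint.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  /* Queue selectors from linux/tcp.h, in case libc headers lack them */
  #ifndef TCP_SEND_QUEUE
  #define TCP_RECV_QUEUE	1
  #define TCP_SEND_QUEUE	2
  #endif

  /* dump_seq() - Read send/receive sequence numbers of a connected socket
   * (hypothetical helper, for illustration only)
   */
  static int dump_seq(int s, uint32_t *snd_seq, uint32_t *rcv_seq)
  {
  	socklen_t len = sizeof(uint32_t);
  	int q, on = 1;

  	/* Needs CAP_NET_ADMIN: done by passt-helper in the real flow */
  	if (setsockopt(s, IPPROTO_TCP, TCP_REPAIR, &on, sizeof(on)))
  		return -1;

  	q = TCP_SEND_QUEUE;
  	if (setsockopt(s, IPPROTO_TCP, TCP_REPAIR_QUEUE, &q, sizeof(q)) ||
  	    getsockopt(s, IPPROTO_TCP, TCP_QUEUE_SEQ, snd_seq, &len))
  		return -1;

  	q = TCP_RECV_QUEUE;
  	if (setsockopt(s, IPPROTO_TCP, TCP_REPAIR_QUEUE, &q, sizeof(q)) ||
  	    getsockopt(s, IPPROTO_TCP, TCP_QUEUE_SEQ, rcv_seq, &len))
  		return -1;

  	return 0;
  }

The real thing transfers more than that (windows, queued data, ...),
but this is the step behind the failure example I gave above: if any
of those calls fails in the source, we still hold the connections and
can abort the whole migration cleanly.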
> I'm asking around to see if I can find who did the migration stuff
> for virtiofsd, so we can compare notes.

> Signed-off-by: Stefano Brivio <sbrivio(a)redhat.com>
> ---
>  vhost_user.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/vhost_user.c b/vhost_user.c
> index b107d0f..70773d6 100644
> --- a/vhost_user.c
> +++ b/vhost_user.c
> @@ -997,6 +997,8 @@ static bool vu_send_rarp_exec(struct vu_dev *vdev,
>  	return false;
>  }
>
> +static bool quit_on_device_state = false;
> +
>  /**
>   * vu_set_device_state_fd_exec() - Set the device state migration channel
>   * @vdev:	vhost-user device
> @@ -1024,6 +1026,9 @@ static bool vu_set_device_state_fd_exec(struct vu_dev *vdev,
>  	migrate_request(vdev->context, msg->fds[0],
>  			direction == VHOST_USER_TRANSFER_STATE_DIRECTION_LOAD);
>
> +	if (direction == VHOST_USER_TRANSFER_STATE_DIRECTION_SAVE)
> +		quit_on_device_state = true;
> +
>  	/* We don't provide a new fd for the data transfer */
>  	vmsg_set_reply_u64(msg, VHOST_USER_VRING_NOFD_MASK);
>
> @@ -1201,4 +1206,10 @@ void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events)
>
>  	if (reply_requested)
>  		vu_send_reply(fd, &msg);
> +
> +	if (quit_on_device_state &&
> +	    msg.hdr.request == VHOST_USER_CHECK_DEVICE_STATE) {
> +		info("Migration complete, exiting");
> +		exit(EXIT_SUCCESS);
> +	}
>  }

--
Stefano