Stefano convinced me that my earlier proposal for the dynamic update protocol was unnecessarily complex. Plus, I saw a much better way of handling socket continuity in the context of a "whole table" replacement. So here's an entirely revised protocol suggestion.

# Outline

I suggest that each connection to the control socket handle a single transaction:

1. Server hello
   - Server sends magic number, version
   - Possibly feature flags / limits (e.g. max number of rules allowed)
2. Client hello
   - Client sends magic number
   - Do we need anything else?
3. Server lists pifs
   - Server sends the number of pifs, their indices and names
4. Server lists rules
   - Server sends the list of rules, one pif at a time
5. Client gives new rules
   - Client sends the new list of rules, one pif at a time
   - Server loads them into the shadow table and validates them (no socket operations)
6. Server acknowledges
   - Either reports an error and disconnects, or acks and waits for the client
7. Client signals apply
   - Server swaps shadow and active tables, and syncs sockets with the new active table
8. Server gives error summary
   - Server reports bind/listen/whatever errors
9a. Client signals commit
   - Shadow table (now the old table) is discarded

or

9b. Client signals rollback
   - Shadow and active tables are swapped back, and sockets are synced again
   - Shadow table (now the "new" table again) is discarded
   - New bind error report?
10. Server closes control connection

# Client disconnects

A client disconnect before step (7) is straightforward: discard the shadow table; nothing has changed.

A client disconnect between (7) and (9) triggers a rollback, same as (9b).

# Error reporting

Error reporting at step (6) is fairly straightforward: we can send an error code and/or an error message.

Error reporting at (8) is trickier. As a first cut, we could just report "yes" or "no", taking into account the FWD_WEAK flag. But the client might be able to make better decisions, or at least give better messages to the user, if we report more detailed information.
Exactly how detailed is an open question: number of bind failures? Number of failures per rule? Specific ports which failed?

# Interim steps

I propose these steps toward implementing this:

i. Merge TCP and UDP rule tables. The protocol above assumes a single rule table per pif, which I think is an easier model to understand and more extensible for future protocol support.

ii. Read-only client. Implement steps (1) to (4). The client can query and list the current rules, but not change them.

iii. Rule updates. Implement the remaining protocol steps, but with a "close and re-open" approach on the server, so unaltered listening sockets might briefly disappear.

iv. Socket continuity. Have the socket sync "steal" sockets from the old table in preference to re-opening them.

If you have any time to work on (ii) while I work on (i), those should be parallelizable.

# Concurrent updates

The server guarantees that a single transaction as above is atomic, in the sense that nothing else is allowed to change the rules between (4) and (9). The easiest way to do that initially is probably to allow only a single client connection at a time.

If there's a reason to, we could alter that so that concurrent connections are allowed, but if another client changed anything after step (4), we give an error on the next op (or maybe just close the control socket from the server side).

# Tweaks / variants

- I'm not sure that step (2) is necessary.
- I'm not certain that step (7) is necessary, although I do kind of prefer the client getting a chance to see a "so far, so good" before any socket operations happen.

-- 
David Gibson (he or they)      | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you, not the other way
                               | around.
http://www.ozlabs.org/~dgibson