Honestly, I don't see why you'd need to batch it every 100ms. Sure, you don't want to send a mouse movement every time an event triggers, but surely 30fps looks smooth and won't overload the system.
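To make that concrete, here's a minimal sketch of the coalescing I mean: buffer mouse moves and only flush the latest one once per 30fps tick. The class and tick rate are mine, not from any real input API; the clock is injectable so the logic is testable.

```python
import time

class MouseCoalescer:
    """Collapse a burst of mouse-move events into one send per tick.

    TICK_S = 1/30 is the assumed 30 fps send rate from the comment
    above, not a value taken from any real library.
    """
    TICK_S = 1 / 30

    def __init__(self, now=time.monotonic):
        self._now = now
        self._latest = None            # most recent (x, y), not yet sent
        self._last_send = float("-inf")
        self.sent = []                 # stand-in for the network send

    def on_mouse_move(self, x, y):
        self._latest = (x, y)
        # Flush only if a full tick has elapsed since the last send;
        # intermediate positions are simply overwritten.
        if self._now() - self._last_send >= self.TICK_S:
            self.sent.append(self._latest)
            self._latest = None
            self._last_send = self._now()
```

So a hundred raw events per tick still costs you one packet per tick, and the user sees the freshest position you had at flush time.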
You can't reliably send, fail, retry, and confirm receipt of a TCP packet in 33ms over arbitrary Internet connections. 8ms is right out. I can get 200us over a local EtherCAT realtime industrial IO network, but that's with careful management of well-isolated single-machine network conditions and it just doesn't work with a cellular modem for download and oversubscribed residential cable for upload. Assuming latency of 120ms (as used in the defaults in the linked tutorial) is much more realistic.
And you also can't set up your system with a 5 second delay to send data every 5 seconds, because any jitter will result in hiccups.
You could set up your system to send out the new data each time the previous buffer is acknowledged, but that's kind of pointless: if you get lucky with a good connection and can send data to be rendered 4.970 to 5.000 seconds from now, what's the difference for the user between doing that and reducing the network load by roughly a factor of 3 by waiting until you have data for the whole 4.900 to 5.000 second window?
I don't need to reliably deliver the TCP packets in 33ms. I just need to be able to start sending a packet every 33ms. Assuming sending, acking, etc. are non-blocking and use few enough resources asynchronously, it's fine.
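Something like this asyncio sketch is what I have in mind: a loop that kicks off a send every 33ms and never waits on the previous one. `transmit`, `outbox`, and the tick length are all placeholders for whatever the real transport looks like.

```python
import asyncio

SEND_INTERVAL = 0.033  # 33 ms tick, matching the rate discussed above


async def send_loop(outbox, transmit, ticks):
    """Start a send every SEND_INTERVAL without blocking on acks.

    `transmit` is a fire-and-forget coroutine standing in for a socket
    write; it is scheduled as a background task, so a slow ack or retry
    never delays the next tick.
    """
    for _ in range(ticks):
        if outbox:
            payload = list(outbox)
            outbox.clear()
            # Fire and forget: delivery, acking, and retries happen
            # concurrently while the loop keeps its 33 ms cadence.
            asyncio.create_task(transmit(payload))
        await asyncio.sleep(SEND_INTERVAL)
```

The point is that round-trip time only affects when acks come back, not how often you can start a send.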
Just for comparison, the default USB mouse polling rate is 125 Hz, so 8 ms. If that is too often, 16 or 32 ms would make sense, which is close to 60 or 30 Hz/fps, respectively.