What Happens to a Downed Node When It Comes Back Into the Cluster?
When a node goes down in a YugabyteDB cluster, the Raft groups whose leaders were on that node hold new leader elections among the remaining tablet peers. Re-election happens within seconds, once enough heartbeats are missed; it is separate from the 15-minute window described below. The physical copy of the data stays on the downed node, so by the time the node returns, its data typically lags behind the rest of the cluster.

For up to 15 minutes (the default), the downed node remains a member of its Raft groups. If it comes back within that window, it simply catches up: the leaders replicate to it all the changes that occurred while it was down.

If the node stays down longer than the default 15 minutes, its tablet replicas are considered failed: new copies of that data are created on the remaining nodes, and the downed node's peers are removed from their quorums. If the node is not replaced promptly, the cluster runs with reduced fault tolerance.

Rolling restarts are a common, planned version of this scenario. They are often performed to apply configuration flag settings that cannot be changed online, and connections to each node are briefly affected as it goes down.
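The windows described above are governed by yb-tserver configuration flags. A minimal sketch of setting them explicitly, using YugabyteDB's documented flag names and default values (shown here for illustration; other required flags such as data directories are omitted):

```shell
# Failure-detection and eviction tuning for a tablet server.
# raft_heartbeat_interval_ms: interval between Raft leader heartbeats (default 500 ms).
# leader_failure_max_missed_heartbeat_periods: missed heartbeat intervals before
#   followers start a new leader election (default 6, i.e. ~3 s with the default interval).
# follower_unavailable_considered_failed_sec: how long a follower can be unreachable
#   before it is evicted and its tablets re-replicated elsewhere (default 900 s = 15 min).
yb-tserver \
  --raft_heartbeat_interval_ms=500 \
  --leader_failure_max_missed_heartbeat_periods=6 \
  --follower_unavailable_considered_failed_sec=900
```

Raising `follower_unavailable_considered_failed_sec` gives slow maintenance operations more headroom before re-replication kicks in, at the cost of running longer with a stale replica.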