In classical machine learning, we collect all data in one place and train the model on a central server. This means every dataset must be moved and stored together. In federated learning, the flow is reversed. The model is sent to where the data already lives. Each site trains the model locally and only the model updates are sent back. No raw data leaves the site. For example, hospitals can train on their own patient records while keeping the data inside their systems, and only share model updates with a central server. This makes it practical to work with private, regulated, or siloed data.
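This reversed flow can be sketched in a few lines of NumPy. Everything here is illustrative, not part of the demo: a linear model stands in for the real one, and the sites' datasets and learning rate are made up. The point of the sketch is that each site's raw `(X, y)` is only ever touched inside the local function; just a weight update travels back.

```python
import numpy as np

# Minimal sketch of the reversed flow (assumed linear model; the data
# and learning rate are illustrative): raw data stays at each site,
# and only a weight update travels back to the server.

def local_update(global_w, X, y, lr=0.1):
    """One local gradient step on a site's private (X, y)."""
    grad = X.T @ (X @ global_w - y) / len(y)
    return -lr * grad                    # the update leaves; (X, y) never does

rng = np.random.default_rng(0)
w_global = np.zeros(3)
# two sites, each holding its own private dataset
sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]

updates = [local_update(w_global, X, y) for X, y in sites]
w_global = w_global + np.mean(updates, axis=0)   # server merges the updates
```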
Follow these steps to experience one full Federated Round using hospitals as the example:
🏥 Local Training: Click this to begin a local training phase. The global model is first sent to all hospitals, and then each hospital trains on its own private patient data. You will see the hospital heatmaps diverge as they learn different local patterns. The global model does not change in this step.
🤝 Merge Models (FedAvg): Click this to send the locally trained model updates to the server and average them. This completes one Federated Round. The Global Model updates, all hospitals are synchronized to this new global model, and a new point is added to the Accuracy Curve.
♻️ Reset Simulation: Click this to start from scratch with a newly initialized global model. All hospital models are reset, and the federated round counter and accuracy history are cleared.
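The three buttons above correspond to the steps in this hedged NumPy sketch of federated averaging. The hospital datasets, the linear model, and the hyperparameters are all invented for illustration; mean squared error stands in for the Accuracy Curve.

```python
import numpy as np

# Hedged sketch of one-button-per-step federated rounds. The hospital
# data, linear model, and hyperparameters are invented for illustration.

rng = np.random.default_rng(42)
w_true = np.array([2.0, -1.0])           # pattern all hospitals share

def make_hospital(n_patients):
    X = rng.normal(size=(n_patients, 2))
    y = X @ w_true + 0.1 * rng.normal(size=n_patients)
    return X, y                          # private data: never leaves the site

hospitals = [make_hospital(n) for n in (40, 60, 100)]

w_global = np.zeros(2)                   # ♻️ freshly initialized global model
history = []                             # stand-in for the Accuracy Curve (MSE)

def train_locally(w, X, y, lr=0.05, epochs=20):
    w = w.copy()                         # start from the received global model
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w                             # only these weights are sent back

for round_num in range(1, 4):            # three full Federated Rounds
    # 🏥 Local Training: each hospital trains on its own patient data
    local_models = [train_locally(w_global, X, y) for X, y in hospitals]
    # 🤝 Merge Models (FedAvg): average local models into a new global model
    w_global = np.mean(local_models, axis=0)
    mse = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in hospitals])
    history.append(mse)                  # one new point per completed round
```

One design note: the original FedAvg algorithm weights the average by each site's dataset size (e.g. `np.average(local_models, axis=0, weights=sizes)`); the plain mean above matches the simple "average them" description of the merge button.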