Keep Keras fit() and Train Your Model Your Way | HackerNoon
Briefly

The article discusses overriding the training step in Keras models to customize training algorithms while still utilizing the convenience of the fit() method. It emphasizes Keras' core principle of progressive disclosure, which lets users gradually delve into more complex workflows without losing the benefits of high-level features. By subclassing keras.Model and overriding the train_step method, users can implement personalized training logic without sacrificing built-in functionalities like callbacks and distribution support. This approach works across different model-building techniques, including Sequential and Functional API models.
A core principle of Keras is progressive disclosure of complexity, allowing users to gain control over their training workflows while retaining the convenience of high-level features.
When customizing fit(), overriding the train_step method of the Model class lets users implement their own learning algorithms while still leveraging fit()'s built-in functionality, such as callbacks, metrics, and distribution support.
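The pattern described above can be sketched as follows. This is a minimal illustration based on the standard Keras example of overriding train_step; the small Functional model, random data, and hyperparameters here are placeholders for demonstration, not part of the original article.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class CustomModel(keras.Model):
    def train_step(self, data):
        # data is whatever fit() receives, here an (x, y) batch
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            # compute_loss applies the loss configured in compile()
            loss = self.compute_loss(y=y, y_pred=y_pred)
        # Custom training logic: compute and apply gradients manually
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {"loss": loss}

# Works with Functional API models (and Sequential) alike
inputs = keras.Input(shape=(8,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

# fit() still handles batching, epochs, callbacks, etc.
history = model.fit(
    np.random.random((32, 8)), np.random.random((32, 1)),
    epochs=2, verbose=0,
)
```

Because only train_step is overridden, everything else fit() provides, from progress reporting to callbacks, continues to work unchanged.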
Read at Hackernoon