KnowledgeDistillationConfig Class¶
The following API can be used to create a KnowledgeDistillationConfig instance, which can be used for post-training quantization using knowledge distillation from a teacher (the float Keras model) to a student (the quantized Keras model).
- class model_compression_toolkit.KnowledgeDistillationConfig(n_iter, optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), log_function=None, train_bias=True, representative_data_gen=None)¶
Configuration to use for quantization with KnowledgeDistillation (experimental).
Initialize a KnowledgeDistillationConfig.
- Parameters
n_iter (int) – Number of iterations to train.
optimizer (OptimizerV2) – Optimizer to use.
log_function (Callable) – Function to log information about the KD process.
train_bias (bool) – Whether to update the bias during the training or not.
representative_data_gen (Callable) – Dataset generator.
Examples
Create a KnowledgeDistillationConfig that runs for 5 iterations and uses a random dataset generator:
>>> import numpy as np
>>> from model_compression_toolkit import KnowledgeDistillationConfig
>>> def repr_datagen(): return [np.random.random((1, 224, 224, 3))]
>>> kd_conf = KnowledgeDistillationConfig(n_iter=5, representative_data_gen=repr_datagen)
An optimizer can be passed:
>>> import tensorflow as tf
>>> kd_conf = KnowledgeDistillationConfig(n_iter=5, representative_data_gen=repr_datagen, optimizer=tf.keras.optimizers.Nadam(learning_rate=0.2))
To disable bias training, set train_bias to False (it is enabled by default):
>>> kd_conf = KnowledgeDistillationConfig(n_iter=5, representative_data_gen=repr_datagen, train_bias=False)
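A log_function can also be supplied to track the KD process. Its call signature is not documented here, so the following is only a hypothetical sketch using a catch-all callable:

>>> def kd_log(*args, **kwargs): print(args, kwargs)  # hypothetical logger; prints whatever the KD loop passes to it
>>> kd_conf = KnowledgeDistillationConfig(n_iter=5, representative_data_gen=repr_datagen, log_function=kd_log)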
The configuration can then be passed to keras_post_training_quantization().
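For completeness, a minimal end-to-end sketch of passing the configuration is shown below. The MobileNet teacher model, the knowledge_distillation_config keyword name, and the return values are illustrative assumptions; see the keras_post_training_quantization() API reference for the exact signature:

>>> import model_compression_toolkit as mct
>>> from tensorflow.keras.applications.mobilenet import MobileNet
>>> model = MobileNet()  # float Keras model acting as the teacher
>>> quantized_model, quantization_info = mct.keras_post_training_quantization(
...     model, repr_datagen, knowledge_distillation_config=kd_conf)  # keyword name assumed for illustration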