This notebook demonstrates training a CARE model for a 2D denoising task, assuming that training data was already generated via 1_datagen.ipynb and saved to disk as data/my_training_data.npz.
Note that training a neural network for actual use should be done with considerably more training time than is used here.
More documentation is available at http://csbdeep.bioimagecomputing.com/doc/.
from __future__ import print_function, unicode_literals, absolute_import, division
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from tifffile import imread
from csbdeep.utils import axes_dict, plot_some, plot_history
from csbdeep.utils.tf import limit_gpu_memory
from csbdeep.io import load_training_data
from csbdeep.models import Config, CARE
Important: As the TensorFlow backend uses all available GPU memory by default, please make sure that all other notebooks that use the GPU (e.g. training/prediction notebooks) are shut down before running this notebook. This can be done via the "Running" tab in the main "Home" notebook server page.
Here we load the data patches generated via 1_datagen.ipynb and split them into 95% actual training data and 5% validation data. The latter is used during model training as an independent indicator of the restoration accuracy. If model performance on the training data is substantially better than on the validation data, the model is overfitting; monitoring the validation performance gives us a chance to detect that.
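A minimal numpy sketch of this shuffle-and-split step (the function name and fixed seed are our own; csbdeep's load_training_data handles this internally, along with axis normalization):

```python
import numpy as np

def split_train_val(X, Y, validation_split=0.05, seed=0):
    # shuffle patch pairs jointly, then hold out a fraction for validation
    idx = np.random.default_rng(seed).permutation(len(X))
    n_val = int(round(len(X) * validation_split))
    val, train = idx[:n_val], idx[n_val:]
    return (X[train], Y[train]), (X[val], Y[val])

# dummy stand-ins for the 4914 patch pairs in the .npz file
# (16x16 instead of 128x128 to keep the example light)
X = np.zeros((4914, 16, 16, 1), dtype=np.float32)
Y = np.zeros_like(X)
(X_train, Y_train), (X_val, Y_val) = split_train_val(X, Y, validation_split=0.05)
```

With a 5% split of 4914 patch pairs, this yields 4668 training and 246 validation pairs, matching the output shown below.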
(X,Y), (X_val,Y_val), axes = load_training_data('data/my_training_data.npz', validation_split=0.05, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
number of training images:   4668
number of validation images: 246
image size (2D):             (128, 128)
axes:                        SYXC
channels in / out:           1 / 1
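axes_dict maps each semantic axis letter to its position in the array, so the channel count can be read off the shape regardless of axis order. A simplified stand-in (not the csbdeep source) illustrating the behavior:

```python
def axes_dict_sketch(axes):
    # map each allowed axis letter to its index in the axes string,
    # or None if absent (simplified stand-in for csbdeep.utils.axes_dict)
    axes = axes.upper()
    return {a: (axes.index(a) if a in axes else None) for a in 'STCZYX'}

ad = axes_dict_sketch('SYXC')
c = ad['C']   # 3: channels are the last axis, so X.shape[c] == n_channel_in
```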
plt.figure(figsize=(12,5))
plot_some(X_val[:5],Y_val[:5])
plt.suptitle('5 example validation patches (top row: source, bottom row: target)');
Before we construct the actual CARE model, we have to define its configuration via a Config object, which includes the parameters of the underlying neural network, the learning rate, the number of training epochs and steps per epoch, the loss function, and whether the model is probabilistic.
The defaults should be sensible in many cases, so a change should only be necessary if the training process fails.
Important: Note that for this notebook we use a very small number of update steps per epoch for immediate feedback, whereas this number should be increased considerably (e.g. train_steps_per_epoch=400) to obtain a well-trained model.
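Each epoch processes train_steps_per_epoch batches of train_batch_size patches, so these two settings determine how much of the training set each epoch sees. A quick back-of-the-envelope check with the numbers used in this notebook:

```python
import math

n_train = 4668                 # training patches from the split above
train_steps_per_epoch = 40
train_batch_size = 8

patches_per_epoch = train_steps_per_epoch * train_batch_size   # 320
epochs_per_full_pass = math.ceil(n_train / patches_per_epoch)  # 15 epochs for one pass
# the recommended train_steps_per_epoch=400 would instead cover
# 400 * 8 = 3200 patches per epoch, i.e. most of the training set
```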
config = Config(axes, n_channel_in, n_channel_out, unet_kern_size=3, train_batch_size=8, train_steps_per_epoch=40)
print(config)
vars(config)
Config(axes='YXC', n_channel_in=1, n_channel_out=1, n_dim=2, probabilistic=False, train_batch_size=8, train_checkpoint='weights_best.h5', train_checkpoint_epoch='weights_now.h5', train_checkpoint_last='weights_last.h5', train_epochs=100, train_learning_rate=0.0004, train_loss='mae', train_reduce_lr={'factor': 0.5, 'patience': 10, 'min_delta': 0}, train_steps_per_epoch=40, train_tensorboard=True, unet_input_shape=(None, None, 1), unet_kern_size=3, unet_last_activation='linear', unet_n_depth=2, unet_n_first=32, unet_residual=True)
{'n_dim': 2, 'axes': 'YXC', 'n_channel_in': 1, 'n_channel_out': 1, 'train_checkpoint': 'weights_best.h5', 'train_checkpoint_last': 'weights_last.h5', 'train_checkpoint_epoch': 'weights_now.h5', 'probabilistic': False, 'unet_residual': True, 'unet_n_depth': 2, 'unet_kern_size': 3, 'unet_n_first': 32, 'unet_last_activation': 'linear', 'unet_input_shape': (None, None, 1), 'train_loss': 'mae', 'train_epochs': 100, 'train_steps_per_epoch': 40, 'train_learning_rate': 0.0004, 'train_batch_size': 8, 'train_tensorboard': True, 'train_reduce_lr': {'factor': 0.5, 'patience': 10, 'min_delta': 0}}
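The train_reduce_lr entry configures Keras' ReduceLROnPlateau callback: when val_loss has not improved by at least min_delta for patience epochs, the learning rate is multiplied by factor (visible around epoch 82 in the training log below, where the rate drops from 4e-4 to 2e-4). A simplified simulation of that rule (our own sketch, not the Keras implementation):

```python
def simulate_reduce_lr(val_losses, lr=4e-4, factor=0.5, patience=10, min_delta=0.0):
    # return the learning rate used in each epoch under a simplified
    # reduce-on-plateau rule (sketch of Keras' ReduceLROnPlateau)
    best = float('inf')
    wait = 0
    lrs = []
    for loss in val_losses:
        lrs.append(lr)
        if loss < best - min_delta:          # improvement: reset the counter
            best, wait = loss, 0
        else:                                # plateau: count and maybe reduce
            wait += 1
            if wait >= patience:
                lr *= factor
                wait = 0
    return lrs

# one improving epoch followed by a flat plateau -> lr halves after `patience` epochs
lrs = simulate_reduce_lr([0.03] * 12)
```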
We now create a CARE model with the chosen configuration:
model = CARE(config, 'my_model', basedir='models')
We can inspect the created neural network:
model.keras_model.summary()
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                    Output Shape               Param #   Connected to
==================================================================================================
 input (InputLayer)              [(None, None, None, 1)]    0         []
 down_level_0_no_0 (Conv2D)      (None, None, None, 32)     320       ['input[0][0]']
 down_level_0_no_1 (Conv2D)      (None, None, None, 32)     9248      ['down_level_0_no_0[0][0]']
 max_0 (MaxPooling2D)            (None, None, None, 32)     0         ['down_level_0_no_1[0][0]']
 down_level_1_no_0 (Conv2D)      (None, None, None, 64)     18496     ['max_0[0][0]']
 down_level_1_no_1 (Conv2D)      (None, None, None, 64)     36928     ['down_level_1_no_0[0][0]']
 max_1 (MaxPooling2D)            (None, None, None, 64)     0         ['down_level_1_no_1[0][0]']
 middle_0 (Conv2D)               (None, None, None, 128)    73856     ['max_1[0][0]']
 middle_2 (Conv2D)               (None, None, None, 64)     73792     ['middle_0[0][0]']
 up_sampling2d (UpSampling2D)    (None, None, None, 64)     0         ['middle_2[0][0]']
 concatenate (Concatenate)       (None, None, None, 128)    0         ['up_sampling2d[0][0]', 'down_level_1_no_1[0][0]']
 up_level_1_no_0 (Conv2D)        (None, None, None, 64)     73792     ['concatenate[0][0]']
 up_level_1_no_2 (Conv2D)        (None, None, None, 32)     18464     ['up_level_1_no_0[0][0]']
 up_sampling2d_1 (UpSampling2D)  (None, None, None, 32)     0         ['up_level_1_no_2[0][0]']
 concatenate_1 (Concatenate)     (None, None, None, 64)     0         ['up_sampling2d_1[0][0]', 'down_level_0_no_1[0][0]']
 up_level_0_no_0 (Conv2D)        (None, None, None, 32)     18464     ['concatenate_1[0][0]']
 up_level_0_no_2 (Conv2D)        (None, None, None, 32)     9248      ['up_level_0_no_0[0][0]']
 conv2d (Conv2D)                 (None, None, None, 1)      33        ['up_level_0_no_2[0][0]']
 add (Add)                       (None, None, None, 1)      0         ['conv2d[0][0]', 'input[0][0]']
 activation (Activation)         (None, None, None, 1)      0         ['add[0][0]']
==================================================================================================
Total params: 332,641
Trainable params: 332,641
Non-trainable params: 0
__________________________________________________________________________________________________
Training the model will likely take some time. We recommend monitoring the progress with TensorBoard (example below), which allows you to inspect the losses during training. Furthermore, you can look at the predictions for some of the validation images, which can be helpful to recognize problems early on.
You can monitor the progress during training with TensorBoard by starting it from the current working directory:
$ tensorboard --logdir=.
Then connect to http://localhost:6006/ with your browser.
history = model.train(X,Y, validation_data=(X_val,Y_val))
Epoch 1/100
WARNING:tensorflow:AutoGraph could not transform <function _mean_or_not.<locals>.<lambda> at 0x7f1b5cbe20d0> and will run it as-is. Cause: could not parse the source code of <function _mean_or_not.<locals>.<lambda> at 0x7f1b5cbe20d0>: found multiple definitions with identical signatures at the location. This error may be avoided by defining each lambda on a single line and with unique argument names. The matching definitions were: Match 0: (lambda x: K.mean(x, axis=(- 1))) Match 1: (lambda x: x) To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
[the same AutoGraph warning is repeated for two further lambdas; omitted]
40/40 [==============================] - 4s 56ms/step - loss: 0.0880 - mse: 0.0171 - mae: 0.0880 - val_loss: 0.0594 - val_mse: 0.0090 - val_mae: 0.0594 - lr: 4.0000e-04 Epoch 2/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0525 - mse: 0.0083 - mae: 0.0525 - val_loss: 0.0410 - val_mse: 0.0066 - val_mae: 0.0410 - lr: 4.0000e-04 Epoch 3/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0418 - mse: 0.0064 - mae: 0.0418 - val_loss: 0.0403 - val_mse: 0.0065 - val_mae: 0.0403 - lr: 4.0000e-04 Epoch 4/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0377 - mse: 0.0051 - mae: 0.0377 - val_loss: 0.0387 - val_mse: 0.0065 - val_mae: 0.0387 - lr: 4.0000e-04 Epoch 5/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0398 - mse: 0.0065 - mae: 0.0398 - val_loss: 0.0352 - val_mse: 0.0058 - val_mae: 0.0352 - lr: 4.0000e-04 Epoch 6/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0332 - mse: 0.0045 - mae: 0.0332 - val_loss: 0.0352 - val_mse: 0.0058 - val_mae: 0.0352 - lr: 4.0000e-04 Epoch 7/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0331 - mse: 0.0042 - mae: 0.0331 - val_loss: 0.0383 - val_mse: 0.0059 - val_mae: 0.0383 - lr: 4.0000e-04 Epoch 8/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0346 - mse: 0.0056 - mae: 0.0346 - val_loss: 0.0365 - val_mse: 0.0054 - val_mae: 0.0365 - lr: 4.0000e-04 Epoch 9/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0407 - mse: 0.0075 - mae: 0.0407 - val_loss: 0.0361 - val_mse: 0.0056 - val_mae: 0.0361 - lr: 4.0000e-04 Epoch 10/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0377 - mse: 0.0059 - mae: 0.0377 - val_loss: 0.0336 - val_mse: 0.0053 - val_mae: 0.0336 - lr: 4.0000e-04 Epoch 
11/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0362 - mse: 0.0057 - mae: 0.0362 - val_loss: 0.0391 - val_mse: 0.0065 - val_mae: 0.0391 - lr: 4.0000e-04 Epoch 12/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0316 - mse: 0.0041 - mae: 0.0316 - val_loss: 0.0327 - val_mse: 0.0053 - val_mae: 0.0327 - lr: 4.0000e-04 Epoch 13/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0347 - mse: 0.0059 - mae: 0.0347 - val_loss: 0.0338 - val_mse: 0.0060 - val_mae: 0.0338 - lr: 4.0000e-04 Epoch 14/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0328 - mse: 0.0050 - mae: 0.0328 - val_loss: 0.0330 - val_mse: 0.0055 - val_mae: 0.0330 - lr: 4.0000e-04 Epoch 15/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0328 - mse: 0.0047 - mae: 0.0328 - val_loss: 0.0327 - val_mse: 0.0054 - val_mae: 0.0327 - lr: 4.0000e-04 Epoch 16/100 40/40 [==============================] - 1s 22ms/step - loss: 0.0320 - mse: 0.0045 - mae: 0.0320 - val_loss: 0.0326 - val_mse: 0.0052 - val_mae: 0.0326 - lr: 4.0000e-04 Epoch 17/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0359 - mse: 0.0061 - mae: 0.0359 - val_loss: 0.0349 - val_mse: 0.0055 - val_mae: 0.0349 - lr: 4.0000e-04 Epoch 18/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0334 - mse: 0.0047 - mae: 0.0334 - val_loss: 0.0334 - val_mse: 0.0057 - val_mae: 0.0334 - lr: 4.0000e-04 Epoch 19/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0334 - mse: 0.0054 - mae: 0.0334 - val_loss: 0.0339 - val_mse: 0.0058 - val_mae: 0.0339 - lr: 4.0000e-04 Epoch 20/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0321 - mse: 0.0050 - mae: 0.0321 - val_loss: 0.0330 - val_mse: 0.0051 - val_mae: 0.0330 - lr: 4.0000e-04 Epoch 21/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0341 - mse: 0.0052 - mae: 0.0341 - val_loss: 0.0355 - val_mse: 0.0048 - val_mae: 0.0355 - lr: 4.0000e-04 
Epoch 22/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0303 - mse: 0.0043 - mae: 0.0303 - val_loss: 0.0344 - val_mse: 0.0057 - val_mae: 0.0344 - lr: 4.0000e-04 Epoch 23/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0336 - mse: 0.0055 - mae: 0.0336 - val_loss: 0.0317 - val_mse: 0.0056 - val_mae: 0.0317 - lr: 4.0000e-04 Epoch 24/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0384 - mse: 0.0076 - mae: 0.0384 - val_loss: 0.0320 - val_mse: 0.0051 - val_mae: 0.0320 - lr: 4.0000e-04 Epoch 25/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0307 - mse: 0.0041 - mae: 0.0307 - val_loss: 0.0316 - val_mse: 0.0056 - val_mae: 0.0316 - lr: 4.0000e-04 Epoch 26/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0308 - mse: 0.0046 - mae: 0.0308 - val_loss: 0.0336 - val_mse: 0.0060 - val_mae: 0.0336 - lr: 4.0000e-04 Epoch 27/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0373 - mse: 0.0067 - mae: 0.0373 - val_loss: 0.0347 - val_mse: 0.0063 - val_mae: 0.0347 - lr: 4.0000e-04 Epoch 28/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0360 - mse: 0.0061 - mae: 0.0360 - val_loss: 0.0355 - val_mse: 0.0059 - val_mae: 0.0355 - lr: 4.0000e-04 Epoch 29/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0292 - mse: 0.0036 - mae: 0.0292 - val_loss: 0.0319 - val_mse: 0.0056 - val_mae: 0.0319 - lr: 4.0000e-04 Epoch 30/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0313 - mse: 0.0049 - mae: 0.0313 - val_loss: 0.0323 - val_mse: 0.0049 - val_mae: 0.0323 - lr: 4.0000e-04 Epoch 31/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0288 - mse: 0.0036 - mae: 0.0288 - val_loss: 0.0307 - val_mse: 0.0053 - val_mae: 0.0307 - lr: 4.0000e-04 Epoch 32/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0354 - mse: 0.0058 - mae: 0.0354 - val_loss: 0.0331 - val_mse: 0.0047 - val_mae: 0.0331 - lr: 
4.0000e-04 Epoch 33/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0298 - mse: 0.0036 - mae: 0.0298 - val_loss: 0.0311 - val_mse: 0.0054 - val_mae: 0.0311 - lr: 4.0000e-04 Epoch 34/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0303 - mse: 0.0041 - mae: 0.0303 - val_loss: 0.0336 - val_mse: 0.0057 - val_mae: 0.0336 - lr: 4.0000e-04 Epoch 35/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0337 - mse: 0.0056 - mae: 0.0337 - val_loss: 0.0311 - val_mse: 0.0052 - val_mae: 0.0311 - lr: 4.0000e-04 Epoch 36/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0355 - mse: 0.0062 - mae: 0.0355 - val_loss: 0.0307 - val_mse: 0.0052 - val_mae: 0.0307 - lr: 4.0000e-04 Epoch 37/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0303 - mse: 0.0043 - mae: 0.0303 - val_loss: 0.0320 - val_mse: 0.0054 - val_mae: 0.0320 - lr: 4.0000e-04 Epoch 38/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0291 - mse: 0.0041 - mae: 0.0291 - val_loss: 0.0307 - val_mse: 0.0053 - val_mae: 0.0307 - lr: 4.0000e-04 Epoch 39/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0329 - mse: 0.0055 - mae: 0.0329 - val_loss: 0.0307 - val_mse: 0.0044 - val_mae: 0.0307 - lr: 4.0000e-04 Epoch 40/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0335 - mse: 0.0056 - mae: 0.0335 - val_loss: 0.0304 - val_mse: 0.0049 - val_mae: 0.0304 - lr: 4.0000e-04 Epoch 41/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0319 - mse: 0.0054 - mae: 0.0319 - val_loss: 0.0303 - val_mse: 0.0049 - val_mae: 0.0303 - lr: 4.0000e-04 Epoch 42/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0304 - mse: 0.0047 - mae: 0.0304 - val_loss: 0.0306 - val_mse: 0.0050 - val_mae: 0.0306 - lr: 4.0000e-04 Epoch 43/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0304 - mse: 0.0046 - mae: 0.0304 - val_loss: 0.0319 - val_mse: 0.0044 - val_mae: 0.0319 - 
lr: 4.0000e-04 Epoch 44/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0284 - mse: 0.0036 - mae: 0.0284 - val_loss: 0.0306 - val_mse: 0.0053 - val_mae: 0.0306 - lr: 4.0000e-04 Epoch 45/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0323 - mse: 0.0053 - mae: 0.0323 - val_loss: 0.0299 - val_mse: 0.0048 - val_mae: 0.0299 - lr: 4.0000e-04 Epoch 46/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0368 - mse: 0.0068 - mae: 0.0368 - val_loss: 0.0308 - val_mse: 0.0048 - val_mae: 0.0308 - lr: 4.0000e-04 Epoch 47/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0293 - mse: 0.0038 - mae: 0.0293 - val_loss: 0.0306 - val_mse: 0.0048 - val_mae: 0.0306 - lr: 4.0000e-04 Epoch 48/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0271 - mse: 0.0033 - mae: 0.0271 - val_loss: 0.0301 - val_mse: 0.0051 - val_mae: 0.0301 - lr: 4.0000e-04 Epoch 49/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0291 - mse: 0.0041 - mae: 0.0291 - val_loss: 0.0306 - val_mse: 0.0054 - val_mae: 0.0306 - lr: 4.0000e-04 Epoch 50/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0296 - mse: 0.0042 - mae: 0.0296 - val_loss: 0.0300 - val_mse: 0.0043 - val_mae: 0.0300 - lr: 4.0000e-04 Epoch 51/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0290 - mse: 0.0040 - mae: 0.0290 - val_loss: 0.0296 - val_mse: 0.0048 - val_mae: 0.0296 - lr: 4.0000e-04 Epoch 52/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0355 - mse: 0.0061 - mae: 0.0355 - val_loss: 0.0313 - val_mse: 0.0042 - val_mae: 0.0313 - lr: 4.0000e-04 Epoch 53/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0326 - mse: 0.0049 - mae: 0.0326 - val_loss: 0.0298 - val_mse: 0.0045 - val_mae: 0.0298 - lr: 4.0000e-04 Epoch 54/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0319 - mse: 0.0050 - mae: 0.0319 - val_loss: 0.0320 - val_mse: 0.0057 - val_mae: 0.0320 
- lr: 4.0000e-04 Epoch 55/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0348 - mse: 0.0064 - mae: 0.0348 - val_loss: 0.0324 - val_mse: 0.0052 - val_mae: 0.0324 - lr: 4.0000e-04 Epoch 56/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0299 - mse: 0.0040 - mae: 0.0299 - val_loss: 0.0309 - val_mse: 0.0050 - val_mae: 0.0309 - lr: 4.0000e-04 Epoch 57/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0314 - mse: 0.0053 - mae: 0.0314 - val_loss: 0.0304 - val_mse: 0.0053 - val_mae: 0.0304 - lr: 4.0000e-04 Epoch 58/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0300 - mse: 0.0042 - mae: 0.0300 - val_loss: 0.0302 - val_mse: 0.0047 - val_mae: 0.0302 - lr: 4.0000e-04 Epoch 59/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0300 - mse: 0.0042 - mae: 0.0300 - val_loss: 0.0293 - val_mse: 0.0048 - val_mae: 0.0293 - lr: 4.0000e-04 Epoch 60/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0292 - mse: 0.0042 - mae: 0.0292 - val_loss: 0.0304 - val_mse: 0.0053 - val_mae: 0.0304 - lr: 4.0000e-04 Epoch 61/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0296 - mse: 0.0044 - mae: 0.0296 - val_loss: 0.0297 - val_mse: 0.0048 - val_mae: 0.0297 - lr: 4.0000e-04 Epoch 62/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0255 - mse: 0.0029 - mae: 0.0255 - val_loss: 0.0294 - val_mse: 0.0049 - val_mae: 0.0294 - lr: 4.0000e-04 Epoch 63/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0302 - mse: 0.0046 - mae: 0.0302 - val_loss: 0.0292 - val_mse: 0.0046 - val_mae: 0.0292 - lr: 4.0000e-04 Epoch 64/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0318 - mse: 0.0049 - mae: 0.0318 - val_loss: 0.0306 - val_mse: 0.0054 - val_mae: 0.0306 - lr: 4.0000e-04 Epoch 65/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0318 - mse: 0.0054 - mae: 0.0318 - val_loss: 0.0332 - val_mse: 0.0045 - val_mae: 
0.0332 - lr: 4.0000e-04 Epoch 66/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0344 - mse: 0.0059 - mae: 0.0344 - val_loss: 0.0312 - val_mse: 0.0052 - val_mae: 0.0312 - lr: 4.0000e-04 Epoch 67/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0302 - mse: 0.0044 - mae: 0.0302 - val_loss: 0.0298 - val_mse: 0.0041 - val_mae: 0.0298 - lr: 4.0000e-04 Epoch 68/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0330 - mse: 0.0052 - mae: 0.0330 - val_loss: 0.0309 - val_mse: 0.0054 - val_mae: 0.0309 - lr: 4.0000e-04 Epoch 69/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0283 - mse: 0.0034 - mae: 0.0283 - val_loss: 0.0293 - val_mse: 0.0048 - val_mae: 0.0293 - lr: 4.0000e-04 Epoch 70/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0313 - mse: 0.0047 - mae: 0.0313 - val_loss: 0.0296 - val_mse: 0.0051 - val_mae: 0.0296 - lr: 4.0000e-04 Epoch 71/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0297 - mse: 0.0045 - mae: 0.0297 - val_loss: 0.0300 - val_mse: 0.0043 - val_mae: 0.0300 - lr: 4.0000e-04 Epoch 72/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0295 - mse: 0.0041 - mae: 0.0295 - val_loss: 0.0287 - val_mse: 0.0046 - val_mae: 0.0287 - lr: 4.0000e-04 Epoch 73/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0340 - mse: 0.0056 - mae: 0.0340 - val_loss: 0.0296 - val_mse: 0.0047 - val_mae: 0.0296 - lr: 4.0000e-04 Epoch 74/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0294 - mse: 0.0039 - mae: 0.0294 - val_loss: 0.0288 - val_mse: 0.0044 - val_mae: 0.0288 - lr: 4.0000e-04 Epoch 75/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0334 - mse: 0.0051 - mae: 0.0334 - val_loss: 0.0304 - val_mse: 0.0053 - val_mae: 0.0304 - lr: 4.0000e-04 Epoch 76/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0288 - mse: 0.0040 - mae: 0.0288 - val_loss: 0.0308 - val_mse: 0.0053 - 
val_mae: 0.0308 - lr: 4.0000e-04 Epoch 77/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0270 - mse: 0.0034 - mae: 0.0270 - val_loss: 0.0289 - val_mse: 0.0043 - val_mae: 0.0289 - lr: 4.0000e-04 Epoch 78/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0261 - mse: 0.0026 - mae: 0.0261 - val_loss: 0.0306 - val_mse: 0.0057 - val_mae: 0.0306 - lr: 4.0000e-04 Epoch 79/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0327 - mse: 0.0052 - mae: 0.0327 - val_loss: 0.0404 - val_mse: 0.0061 - val_mae: 0.0404 - lr: 4.0000e-04 Epoch 80/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0345 - mse: 0.0061 - mae: 0.0345 - val_loss: 0.0299 - val_mse: 0.0050 - val_mae: 0.0299 - lr: 4.0000e-04 Epoch 81/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0355 - mse: 0.0066 - mae: 0.0355 - val_loss: 0.0318 - val_mse: 0.0045 - val_mae: 0.0318 - lr: 4.0000e-04 Epoch 82/100 37/40 [==========================>...] - ETA: 0s - loss: 0.0298 - mse: 0.0041 - mae: 0.0298 Epoch 82: ReduceLROnPlateau reducing learning rate to 0.00019999999494757503. 
40/40 [==============================] - 1s 21ms/step - loss: 0.0292 - mse: 0.0040 - mae: 0.0292 - val_loss: 0.0305 - val_mse: 0.0053 - val_mae: 0.0305 - lr: 4.0000e-04 Epoch 83/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0266 - mse: 0.0037 - mae: 0.0266 - val_loss: 0.0282 - val_mse: 0.0043 - val_mae: 0.0282 - lr: 2.0000e-04 Epoch 84/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0292 - mse: 0.0043 - mae: 0.0292 - val_loss: 0.0294 - val_mse: 0.0051 - val_mae: 0.0294 - lr: 2.0000e-04 Epoch 85/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0323 - mse: 0.0058 - mae: 0.0323 - val_loss: 0.0281 - val_mse: 0.0038 - val_mae: 0.0281 - lr: 2.0000e-04 Epoch 86/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0287 - mse: 0.0040 - mae: 0.0287 - val_loss: 0.0299 - val_mse: 0.0048 - val_mae: 0.0299 - lr: 2.0000e-04 Epoch 87/100 40/40 [==============================] - 1s 22ms/step - loss: 0.0305 - mse: 0.0045 - mae: 0.0305 - val_loss: 0.0300 - val_mse: 0.0046 - val_mae: 0.0300 - lr: 2.0000e-04 Epoch 88/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0287 - mse: 0.0047 - mae: 0.0287 - val_loss: 0.0293 - val_mse: 0.0050 - val_mae: 0.0293 - lr: 2.0000e-04 Epoch 89/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0293 - mse: 0.0040 - mae: 0.0293 - val_loss: 0.0295 - val_mse: 0.0047 - val_mae: 0.0295 - lr: 2.0000e-04 Epoch 90/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0301 - mse: 0.0045 - mae: 0.0301 - val_loss: 0.0285 - val_mse: 0.0041 - val_mae: 0.0285 - lr: 2.0000e-04 Epoch 91/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0296 - mse: 0.0047 - mae: 0.0296 - val_loss: 0.0284 - val_mse: 0.0042 - val_mae: 0.0284 - lr: 2.0000e-04 Epoch 92/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0278 - mse: 0.0037 - mae: 0.0278 - val_loss: 0.0278 - val_mse: 0.0038 - val_mae: 0.0278 - lr: 2.0000e-04 Epoch 
93/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0300 - mse: 0.0045 - mae: 0.0300 - val_loss: 0.0292 - val_mse: 0.0044 - val_mae: 0.0292 - lr: 2.0000e-04 Epoch 94/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0284 - mse: 0.0039 - mae: 0.0284 - val_loss: 0.0285 - val_mse: 0.0043 - val_mae: 0.0285 - lr: 2.0000e-04 Epoch 95/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0325 - mse: 0.0053 - mae: 0.0325 - val_loss: 0.0289 - val_mse: 0.0039 - val_mae: 0.0289 - lr: 2.0000e-04 Epoch 96/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0281 - mse: 0.0039 - mae: 0.0281 - val_loss: 0.0279 - val_mse: 0.0042 - val_mae: 0.0279 - lr: 2.0000e-04 Epoch 97/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0272 - mse: 0.0035 - mae: 0.0272 - val_loss: 0.0293 - val_mse: 0.0048 - val_mae: 0.0293 - lr: 2.0000e-04 Epoch 98/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0255 - mse: 0.0027 - mae: 0.0255 - val_loss: 0.0293 - val_mse: 0.0049 - val_mae: 0.0293 - lr: 2.0000e-04 Epoch 99/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0267 - mse: 0.0037 - mae: 0.0267 - val_loss: 0.0285 - val_mse: 0.0038 - val_mae: 0.0285 - lr: 2.0000e-04 Epoch 100/100 40/40 [==============================] - 1s 21ms/step - loss: 0.0295 - mse: 0.0043 - mae: 0.0295 - val_loss: 0.0290 - val_mse: 0.0046 - val_mae: 0.0290 - lr: 2.0000e-04 Loading network weights from 'weights_best.h5'.
Plot final training history (available in TensorBoard during training):
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss'],['mse','val_mse','mae','val_mae']);
['loss', 'lr', 'mae', 'mse', 'val_loss', 'val_mae', 'val_mse']
plt.figure(figsize=(20,12))
_P = model.keras_model.predict(X_val[:5])
if config.probabilistic:
    _P = _P[...,:(_P.shape[-1]//2)]
plot_some(X_val[:5],Y_val[:5],_P,pmax=99.5)
plt.suptitle('5 example validation patches\n'
             'top row: input (source), '
             'middle row: target (ground truth), '
             'bottom row: predicted from source');
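Since the model trained here is not probabilistic, the if branch above is skipped. For a probabilistic model the network predicts twice as many channels as the target (per-pixel distribution parameters, e.g. mean and scale), and the slicing keeps only the first half for display. A small numpy illustration with hypothetical shapes:

```python
import numpy as np

# hypothetical probabilistic prediction: 5 patches, 128x128 pixels,
# 2 output channels = (mean, scale) for a single-channel target
P = np.random.rand(5, 128, 128, 2)
means = P[..., :P.shape[-1] // 2]   # first half of the channel axis
scales = P[..., P.shape[-1] // 2:]  # second half
```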
The trained model can be exported for use with the CSBDeep Fiji plugins; see https://github.com/CSBDeep/CSBDeep_website/wiki/Your-Model-in-Fiji for details.
model.export_TF()
WARNING:tensorflow:From /home/uwe/sw/miniconda3/envs/ws/lib/python3.8/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:203: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info. INFO:tensorflow:No assets to save. INFO:tensorflow:No assets to write. INFO:tensorflow:SavedModel written to: /tmp/tmpenfv2ywj/model/saved_model.pb Model exported in TensorFlow's SavedModel format: /home/uwe/research/csbdeep/examples/examples/denoising2D/models/my_model/TF_SavedModel.zip
***IMPORTANT NOTE*** You are using 'tensorflow' 2.x, hence it is likely that the exported model *will not work* in associated ImageJ/Fiji plugins (e.g. CSBDeep and StarDist). If you indeed have problems loading the exported model in Fiji, the current workaround is to load the trained model in a Python environment with installed 'tensorflow' 1.x and then export it again. If you need help with this, please read: https://gist.github.com/uschmidt83/4b747862fe307044c722d6d1009f6183