Say I use an L1 activity regularizer on the outputs of my autoencoder, implemented in Keras using:

    model.get_layer("pool3_encoder").activity_regularizer = tf.keras.regularizers.l1(0.01)

And let's say my loss function is standard MSE, compiled using:

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule), loss='mse')

This will print the MSE and the regularizer penalty as one total loss. How can I split this into two separate components, so I can save weights whenever the MSE is lowest? Is this even possible? I know the network is attempting to minimize the sum of the two, but it would be nice to be able to tell when the MSE itself is actually minimized.

Here is the model:

    def Conv1DTranspose(input_tensor, filters, kernel_size, name, activation,
                        strides=2, padding='same'):
        """
        input_tensor: tensor with the shape (batch_size, time_steps, dims)
        filters: int, output dimension, i.e. the output tensor will have the
            shape (batch_size, time_steps, filters)
        kernel_size: int, size of the convolution kernel
        """
        x = Lambda(lambda x: K.expand_dims(x, axis=2))(input_tensor)
        x = Conv2DTranspose(filters=filters, kernel_size=(kernel_size, 1),
                            strides=(strides, 1), padding=padding, name=name,
                            activation=activation,
                            kernel_initializer='he_uniform')(x)
        x = Lambda(lambda x: K.squeeze(x, axis=2))(x)  # drop the dummy axis again
        return x
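The Conv1DTranspose helper above works by adding a dummy spatial axis, running a 2-D transposed convolution over it, and squeezing the axis back out. A minimal numpy sketch of the shape bookkeeping (the sizes 8, 50, 16 and 32 are made up for illustration; with padding='same', a transposed convolution's output length is the input length times the stride):

```python
import numpy as np

# Hypothetical sizes, chosen only to trace the shapes through the helper.
batch, time_steps, dims = 8, 50, 16
filters, strides = 32, 2

x = np.zeros((batch, time_steps, dims))
x4 = x[:, :, np.newaxis, :]       # K.expand_dims(x, axis=2) -> (batch, time, 1, dims)

# Conv2DTranspose with kernel (k, 1), strides (2, 1), padding='same'
# upsamples only the time axis; with 'same' padding the output length
# equals input length * stride.
out_time = time_steps * strides
y4 = np.zeros((batch, out_time, 1, filters))

y = y4[:, :, 0, :]                # K.squeeze(..., axis=2) -> (batch, 2*time, filters)
print(y.shape)                    # (8, 100, 32)
```

Note the time axis is doubled by strides=2, so the output is (batch_size, strides * time_steps, filters) rather than keeping time_steps unchanged.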
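On the loss question: the single "loss" number Keras prints is the reconstruction MSE plus the L1 activity penalty on the regularized layer. A small numpy sketch (all values hypothetical) of how that total decomposes into the two parts:

```python
import numpy as np

# Hypothetical targets, predictions, and encoder activations.
y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[1.5, 2.0], [2.0, 4.5]])
encoded = np.array([[0.2, -0.1], [0.0, 0.3]])  # activations of the regularized layer
l1_weight = 0.01

mse = np.mean((y_true - y_pred) ** 2)             # the 'mse' part of the loss
l1_penalty = l1_weight * np.sum(np.abs(encoded))  # the activity_regularizer part
total_loss = mse + l1_penalty                     # the single number Keras prints

print(mse, l1_penalty, total_loss)  # 0.375 0.006 0.381
```

In Keras itself, one common approach (an assumption here, not part of the original question) is to pass metrics=['mse'] to model.compile, so training logs report the plain MSE alongside the combined loss, and then have tf.keras.callbacks.ModelCheckpoint(monitor='val_mse', save_best_only=True) track that metric instead of 'loss' when saving weights.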