I'm getting the same issue (using Keras and TensorFlow); any help would be greatly appreciated.
My image directory is laid out as folder/folder/images for both the training and testing data.
What's going on in the code:
I wrote a loop to test different depths/nb_layers in a ResNet, as well as some hyperparameters like learning rate, batch size, etc. The test ran through depths 4, 6, 8, 10, and so on up to 20, then failed with "output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None".
I don't understand why it works for a handful of iterations and then fails.
I read here that I should update Keras to 2.0, but my boss told me not to change the Keras version; I'm on 1.2.0.
I read here that I should convert all labels to a numpy array, but the Keras documentation says this already happens when using class_mode='categorical' in flow_from_directory.
Then I read here that I should put my train_generator inside a function, create an infinite while loop, and yield the result, but that just reloads the data over and over at the start of the program ("Found 350 images belonging to 7 classes" repeated 10 times) and then raises the error "output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: <keras.preprocessing.image.DirectoryIterator object at 0x0000000063494BE0>"
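To make that concrete, here is a minimal sketch of the two ways of wrapping it (the paths, names, and sizes are placeholders, not my real code): the first yields the DirectoryIterator object itself, which matches the "Found: <DirectoryIterator ...>" error, while the second yields actual (x, y) batches.

```python
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)

# Wrapper that yields the DirectoryIterator object itself, so fit_generator
# receives something that is neither (x, y) nor (x, y, sample_weight).
def generator_wrapper_wrong(data_dir):  # data_dir is a placeholder path
    while True:
        yield datagen.flow_from_directory(data_dir, target_size=(224, 224),
                                          batch_size=10, class_mode='categorical')

# Wrapper that builds the iterator once and yields batches from it.
# y_batch is already a one-hot numpy array when class_mode='categorical'.
def generator_wrapper_right(data_dir):
    iterator = datagen.flow_from_directory(data_dir, target_size=(224, 224),
                                           batch_size=10, class_mode='categorical')
    while True:
        yield next(iterator)  # an (x_batch, y_batch) tuple
```

That said, flow_from_directory already returns something fit_generator can consume directly, which is why I was passing train_generator straight to fit_generator in the original code below.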
Here's my stack trace for the original error:
Traceback (most recent call last):
File "", line 1, in
runfile('K:/Manufacturing Operations/Yield/Tools_Yield/PythonScripts/AI/ISL_DI/Resnet/resISL_Depth.py', wdir='K:/Manufacturing Operations/Yield/Tools_Yield/PythonScripts/AI/ISL_DI/Resnet')
File "C:\Users\paul\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Users\paul.\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "K:/Manufacturing Operations/Yield/Tools_Yield/PythonScripts/AI/ISL_DI/Resnet/resISL_Depth.py", line 233, in
callbacks=callbacks_list)
File "C:\Users\paul\AppData\Roaming\Python\Python35\site-packages\keras\engine\training.py", line 1481, in fit_generator
str(generator_output))
ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None
Here's the code (everything except the variable definitions):
```python
# imports implied by the snippet (assumed; the original post omits them --
# basepath, pathlist, layers and retrainings are also defined elsewhere)
import math
import os
import tensorflow as tf
from keras.optimizers import SGD
from keras.callbacks import EarlyStopping, LearningRateScheduler
from keras.preprocessing.image import ImageDataGenerator
import resnet_iter

rep = 0
#####################################
# new model loop
for i in range(retrainings + 1):
    #lr_init = [5, 1, .1, .01]
    while rep != len(layers) - 1:
        lr_init = [5, 1]
        for lr_val in lr_init:
            decay_init = .1
            epochs_drop = 20
            patience = 60

            # learning rate schedule
            def step_decay(epoch):
                initial_lrate = lr_val
                drop = 0.1
                epochs_drop = 60
                lrate = initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
                #print('\nLR: {:.6f}\n'.format(lrate))
                return lrate

            momentum_init = 0.9
            sgd = SGD(lr=lr_val, decay=decay_init, momentum=momentum_init, nesterov=False)
            ## reduce learning rate when loss has stopped improving
            #lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1), cooldown=0, patience=5, min_lr=0.5e-6)
            ## stop training when accuracy has stopped improving
            early_stopper = EarlyStopping(monitor='val_acc', min_delta=0.001, patience=50)
            #csv_logger = CSVLogger('resnet18_cifar10.csv')
            repetitions = 3
            #epochs = [105]
            epochs = [200]
            drop_out = [0]
            #batchsize = [2, 4, 8, 10]
            batchsize = [2, 5, 10]
            zoom = [0]
            shear = [0]
            channelshift = [0]
            featurewise = [False]
            samplewise = [False]
            rotation = [0]
            nb_train_samples = 350
            nb_validation_samples = 140
            colormode = 'rgb'
            # input image dimensions
            img_width, img_height = 224, 224
            nb_classes = 7
            img_channels = 3
            for epoch_val in epochs:
                for dropout_val in drop_out:
                    for batchsize_val in batchsize:
                        for zoom_val in zoom:
                            for shear_val in shear:
                                for channelshift_val in channelshift:
                                    for featurewise_val in featurewise:
                                        for samplewise_val in samplewise:
                                            for rotation_val in rotation:
                                                for r in range(repetitions):
                                                    # np.random.seed(7)
                                                    # tf.set_random_seed(7)
                                                    train_data_dir = basepath + pathlist[0]
                                                    validation_data_dir = basepath + pathlist[1]
                                                    #############################################
                                                    #############################################
                                                    params = {}
                                                    params['epochs'] = epoch_val
                                                    params['drop_out'] = dropout_val
                                                    params['batchsize'] = batchsize_val
                                                    params['zoom'] = zoom_val
                                                    params['shear'] = shear_val
                                                    params['channelshift'] = channelshift_val
                                                    params['featurewise'] = featurewise_val
                                                    params['samplewise'] = samplewise_val
                                                    params['rotation'] = rotation_val
                                                    params['lr_init'] = lr_val
                                                    params['momentum_init'] = momentum_init
                                                    params['decay_init'] = decay_init
                                                    params['epochs_drop'] = epochs_drop
                                                    params['img_size'] = list([img_width, img_height])
                                                    params['patience'] = patience
                                                    total = 0
                                                    currentlayer = [i * 2 for i in layers[rep]]
                                                    total = sum(currentlayer) + 2
                                                    savefilename = 'resnet_' + str(total) + '_BKM_lr_' + str(lr_val) + '_batchSize_' + str(batchsize_val) + '_repetition' + str((r + 1)) + '_Study'
                                                    total = 0
                                                    with tf.device('/gpu:0'):
                                                        model = resnet_iter.ResnetBuilder.build_resnet_34((img_channels, img_width, img_height), nb_classes, layers[rep])
                                                        model.compile(loss='categorical_crossentropy',
                                                                      optimizer=sgd,
                                                                      metrics=['accuracy'])
                                                    train_datagen = ImageDataGenerator(
                                                        featurewise_center=False,  # set input mean to 0 over the dataset
                                                        samplewise_center=False,  # set each sample mean to 0
                                                        featurewise_std_normalization=featurewise_val,  # divide inputs by std of the dataset
                                                        samplewise_std_normalization=samplewise_val,  # divide each input by its std
                                                        zca_whitening=False,  # apply ZCA whitening
                                                        channel_shift_range=channelshift_val,  # VGG set to 0
                                                        fill_mode="reflect",  # VGG set to reflect
                                                        rotation_range=rotation_val,  # randomly rotate images in the range (degrees, 0 to 180)
                                                        rescale=1. / 255,  # VGG set to 1./255
                                                        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width) - VGG set to 0
                                                        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height) - VGG set to 0
                                                        shear_range=shear_val,  # VGG set to 0
                                                        zoom_range=zoom_val,  # VGG set to 0.1
                                                        horizontal_flip=True,  # randomly flip images
                                                        vertical_flip=True)  # randomly flip images, VGG set to True
                                                    test_datagen = ImageDataGenerator(rescale=1. / 255)
                                                    train_generator = train_datagen.flow_from_directory(
                                                        train_data_dir,
                                                        target_size=(img_width, img_height),
                                                        batch_size=batchsize_val,
                                                        shuffle=True,
                                                        color_mode=colormode,
                                                        class_mode='categorical')
                                                    validation_generator = test_datagen.flow_from_directory(
                                                        validation_data_dir,
                                                        target_size=(img_width, img_height),
                                                        batch_size=batchsize_val,
                                                        shuffle=True,
                                                        color_mode=colormode,
                                                        class_mode='categorical')
                                                    lrate = LearningRateScheduler(step_decay)
                                                    callbacks_list = [lrate, early_stopper]
                                                    try:
                                                        A = model.fit_generator(
                                                            train_generator,
                                                            samples_per_epoch=nb_train_samples,
                                                            nb_epoch=epoch_val,
                                                            validation_data=validation_generator,
                                                            nb_val_samples=nb_validation_samples,
                                                            callbacks=callbacks_list)
                                                    except:
                                                        # note: this bare except hides the original exception; str() is
                                                        # needed here, otherwise the print itself raises a TypeError
                                                        print("train_generator: " + str(train_generator))
                                                        print("train_data_dir: " + train_data_dir)
                                                        files = os.listdir(train_data_dir)
                                                        print(len(files))
```
Anyway, try it without the generator, using the fit function on a subset of your dataset. Using fit_generator hides the true error, because the worker processes' output buffers are not flushed when they die.
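For example, something like this rough sketch (reusing train_generator, batchsize_val and model from your snippet) pulls a few batches into plain arrays and trains with fit, so any exception in the data pipeline shows up directly:

```python
import numpy as np

# Grab a handful of batches from the existing generator into in-memory arrays.
x_chunks, y_chunks = [], []
for _ in range(10):                      # 10 batches is an arbitrary subset size
    x_batch, y_batch = next(train_generator)
    x_chunks.append(x_batch)
    y_chunks.append(y_batch)
x_small = np.concatenate(x_chunks)
y_small = np.concatenate(y_chunks)

# Train on the subset; errors are no longer hidden behind the generator thread,
# and you can sanity-check shapes and dtypes directly.
print(x_small.shape, y_small.shape)
model.fit(x_small, y_small,
          batch_size=batchsize_val,
          nb_epoch=5,                    # Keras 1.x argument name
          validation_split=0.2)
```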
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.