RuntimeError #401
I added the "--workers" option when running the command to work around this issue, but a new error message appeared:
dataset_root: lmdb/training
I executed the following command:
python train.py --train_data lmdb/training --valid_data lmdb/validation --select_data MJ-ST --batch_ratio 0.5-0.5 --Transformation None --FeatureExtraction VGG --SequenceModeling BiLSTM --Prediction CTC --saved_model None-VGG-BiLSTM-CTC.pth --num_iter 2000 --valInterval 20 --FT --data_filtering_off
The execution result is as follows:
dataset_root: lmdb/training
opt.select_data: ['MJ', 'ST']
opt.batch_ratio: ['0.5', '0.5']
dataset_root: lmdb/training dataset: MJ
sub-directory: /MJ num samples: 1000
num total samples of MJ: 1000 x 1.0 (total_data_usage_ratio) = 1000
num samples of MJ per batch: 192 x 0.5 (batch_ratio) = 96
Traceback (most recent call last):
File "train.py", line 317, in
train(opt)
File "train.py", line 31, in train
train_dataset = Batch_Balanced_Dataset(opt)
File "C:\Users\user\deep-text-recognition-benchmark-master\dataset.py", line 69, in init
self.dataloader_iter_list.append(iter(_data_loader))
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\site-packages\torch\utils\data\dataloader.py", line 435, in iter
return self._get_iterator()
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\site-packages\torch\utils\data\dataloader.py", line 381, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\site-packages\torch\utils\data\dataloader.py", line 1034, in init
w.start()
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
reduction.dump(process_obj, to_child)
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle Environment objects
(EasyOCR) C:\Users\user\deep-text-recognition-benchmark-master>Traceback (most recent call last):
File "", line 1, in
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\user\anaconda3\envs\EasyOCR\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
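For reference, here is a minimal sketch of what appears to be failing (not from the repo; the path and open arguments are placeholders). On Windows, DataLoader worker processes are created with the "spawn" start method, which pickles the Dataset object, including any lmdb Environment that dataset.py opened in its constructor:

```python
# Minimal sketch of the pickling failure; the lmdb path is a placeholder.
import pickle
import lmdb

env = lmdb.open('lmdb/training/MJ', readonly=True, lock=False,
                readahead=False, meminit=False)

try:
    pickle.dumps(env)   # same serialization step the spawned worker start performs
except TypeError as err:
    print(err)          # -> can't pickle Environment objects
```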
How can I resolve this issue?
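One possible workaround is to open the lmdb environment lazily inside each worker process instead of in __init__, so the Dataset that gets pickled for the spawned workers holds nothing unpicklable. The sketch below is only an illustration, assuming the repo's usual lmdb layout ('num-samples', 'image-%09d', and 'label-%09d' keys); it is not the repository's actual LmdbDataset:

```python
from torch.utils.data import Dataset
import lmdb

class LazyLmdbDataset(Dataset):
    """Hypothetical variant of an lmdb-backed dataset: the environment is
    opened on first use inside each worker process, not in __init__, so the
    object pickled for the spawned workers contains no lmdb Environment."""

    def __init__(self, root):
        self.root = root
        self.env = None                      # opened lazily, per process

    def _open(self):
        if self.env is None:
            self.env = lmdb.open(self.root, readonly=True, lock=False,
                                 readahead=False, meminit=False)
        return self.env

    def __len__(self):
        with self._open().begin(write=False) as txn:
            return int(txn.get('num-samples'.encode()))

    def __getitem__(self, index):
        with self._open().begin(write=False) as txn:
            img_key = 'image-%09d'.encode() % (index + 1)
            label_key = 'label-%09d'.encode() % (index + 1)
            return txn.get(img_key), txn.get(label_key)   # raw bytes; decode as needed
```

A simpler check, assuming the repo's --workers option maps to the DataLoader's num_workers as dataset.py suggests, is to pass --workers 0 so that data loading stays in the main process and nothing has to be pickled at all.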