Issue
I am using the MMSegmentation library to train a model for instance image segmentation. During training, I create the model (a Vision Transformer), and when I try to fit it with the code shown under "Tried" below, I get this error:
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 81, in collate
    for key in batch[0]
  File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 81, in <dictcomp>
    for key in batch[0]
  File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 59, in collate
    stacked.append(default_collate(padded_samples))
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [1, 256, 256, 256] at entry 0 and [1, 256, 256] at entry 3
I must also mention that I have tested my own dataset with other models available in the library, and all of them work properly.
Tried:

from mmseg.models import build_segmentor
from mmseg.apis import train_segmentor

model = build_segmentor(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
train_segmentor(model, datasets, cfg, distributed=False, validate=True, meta=dict())
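One way to find which sample breaks collation before launching training is to iterate over the built dataset directly. This is a rough sketch, assuming the default MMSegmentation train pipeline, where DefaultFormatBundle wraps each tensor in a DataContainer and the expected ground-truth shape is [1, H, W]:

from collections import Counter

shape_counts = Counter()
for i in range(len(datasets[0])):
    sample = datasets[0][i]  # dict produced by the train pipeline
    seg = sample['gt_semantic_seg'].data  # tensor inside the DataContainer
    shape_counts[tuple(seg.shape)] += 1
    if seg.dim() != 3:  # expected [1, H, W]; a 4-D entry means an extra channel axis
        print(f'sample {i}: unexpected ground-truth shape {tuple(seg.shape)}')
print(shape_counts)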
Solution
It seems that the images in your dataset might not all have the same size: the ViT model (https://arxiv.org/abs/2010.11929) you are using relies on MLP layers rather than convolutions, so it expects inputs of a fixed size.
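A common remedy is to make the training pipeline emit fixed-size samples. Below is a minimal sketch of such a pipeline; the transform names are standard MMSegmentation ones, but the 512x512 size and the normalization statistics are illustrative assumptions that you should match to your ViT config:

crop_size = (512, 512)  # illustrative; use the input size your ViT config expects
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(512, 512), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    # ImageNet statistics, as in the stock configs; adjust if yours differ
    dict(type='Normalize', mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375], to_rgb=True),
    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
cfg.data.train.pipeline = train_pipeline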
If that is not the case, it is worth checking whether your labels all have the expected dimensions; the traceback shows one ground-truth tensor of shape [1, 256, 256, 256] and another of shape [1, 256, 256], which suggests that some annotations carry an extra channel dimension. Presumably, MMSegmentation expects the ground truth to be just the annotation map (a 2-D array of class indices). It is recommended that you revise your dataset and prepare the annotation maps accordingly.
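To check the labels, here is a minimal sketch (assuming your annotations are image files under a hypothetical ann_dir, and using Pillow and NumPy) that flags any mask that is not a plain 2-D array:

import os
import numpy as np
from PIL import Image

ann_dir = 'data/my_dataset/ann_dir'  # hypothetical path; point this at your annotations
for name in sorted(os.listdir(ann_dir)):
    mask = np.array(Image.open(os.path.join(ann_dir, name)))
    if mask.ndim != 2:  # e.g. an (H, W, 3) RGB mask instead of an (H, W) label map
        print(f'{name}: shape {mask.shape} is not a 2-D annotation map')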
Answered By - AmirMasoud