
Multi GPUs training problem #1

Closed
Leesoon1984 opened this issue May 16, 2024 · 6 comments

@Leesoon1984

Traceback (most recent call last):
  File "/home/mnt/lee/Med/CrossMatch/ACDC/train_cross_match.py", line 420, in <module>
    main()
  File "/home/mnt/lee/Med/CrossMatch/ACDC/train_cross_match.py", line 183, in main
    for i, (
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
           ^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data
    return self._process_data(data)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
    data.reraise()
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/_utils.py", line 694, in reraise
    raise exception
OSError: Caught OSError in DataLoader worker process 11.
Original Traceback (most recent call last):
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
            ~~~~~~~~~~~~^^^^^
  File "/home/mnt/lee/Med/CrossMatch/ACDC/dataset/acdc.py", line 37, in __getitem__
    sample = h5py.File(os.path.join(self.root, id), 'r')
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/h5py/_hl/files.py", line 562, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/h5py/_hl/files.py", line 235, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 102, in h5py.h5f.open
OSError: [Errno 9] Unable to synchronously open file (unable to lock file, errno = 9, error message = 'Bad file descriptor')

@AiEson AiEson closed this as completed May 16, 2024
@AiEson AiEson reopened this May 16, 2024
@AiEson
Owner

AiEson commented May 16, 2024

I ran through all the steps in the README and found no problems.
One possible cause is that you did not clone the repository correctly, leaving some files corrupted.

bash scripts/train.sh 1 12333                                                                                                                                (CrossMatch)
[2024-05-16 17:19:25,385][    INFO] {'T': 1.0,
 'batch_size': 12,
 'conf_thresh': 0.95,
 'config': 'configs/acdc.yaml',
 'crop_size': 256,
 'data_root': './dataset/datasets/ACDC',
 'dataset': 'acdc',
 'drop_rate': 0.5,
 'epochs': 300,
 'eta': 0.3,
 'labeled_id_path': 'splits/acdc/3/labeled.txt',
 'local_rank': 0,
 'lr': 0.01,
 'nclass': 4,
 'ngpus': 1,
 'port': 12333,
 'save_path': 'exp/acdc/train_cross_match_cross_match/unet/3/eta_0.3',
 'unlabeled_id_path': 'splits/acdc/3/unlabeled.txt',
 'use_threshold_relax': False}

[2024-05-16 17:19:25,466][    INFO] Total params: 1.8M

[2024-05-16 17:19:26,427][    INFO] ===========> Epoch: 0, LR: 0.01000, Previous best: 0.00, TH: 0.9500
[2024-05-16 17:19:32,535][    INFO] Iters: 0, Total loss: 0.821, Loss x: 1.175, Loss s: 0.693, Loss KD: 0.595, Mask ratio: 0.000
[2024-05-16 17:19:36,077][    INFO] Iters: 12, Total loss: 0.834, Loss x: 1.011, Loss s: 0.752, Loss KD: 0.738, Mask ratio: 0.001
fish: Job 1, 'bash scripts/train.sh 1 12333' has stopped

@AiEson AiEson closed this as completed May 16, 2024
@Leesoon1984
Author

Leesoon1984 commented May 16, 2024

The same error also occurs after training has been running for a while, at epoch 62:

[2024-05-16 16:12:13,033][    INFO] ===========> Epoch: 62, LR: 0.00812, Previous best: 81.34, TH: 0.9500
[2024-05-16 16:12:17,256][    INFO] Iters: 0, Total loss: 0.068, Loss x: 0.038, Loss s: 0.051, Loss KD: 0.115, Mask ratio: 0.977
[2024-05-16 16:12:18,553][    INFO] Iters: 3, Total loss: 0.048, Loss x: 0.040, Loss s: 0.031, Loss KD: 0.075, Mask ratio: 0.979
[2024-05-16 16:12:19,844][    INFO] Iters: 6, Total loss: 0.055, Loss x: 0.041, Loss s: 0.035, Loss KD: 0.089, Mask ratio: 0.978
Traceback (most recent call last):
  File "/home/mnt/lee/Med/CrossMatch/ACDC/train_cross_match.py", line 420, in <module>
    main()
  File "/home/mnt/lee/Med/CrossMatch/ACDC/train_cross_match.py", line 183, in main
    for i, (
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
           ^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data
    return self._process_data(data)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
    data.reraise()
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/_utils.py", line 694, in reraise
    raise exception
OSError: Caught OSError in DataLoader worker process 9.
Original Traceback (most recent call last):
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
            ~~~~~~~~~~~~^^^^^
  File "/home/mnt/lee/Med/CrossMatch/ACDC/dataset/acdc.py", line 37, in __getitem__
    sample = h5py.File(os.path.join(self.root, id), 'r')
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/h5py/_hl/files.py", line 562, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/h5py/_hl/files.py", line 235, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 102, in h5py.h5f.open
OSError: [Errno 9] Unable to synchronously open file (unable to lock file, errno = 9, error message = 'Bad file descriptor')

[2024-05-16 16:12:22,257] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 30 closing signal SIGTERM
[2024-05-16 16:12:22,258] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 32 closing signal SIGTERM
[2024-05-16 16:12:22,258] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 33 closing signal SIGTERM
[2024-05-16 16:12:22,377] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 31) of binary: /home/mnt/lee/miniconda3/envs/CrossMatch/bin/python
Traceback (most recent call last):
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/bin/torchrun", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/distributed/run.py", line 806, in main
    run(args)
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/distributed/run.py", line 797, in run
    elastic_launch(
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mnt/lee/miniconda3/envs/CrossMatch/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
train_cross_match.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-05-16_16:12:22
  host      : pt-r5qx9yk4-worker-0.pt-r5qx9yk4.ns-sensetime2023-cb5b8530.svc.cluster.local
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 31)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

@AiEson
Owner

AiEson commented May 17, 2024

Please run the script below to check whether every file in your dataset can be opened and read:

import h5py
import glob
import tqdm

ACDC_DIR = '/home/mnt/lee/Med/CrossMatch/ACDC/dataset/datasets/ACDC'

print("Checking the dataset... type: Slices")
for file_name in tqdm.tqdm(glob.glob(f'{ACDC_DIR}/data/slices/*.h5')):
    with h5py.File(file_name, 'r') as sample:
        img = sample['image'][:]
        mask = sample['label'][:]
print("Finished!")

print("Checking the dataset... type: 3D Images")
for file_name in tqdm.tqdm(glob.glob(f'{ACDC_DIR}/data/*.h5')):
    with h5py.File(file_name, 'r') as sample:
        img = sample['image'][:]
        mask = sample['label'][:]
print("Finished!")

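If this script completes in a single process, the files themselves are intact and the locking error comes from multiple DataLoader workers opening the same HDF5 files concurrently; if it raises the same OSError, the dataset files are damaged.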

@AiEson AiEson reopened this May 17, 2024
@Leesoon1984
Author

[image: output of the dataset check script]

@Leesoon1984
Author

h5py/h5py#1101

@AiEson
Owner

AiEson commented May 17, 2024

h5py/h5py#1101

Thanks for providing the solution.
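
For anyone else hitting this: the workaround discussed in h5py/h5py#1101 is to disable HDF5 file locking before h5py is first imported. A minimal sketch (the file path is illustrative; h5py 3.5+ alternatively accepts locking=False on h5py.File):

import os

# HDF5 reads this variable when the library loads, so it must be set
# before the first `import h5py` anywhere in the process (e.g. at the
# top of train_cross_match.py).
os.environ['HDF5_USE_FILE_LOCKING'] = 'FALSE'

import h5py

# The DataLoader workers only ever read the files, so skipping the
# POSIX lock that failed with errno 9 is safe here.
with h5py.File('path/to/sample.h5', 'r') as f:  # illustrative path
    img = f['image'][:]
    mask = f['label'][:]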

@AiEson AiEson closed this as completed May 17, 2024
@AiEson AiEson pinned this issue May 27, 2024