Can't run on Mac M1 - Cannot convert a MPS Tensor to float64 #1533

Closed
J-Iszatt opened this issue Dec 8, 2023 · 16 comments · Fixed by #1618

Comments

@J-Iszatt

J-Iszatt commented Dec 8, 2023

Describe the bug

I get a "Cannot convert a MPS Tensor to float64" when running the train.py script on a Mac M1

It seems that the Mac GPU interface can't handle 64bit tensors.. I am unsure where to cast the tensor or how to properly do it but from what I can tell data is loaded in ligthining_fabric/apply_func.py.
I tried changing stuff to "data_output= data.type(torch.float32).to(device, **kwargs)" (~line 95) but this does not work.
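
For reference, a minimal sketch of the kind of cast I was attempting, done via the Lightning transfer hook instead of patching lightning_fabric directly (the mixin name and the dict-shaped batch are my assumptions, not anomalib code):

import torch
from lightning_fabric.utilities.apply_func import move_data_to_device

class MpsFloat32Mixin:
    """Mix into the anomaly module to downcast float64 tensors before MPS transfer."""

    def transfer_batch_to_device(self, batch, device, dataloader_idx):
        # MPS has no float64 support, so cast those tensors to float32 first.
        if device.type == "mps" and isinstance(batch, dict):
            batch = {
                key: value.float()
                if isinstance(value, torch.Tensor) and value.dtype == torch.float64
                else value
                for key, value in batch.items()
            }
        return move_data_to_device(batch, device)
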
Looking forward to any help :)

Regards JI

Dataset

MVTec

Model

PADiM

Steps to reproduce the behavior

On a Mac M1 / Apple Silicon:

  • Install as described in the how-to
  • Load the dataset and put it in the correct folder
  • Run train.py

OS information

  • OS: Mac OS Ventura 13.5
  • Python version: 3.10.13
  • Anomalib version: 1.0dev
  • PyTorch version: 2.1.1
  • GPU models and configuration: MPS

Expected behavior

A working training run, to get going with this cool lib

Screenshots

No response

Pip/GitHub

GitHub

What version/branch did you use?

1.0dev

Configuration YAML

dataset:
  name: mvtec
  format: mvtec
  path: ./datasets/MVTec
  category: bottle
  task: segmentation
  train_batch_size: 32
  eval_batch_size: 32
  num_workers: 8
  image_size: 256 # dimensions to which images are resized (mandatory)
  center_crop: null # dimensions to which images are center-cropped after resizing (optional)
  normalization: imagenet # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: padim
  backbone: resnet18
  pre_trained: true
  layers:
    - layer1
    - layer2
    - layer3
  normalization_method: min_max # options: [none, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    method: adaptive #options: [adaptive, manual]
    manual_image: null
    manual_pixel: null

visualization:
  show_images: True # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 42
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null # options: torch, onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 1
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0 # Don't validate before extracting features.
  log_every_n_steps: 50
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle

Logs

python tools/train.py --config src/anomalib/models/padim/custom_config.yaml
/Users/justiniszatt/Desktop/Programming/python/anomalib_env/anomalib/src/anomalib/config/config.py:280: UserWarning: config.project.unique_dir is set to False. This does not ensure that your results will be written in an empty directory and you may overwrite files.
  warn(
Global seed set to 42
2023-12-08 17:43:18,916 - anomalib.data - INFO - Loading the datamodule
2023-12-08 17:43:18,917 - anomalib.data.utils.transform - INFO - No config file has been provided. Using default transforms.
2023-12-08 17:43:18,917 - anomalib.data.utils.transform - INFO - No config file has been provided. Using default transforms.
2023-12-08 17:43:18,917 - anomalib.models - INFO - Loading the model.
2023-12-08 17:43:18,917 - anomalib.models.components.base.anomaly_module - INFO - Initializing PadimLightning model.
/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `PrecisionRecallCurve` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
2023-12-08 17:43:18,920 - anomalib.models.components.feature_extractors.timm - WARNING - FeatureExtractor is deprecated. Use TimmFeatureExtractor instead. Both FeatureExtractor and TimmFeatureExtractor will be removed in a future release.
2023-12-08 17:43:19,209 - timm.models.helpers - INFO - Loading pretrained weights from url (https://download.pytorch.org/models/resnet18-5c106cde.pth)
2023-12-08 17:43:19,294 - anomalib.utils.loggers - INFO - Loading the experiment logger(s)
2023-12-08 17:43:19,294 - anomalib.utils.callbacks - INFO - Loading the callbacks
/Users/justiniszatt/Desktop/Programming/python/anomalib_env/anomalib/src/anomalib/utils/callbacks/__init__.py:153: UserWarning: Export option: None not found. Defaulting to no model export
  warnings.warn(f"Export option: {config.optimization.export_mode} not found. Defaulting to no model export")
2023-12-08 17:43:19,324 - pytorch_lightning.utilities.rank_zero - INFO - GPU available: True (mps), used: True
2023-12-08 17:43:19,324 - pytorch_lightning.utilities.rank_zero - INFO - TPU available: False, using: 0 TPU cores
2023-12-08 17:43:19,324 - pytorch_lightning.utilities.rank_zero - INFO - IPU available: False, using: 0 IPUs
2023-12-08 17:43:19,324 - pytorch_lightning.utilities.rank_zero - INFO - HPU available: False, using: 0 HPUs
2023-12-08 17:43:19,324 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used..
2023-12-08 17:43:19,324 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_val_batches=1.0)` was configured so 100% of the batches will be used..
2023-12-08 17:43:19,324 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..
2023-12-08 17:43:19,324 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_predict_batches=1.0)` was configured so 100% of the batches will be used..
2023-12-08 17:43:19,324 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(val_check_interval=1.0)` was configured so validation will run at the end of the training epoch..
2023-12-08 17:43:19,324 - anomalib - INFO - Training the model.
2023-12-08 17:43:19,327 - anomalib.data.mvtec - INFO - Found the dataset.
/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `ROC` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:613: UserWarning: Checkpoint directory results/padim/mvtec/bottle/run/weights/lightning exists and is not empty.
  rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py:183: UserWarning: `LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer
  rank_zero_warn(
2023-12-08 17:43:19,428 - pytorch_lightning.callbacks.model_summary - INFO - 
  | Name                  | Type                     | Params
-------------------------------------------------------------------
0 | image_threshold       | AnomalyScoreThreshold    | 0     
1 | pixel_threshold       | AnomalyScoreThreshold    | 0     
2 | model                 | PadimModel               | 2.8 M 
3 | image_metrics         | AnomalibMetricCollection | 0     
4 | pixel_metrics         | AnomalibMetricCollection | 0     
5 | normalization_metrics | MinMax                   | 0     
-------------------------------------------------------------------
2.8 M     Trainable params
0         Non-trainable params
2.8 M     Total params
11.131    Total estimated model params size (MB)
Epoch 0:   0%|                                           | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/anomalib/tools/train.py", line 79, in <module>
    train(args)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/anomalib/tools/train.py", line 64, in train
    trainer.fit(model=model, datamodule=datamodule)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
    call._call_and_handle_interrupt(
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1112, in _run
    results = self._run_stage()
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1191, in _run_stage
    self._run_train()
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1214, in _run_train
    self.fit_loop.run()
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 267, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 187, in advance
    batch = next(data_fetcher)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/utilities/fetching.py", line 184, in __next__
    return self.fetching_function()
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/utilities/fetching.py", line 275, in fetching_function
    return self.move_to_device(batch)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/utilities/fetching.py", line 294, in move_to_device
    batch = self.batch_to_device(batch)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 261, in batch_to_device
    batch = self.trainer._call_strategy_hook("batch_to_device", batch, dataloader_idx=0)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1494, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 273, in batch_to_device
    return model._apply_batch_transfer_handler(batch, device=device, dataloader_idx=dataloader_idx)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 342, in _apply_batch_transfer_handler
    batch = self._call_batch_hook("transfer_batch_to_device", batch, device, dataloader_idx)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 330, in _call_batch_hook
    return trainer_method(hook_name, *args)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1356, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/pytorch_lightning/core/hooks.py", line 632, in transfer_batch_to_device
    return move_data_to_device(batch, device)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/lightning_fabric/utilities/apply_func.py", line 102, in move_data_to_device
    return apply_to_collection(batch, dtype=_TransferableDataType, function=batch_to)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/lightning_utilities/core/apply_func.py", line 72, in apply_to_collection
    return _apply_to_collection_slow(
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/lightning_utilities/core/apply_func.py", line 104, in _apply_to_collection_slow
    v = _apply_to_collection_slow(
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/lightning_utilities/core/apply_func.py", line 96, in _apply_to_collection_slow
    return function(data, *args, **kwargs)
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/lib/python3.10/site-packages/lightning_fabric/utilities/apply_func.py", line 95, in batch_to
    data_output = data.to(device, **kwargs)
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
Epoch 0:   0%|          | 0/10 [00:17<?, ?it/s]

@blaz-r
Contributor

blaz-r commented Dec 9, 2023

You could try changing the accelerator inside the trainer section of the config from auto to cpu; maybe that solves the issue.
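
That is, in the trainer section of the config above:

trainer:
  accelerator: cpu # instead of auto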

@J-Iszatt
Author

J-Iszatt commented Dec 9, 2023

Hello blaz-r,

First of all, thanks for the tip! That was exactly the hint I needed ;)

While it made the model run for a bit, I got a follow-up crash with a reshape problem:

    return visualization.generate()
  File "/Users/justiniszatt/Desktop/Programming/python/anomalib_env/anomalib/src/anomalib/post_processing/visualizer.py", line 287, in generate
    img = img.reshape(self.figure.canvas.get_width_height()[::-1] + (3,))
ValueError: cannot reshape array of size 15000000 into shape (500,2500,3)

I already played around with the visualisation part of the config but had no luck so far; any tips here would also be appreciated.

Regarding the MPS, it still might be interesting to investigate further, as the processing times are suboptimal on CPU.
I tried to convert some of the tensors to float32 but had no luck at all.

@jasonjin34

I had the same visualization error when I tried to log image results out. I have tried to run the same code in a Linux environment, and everything works just fine.

If you change the config file as follows, you should be able to get some image results out:

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: False # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: simple

I am using macOS 12.3.1 with a miniconda env.

@jasonjin34

By using the MPS accelerator, we might have to modify the default datamodule to typecast all the data from the default torch.int64 to torch.int32. Not sure if it is worth it; a rough sketch follows.
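
Something like this hypothetical helper (not existing anomalib code), applied to each batch before it reaches the device:

import torch

# dtypes the MPS backend rejects or handles poorly, and their replacements
MPS_CASTS = {torch.float64: torch.float32, torch.int64: torch.int32}

def downcast_batch_for_mps(batch: dict) -> dict:
    """Return a copy of the batch with float64/int64 tensors downcast."""
    return {
        key: value.to(MPS_CASTS[value.dtype])
        if isinstance(value, torch.Tensor) and value.dtype in MPS_CASTS
        else value
        for key, value in batch.items()
    }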

@samet-akcay
Contributor

Just got my new MacBook and can confirm that this is indeed a problem :D

@samet-akcay linked pull request #1618 on Jan 10, 2024 that will close this issue
@samet-akcay
Contributor

samet-akcay commented Jan 10, 2024

I've addressed the float64 issue in #1618, but the visualisation issue #1621 is still there. I have not seen this issue on Windows/Linux or on my old Mac. It needs more investigation.

@jahad9819jjj

jahad9819jjj commented Jan 16, 2024

@samet-akcay

but the visualisation issue #1621 is still there.

Was this solved?
I switched the repository to the v1 branch like below:

git checkout v1
git fetch
git pull

and I executed:

anomalib train --model Patchcore --data anomalib.data.MVTec 

but I got the same error (visualization issue #1621).

@samet-akcay
Contributor

Hi @jahad9819jjj, just to clarify, did you get the visualisation issue or the tensor-to-float64 issue? We have fixed the tensor-to-float64 issue, but the visualisation issue is still there.

@jahad9819jjj

jahad9819jjj commented Jan 16, 2024

@samet-akcay
Sorry, this is not just the visualization issue; I got these errors:

float64 error
dyld[29303]: Assertion failed: (this->magic == kMagic), function matchesPath, file Loader.cpp, line 154.

Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x12cb30ee0>
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1479, in __del__
    self._shutdown_workers()
  File "/opt/homebrew/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1443, in _shutdown_workers
    w.join(timeout=_utils.MP_STATUS_CHECK_INTERVAL)
  File "/opt/homebrew/Cellar/[email protected]/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 149, in join
    res = self._popen.wait(timeout)
  File "/opt/homebrew/Cellar/[email protected]/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/popen_fork.py", line 40, in wait
    if not wait([self.sentinel], timeout):
  File "/opt/homebrew/Cellar/[email protected]/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/connection.py", line 931, in wait
    ready = selector.select(timeout)
  File "/opt/homebrew/Cellar/[email protected]/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/selectors.py", line 416, in select
    fd_event_list = self._selector.poll(timeout)
  File "/opt/homebrew/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 29303) is killed by signal: Abort trap: 6. 
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/homebrew/bin/anomalib:8 in <module>                                                         │
│                                                                                                  │
│   5 from anomalib.cli.cli import main                                                            │
│   6 if __name__ == '__main__':                                                                   │
│   7 │   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])                         │
│ ❱ 8 │   sys.exit(main())                                                                         │
│   9                                                                                              │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/cli/cli.py:376 in main                               │
│                                                                                                  │
│   373 def main() -> None:                                                                        │
│   374 │   """Trainer via Anomalib CLI."""                                                        │
│   375 │   configure_logger()                                                                     │
│ ❱ 376 │   AnomalibCLI()                                                                          │
│   377                                                                                            │
│   378                                                                                            │
│   379 if __name__ == "__main__":                                                                 │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/cli/cli.py:64 in __init__                            │
│                                                                                                  │
│    61 │   │   run: bool = True,                                                                  │
│    62 │   │   auto_configure_optimizers: bool = True,                                            │
│    63 │   ) -> None:                                                                             │
│ ❱  64 │   │   super().__init__(                                                                  │
│    65 │   │   │   AnomalyModule,                                                                 │
│    66 │   │   │   AnomalibDataModule,                                                            │
│    67 │   │   │   save_config_callback,                                                          │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/cli.py:386 in __init__              │
│                                                                                                  │
│   383 │   │   self.instantiate_classes()                                                         │
│   384 │   │                                                                                      │
│   385 │   │   if self.subcommand is not None:                                                    │
│ ❱ 386 │   │   │   self._run_subcommand(self.subcommand)                                          │
│   387 │                                                                                          │
│   388 │   def _setup_parser_kwargs(self, parser_kwargs: Dict[str, Any]) -> Tuple[Dict[str, Any   │
│   389 │   │   subcommand_names = self.subcommands().keys()                                       │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/cli/cli.py:294 in _run_subcommand                    │
│                                                                                                  │
│   291 │   │   if self.config["subcommand"] in (*self.subcommands(), "train", "export", "predic   │
│   292 │   │   │   fn = getattr(self.engine, subcommand)                                          │
│   293 │   │   │   fn_kwargs = self._prepare_subcommand_kwargs(subcommand)                        │
│ ❱ 294 │   │   │   fn(**fn_kwargs)                                                                │
│   295 │   │   else:                                                                              │
│   296 │   │   │   self.config_init = self.parser.instantiate_classes(self.config)                │
│   297 │   │   │   getattr(self, f"{subcommand}")()                                               │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/engine/engine.py:478 in train                        │
│                                                                                                  │
│   475 │   │   """                                                                                │
│   476 │   │   self._setup_trainer(model)                                                         │
│   477 │   │   self._setup_dataset_task(train_dataloaders, val_dataloaders, test_dataloaders, d   │
│ ❱ 478 │   │   self.trainer.fit(model, train_dataloaders, val_dataloaders, datamodule, ckpt_pat   │
│   479 │   │   self.trainer.test(model, test_dataloaders, ckpt_path=ckpt_path, datamodule=datam   │
│   480 │                                                                                          │
│   481 │   def export(                                                                            │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:544 in fit       │
│                                                                                                  │
│    541 │   │   self.state.fn = TrainerFn.FITTING                                                 │
│    542 │   │   self.state.status = TrainerStatus.RUNNING                                         │
│    543 │   │   self.training = True                                                              │
│ ❱  544 │   │   call._call_and_handle_interrupt(                                                  │
│    545 │   │   │   self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule,  │
│    546 │   │   )                                                                                 │
│    547                                                                                           │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py:44 in               │
│ _call_and_handle_interrupt                                                                       │
│                                                                                                  │
│    41 │   try:                                                                                   │
│    42 │   │   if trainer.strategy.launcher is not None:                                          │
│    43 │   │   │   return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer,    │
│ ❱  44 │   │   return trainer_fn(*args, **kwargs)                                                 │
│    45 │                                                                                          │
│    46 │   except _TunerExitException:                                                            │
│    47 │   │   _call_teardown_hook(trainer)                                                       │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:580 in _fit_impl │
│                                                                                                  │
│    577 │   │   │   model_provided=True,                                                          │
│    578 │   │   │   model_connected=self.lightning_module is not None,                            │
│    579 │   │   )                                                                                 │
│ ❱  580 │   │   self._run(model, ckpt_path=ckpt_path)                                             │
│    581 │   │                                                                                     │
│    582 │   │   assert self.state.stopped                                                         │
│    583 │   │   self.training = False                                                             │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:989 in _run      │
│                                                                                                  │
│    986 │   │   # ----------------------------                                                    │
│    987 │   │   # RUN THE TRAINER                                                                 │
│    988 │   │   # ----------------------------                                                    │
│ ❱  989 │   │   results = self._run_stage()                                                       │
│    990 │   │                                                                                     │
│    991 │   │   # ----------------------------                                                    │
│    992 │   │   # POST-Training CLEAN UP                                                          │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:1035 in          │
│ _run_stage                                                                                       │
│                                                                                                  │
│   1032 │   │   │   with isolate_rng():                                                           │
│   1033 │   │   │   │   self._run_sanity_check()                                                  │
│   1034 │   │   │   with torch.autograd.set_detect_anomaly(self._detect_anomaly):                 │
│ ❱ 1035 │   │   │   │   self.fit_loop.run()                                                       │
│   1036 │   │   │   return None                                                                   │
│   1037 │   │   raise RuntimeError(f"Unexpected state {self.state}")                              │
│   1038                                                                                           │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py:202 in run        │
│                                                                                                  │
│   199 │   │   while not self.done:                                                               │
│   200 │   │   │   try:                                                                           │
│   201 │   │   │   │   self.on_advance_start()                                                    │
│ ❱ 202 │   │   │   │   self.advance()                                                             │
│   203 │   │   │   │   self.on_advance_end()                                                      │
│   204 │   │   │   │   self._restarting = False                                                   │
│   205 │   │   │   except StopIteration:                                                          │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py:359 in advance    │
│                                                                                                  │
│   356 │   │   │   )                                                                              │
│   357 │   │   with self.trainer.profiler.profile("run_training_epoch"):                          │
│   358 │   │   │   assert self._data_fetcher is not None                                          │
│ ❱ 359 │   │   │   self.epoch_loop.run(self._data_fetcher)                                        │
│   360 │                                                                                          │
│   361 │   def on_advance_end(self) -> None:                                                      │
│   362 │   │   trainer = self.trainer                                                             │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py:137 in │
│ run                                                                                              │
│                                                                                                  │
│   134 │   │   while not self.done:                                                               │
│   135 │   │   │   try:                                                                           │
│   136 │   │   │   │   self.advance(data_fetcher)                                                 │
│ ❱ 137 │   │   │   │   self.on_advance_end(data_fetcher)                                          │
│   138 │   │   │   │   self._restarting = False                                                   │
│   139 │   │   │   except StopIteration:                                                          │
│   140 │   │   │   │   break                                                                      │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py:285 in │
│ on_advance_end                                                                                   │
│                                                                                                  │
│   282 │   │   │   │   # clear gradients to not leave any unused memory during validation         │
│   283 │   │   │   │   call._call_lightning_module_hook(self.trainer, "on_validation_model_zero   │
│   284 │   │   │                                                                                  │
│ ❱ 285 │   │   │   self.val_loop.run()                                                            │
│   286 │   │   │   self.trainer.training = True                                                   │
│   287 │   │   │   self.trainer._logger_connector._first_loop_iter = first_loop_iter              │
│   288                                                                                            │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/utilities.py:182 in           │
│ _decorator                                                                                       │
│                                                                                                  │
│   179 │   │   else:                                                                              │
│   180 │   │   │   context_manager = torch.no_grad                                                │
│   181 │   │   with context_manager():                                                            │
│ ❱ 182 │   │   │   return loop_run(self, *args, **kwargs)                                         │
│   183 │                                                                                          │
│   184 │   return _decorator                                                                      │
│   185                                                                                            │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/evaluation_loop.py:113 in run │
│                                                                                                  │
│   110 │   │   if self.skip:                                                                      │
│   111 │   │   │   return []                                                                      │
│   112 │   │   self.reset()                                                                       │
│ ❱ 113 │   │   self.on_run_start()                                                                │
│   114 │   │   data_fetcher = self._data_fetcher                                                  │
│   115 │   │   assert data_fetcher is not None                                                    │
│   116 │   │   previous_dataloader_idx = 0                                                        │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/evaluation_loop.py:243 in     │
│ on_run_start                                                                                     │
│                                                                                                  │
│   240 │   │   hooks."""                                                                          │
│   241 │   │   self._verify_dataloader_idx_requirement()                                          │
│   242 │   │   self._on_evaluation_model_eval()                                                   │
│ ❱ 243 │   │   self._on_evaluation_start()                                                        │
│   244 │   │   self._on_evaluation_epoch_start()                                                  │
│   245 │                                                                                          │
│   246 │   def on_run_end(self) -> List[_OUT_DICT]:                                               │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/evaluation_loop.py:289 in     │
│ _on_evaluation_start                                                                             │
│                                                                                                  │
│   286 │   │                                                                                      │
│   287 │   │   hook_name = "on_test_start" if trainer.testing else "on_validation_start"          │
│   288 │   │   call._call_callback_hooks(trainer, hook_name, *args, **kwargs)                     │
│ ❱ 289 │   │   call._call_lightning_module_hook(trainer, hook_name, *args, **kwargs)              │
│   290 │   │   call._call_strategy_hook(trainer, hook_name, *args, **kwargs)                      │
│   291 │                                                                                          │
│   292 │   def _on_evaluation_model_eval(self) -> None:                                           │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py:157 in              │
│ _call_lightning_module_hook                                                                      │
│                                                                                                  │
│   154 │   pl_module._current_fx_name = hook_name                                                 │
│   155 │                                                                                          │
│   156 │   with trainer.profiler.profile(f"[LightningModule]{pl_module.__class__.__name__}.{hoo   │
│ ❱ 157 │   │   output = fn(*args, **kwargs)                                                       │
│   158 │                                                                                          │
│   159 │   # restore current_fx when nested context                                               │
│   160 │   pl_module._current_fx_name = prev_fx_name                                              │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/models/components/base/memory_bank_module.py:37 in   │
│ on_validation_start                                                                              │
│                                                                                                  │
│   34 │   def on_validation_start(self) -> None:                                                  │
│   35 │   │   """Ensure that the model is fitted before validation starts."""
│   36 │   │   if not self._is_fitted:                                                             │
│ ❱ 37 │   │   │   self.fit()                                                                      │
│   38 │   │   │   self._is_fitted = torch.tensor([True])                                          │
│   39                                                                                             │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/models/image/patchcore/lightning_model.py:95 in fit  │
│                                                                                                  │
│    92 │   │   embeddings = torch.vstack(self.embeddings)                                         │
│    93 │   │                                                                                      │
│    94 │   │   logger.info("Applying core-set subsampling to get the embedding.")                 │
│ ❱  95 │   │   self.model.subsample_embedding(embeddings, self.coreset_sampling_ratio)            │
│    96 │                                                                                          │
│    97 │   def validation_step(self, batch: dict[str, str | torch.Tensor], *args, **kwargs) ->    │
│    98 │   │   """Get batch of anomaly maps from input image batch.                               │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/models/image/patchcore/torch_model.py:153 in         │
│ subsample_embedding                                                                              │
│                                                                                                  │
│   150 │   │   """
│   151 │   │   # Coreset Subsampling                                                              │
│   152 │   │   sampler = KCenterGreedy(embedding=embedding, sampling_ratio=sampling_ratio)        │
│ ❱ 153 │   │   coreset = sampler.sample_coreset()                                                 │
│   154 │   │   self.memory_bank = coreset                                                         │
│   155 │                                                                                          │
│   156 │   @staticmethod                                                                          │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/models/components/sampling/k_center_greedy.py:127 in │
│ sample_coreset                                                                                   │
│                                                                                                  │
│   124 │   │   │   >>> coreset.shape                                                              │
│   125 │   │   │   torch.Size([219, 1536])                                                        │
│   126 │   │   """                                                                                │
│ ❱ 127 │   │   idxs = self.select_coreset_idxs(selected_idxs)                                     │
│   128 │   │   return self.embedding[idxs]                                                        │
│   129                                                                                            │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/models/components/sampling/k_center_greedy.py:90 in  │
│ select_coreset_idxs                                                                              │
│                                                                                                  │
│    87 │   │   │   selected_idxs = []                                                             │
│    88 │   │                                                                                      │
│    89 │   │   if self.embedding.ndim == 2:                                                       │
│ ❱  90 │   │   │   self.model.fit(self.embedding)                                                 │
│    91 │   │   │   self.features = self.model.transform(self.embedding)                           │
│    92 │   │   │   self.reset_distances()                                                         │
│    93 │   │   else:                                                                              │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/models/components/dimensionality_reduction/random_pr │
│ ojection.py:136 in fit                                                                           │
│                                                                                                  │
│   133 │   │   # (Could not run 'aten::empty_strided' with arguments from the 'SparseCsrCUDA' b   │
│   134 │   │   # hence sparse matrix is stored as a dense matrix on the device                    │
│   135 │   │   # self.sparse_random_matrix = self._sparse_random_matrix(n_features=n_features).   │
│ ❱ 136 │   │   self.sparse_random_matrix = self._sparse_random_matrix(n_features=n_features).to   │
│   137 │   │                                                                                      │
│   138 │   │   return self                                                                        │
│   139                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
Epoch 0: 100%|██████████| 7/7 [00:13<00:00,  0.53it/s, v_num=5]   

I modified this like below, but I am not sure whether this is appropriate or not :(
src/anomalib/models/components/dimensionality_reduction/random_projection.py

        if self.device.type == "mps":
            self.sparse_random_matrix = self._sparse_random_matrix(n_features=n_features).to(device, dtype=torch.float32)

Then I executed it and got the errors below:

anomalib train --model Patchcore --data anomalib.data.MVTec 
visualization error
ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/m1/qnfmlldx0ql5wt_hc_5054fw0000gn/T/org.python.python.savedState
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/homebrew/bin/anomalib:8 in <module>                                                         │
│                                                                                                  │
│   5 from anomalib.cli.cli import main                                                            │
│   6 if __name__ == '__main__':                                                                   │
│   7 │   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])                         │
│ ❱ 8 │   sys.exit(main())                                                                         │
│   9                                                                                              │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/cli/cli.py:376 in main                               │
│                                                                                                  │
│   373 def main() -> None:                                                                        │
│   374"""Trainer via Anomalib CLI."""                                                        │
│   375configure_logger()                                                                     │
│ ❱ 376AnomalibCLI()                                                                          │
│   377                                                                                            │
│   378                                                                                            │
│   379 if __name__ == "__main__":                                                                 │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/cli/cli.py:64 in __init__                            │
│                                                                                                  │
│    61 │   │   run: bool = True,                                                                  │
│    62 │   │   auto_configure_optimizers: bool = True,                                            │
│    63 │   ) -> None:                                                                             │
│ ❱  64 │   │   super().__init__(                                                                  │
│    65 │   │   │   AnomalyModule,                                                                 │
│    66 │   │   │   AnomalibDataModule,                                                            │
│    67 │   │   │   save_config_callback,                                                          │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/cli.py:386 in __init__              │
│                                                                                                  │
│   383 │   │   self.instantiate_classes()                                                         │
│   384 │   │                                                                                      │
│   385 │   │   if self.subcommand is not None:                                                    │
│ ❱ 386 │   │   │   self._run_subcommand(self.subcommand)                                          │
│   387 │                                                                                          │
│   388 │   def _setup_parser_kwargs(self, parser_kwargs: Dict[str, Any]) -> Tuple[Dict[str, Any   │
│   389 │   │   subcommand_names = self.subcommands().keys()                                       │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/cli/cli.py:294 in _run_subcommand                    │
│                                                                                                  │
│   291 │   │   if self.config["subcommand"] in (*self.subcommands(), "train", "export", "predic   │
│   292 │   │   │   fn = getattr(self.engine, subcommand)                                          │
│   293 │   │   │   fn_kwargs = self._prepare_subcommand_kwargs(subcommand)                        │
│ ❱ 294 │   │   │   fn(**fn_kwargs)                                                                │
│   295 │   │   else:                                                                              │
│   296 │   │   │   self.config_init = self.parser.instantiate_classes(self.config)                │
│   297 │   │   │   getattr(self, f"{subcommand}")()                                               │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/engine/engine.py:435 in predict                      │
│                                                                                                  │
│   432 │   │                                                                                      │
│   433 │   │   self._setup_dataset_task(dataloaders, datamodule)                                  │
│   434 │   │                                                                                      │
│ ❱ 435 │   │   return self.trainer.predict(model, dataloaders, datamodule, return_predictions,    │
│   436 │                                                                                          │
│   437 │   def train(                                                                             │
│   438 │   │   self,                                                                              │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:864 in predict   │
│                                                                                                  │
│    861 │   │   self.state.fn = TrainerFn.PREDICTING                                              │
│    862 │   │   self.state.status = TrainerStatus.RUNNING                                         │
│    863 │   │   self.predicting = True                                                            │
│ ❱  864 │   │   return call._call_and_handle_interrupt(                                           │
│    865 │   │   │   self, self._predict_impl, model, dataloaders, datamodule, return_predictions  │
│    866 │   │   )                                                                                 │
│    867                                                                                           │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py:44 in               │
│ _call_and_handle_interrupt                                                                       │
│                                                                                                  │
│    41 │   try:                                                                                   │
│    42 │   │   if trainer.strategy.launcher is not None:                                          │
│    43 │   │   │   return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer,    │
│ ❱  44 │   │   return trainer_fn(*args, **kwargs)                                                 │
│    45 │                                                                                          │
│    46 │   except _TunerExitException:                                                            │
│    47 │   │   _call_teardown_hook(trainer)                                                       │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:903 in           │
│ _predict_impl                                                                                    │
│                                                                                                  │
│    900 │   │   ckpt_path = self._checkpoint_connector._select_ckpt_path(                         │
│    901 │   │   │   self.state.fn, ckpt_path, model_provided=model_provided, model_connected=sel  │
│    902 │   │   )                                                                                 │
│ ❱  903 │   │   results = self._run(model, ckpt_path=ckpt_path)                                   │
│    904 │   │                                                                                     │
│    905 │   │   assert self.state.stopped                                                         │
│    906 │   │   self.predicting = False                                                           │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:989 in _run      │
│                                                                                                  │
│    986 │   │   # ----------------------------                                                    │
│    987 │   │   # RUN THE TRAINER                                                                 │
│    988 │   │   # ----------------------------                                                    │
│ ❱  989 │   │   results = self._run_stage()                                                       │
│    990 │   │                                                                                     │
│    991 │   │   # ----------------------------                                                    │
│    992 │   │   # POST-Training CLEAN UP                                                          │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:1030 in          │
│ _run_stage                                                                                       │
│                                                                                                  │
│   1027 │   │   if self.evaluating:                                                               │
│   1028 │   │   │   return self._evaluation_loop.run()                                            │
│   1029 │   │   if self.predicting:                                                               │
│ ❱ 1030 │   │   │   return self.predict_loop.run()                                                │
│   1031 │   │   if self.training:                                                                 │
│   1032 │   │   │   with isolate_rng():                                                           │
│   1033 │   │   │   │   self._run_sanity_check()                                                  │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/utilities.py:182 in           │
│ _decorator                                                                                       │
│                                                                                                  │
│   179 │   │   else:                                                                              │
│   180 │   │   │   context_manager = torch.no_grad                                                │
│   181 │   │   with context_manager():                                                            │
│ ❱ 182 │   │   │   return loop_run(self, *args, **kwargs)                                         │
│   183 │                                                                                          │
│   184 │   return _decorator                                                                 │
│   185                                                                                            │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/prediction_loop.py:122 in run │
│                                                                                                  │
│   119 │   │   │   │   │   batch, batch_idx, dataloader_idx = next(data_fetcher)                  │
│   120 │   │   │   │   self.batch_progress.is_last_batch = data_fetcher.done                      │
│   121 │   │   │   │   # run step hooks                                                           │
│ ❱ 122 │   │   │   │   self._predict_step(batch, batch_idx, dataloader_idx, dataloader_iter)      │
│   123 │   │   │   except StopIteration:                                                          │
│   124 │   │   │   │   # this needs to wrap the `*_step` call too (not just `next`) for `datalo   │
│   125 │   │   │   │   break                                                                      │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/loops/prediction_loop.py:263 in     │
│ _predict_step                                                                                    │
│                                                                                                  │
│   260 │   │   │   dataloader_idx = data_fetcher._dataloader_idx                                  │
│   261 │   │   │   hook_kwargs = self._build_kwargs(batch, batch_idx, dataloader_idx if self.nu   │
│   262 │   │                                                                                      │
│ ❱ 263 │   │   call._call_callback_hooks(trainer, "on_predict_batch_end", predictions, *hook_kw   │
│   264 │   │   call._call_lightning_module_hook(trainer, "on_predict_batch_end", predictions, *   │
│   265 │   │                                                                                      │
│   266 │   │   self.batch_progress.increment_completed()                                          │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py:208 in              │
│ _call_callback_hooks                                                                             │
│                                                                                                  │
│   205 │   │   fn = getattr(callback, hook_name)                                                  │
│   206 │   │   if callable(fn):                                                                   │
│   207 │   │   │   with trainer.profiler.profile(f"[Callback]{callback.state_key}.{hook_name}")   │
│ ❱ 208 │   │   │   │   fn(trainer, trainer.lightning_module, *args, **kwargs)                     │
│   209 │                                                                                          │
│   210 │   if pl_module:                                                                     │
│   211 │   │   # restore current_fx when nested context                                           │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/callbacks/visualizer/visualizer_image.py:60 in       │
│ on_predict_batch_end                                                                             │
│                                                                                                  │
│    57 │   │   del trainer, pl_module, batch, batch_idx, dataloader_idx  # These variables are    │
│    58 │   │   assert outputs is not None                                                         │
│    59 │   │                                                                                      │
│ ❱  60 │   │   for i, image in enumerate(self.visualizer.visualize_batch(outputs)):               │
│    61 │   │   │   if "image_path" in outputs:                                                    │
│    62 │   │   │   │   filename = Path(outputs["image_path"][i])                                  │
│    63 │   │   │   elif "video_path" in outputs:                                                  │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/utils/visualization.py:124 in visualize_batch        │
│                                                                                                  │
│   121 │   │   │   │   pred_boxes=batch["pred_boxes"][i].cpu().numpy() if "pred_boxes" in batch   │
│   122 │   │   │   │   box_labels=batch["box_labels"][i].cpu().numpy() if "box_labels" in batch   │
│   123 │   │   │   )                                                                              │
│ ❱ 124 │   │   │   yield self.visualize_image(image_result)                                       │
│   125 │                                                                                          │
│   126 │   def visualize_image(self, image_result: ImageResult) -> np.ndarray:               │
│   127 │   │   """Generate the visualization for an image.                                        │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/utils/visualization.py:136 in visualize_image        │
│                                                                                                  │
│   133 │   │   │   The full or simple visualization for the image, depending on the specified m   │
│   134 │   │   """                                                                                │
│   135 │   │   if self.mode == VisualizationMode.FULL:                                            │
│ ❱ 136 │   │   │   return self._visualize_full(image_result)                                      │
│   137 │   │   if self.mode == VisualizationMode.SIMPLE:                                          │
│   138 │   │   │   return self._visualize_simple(image_result)                                    │
│   139 │   │   msg = f"Unknown visualization mode: {self.mode}"                                   │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/utils/visualization.py:185 in _visualize_full        │
│                                                                                                  │
│   182 │   │   │   │   image_classified = add_normal_label(image_result.image, 1 - image_result   │
│   183 │   │   │   visualization.add_image(image=image_classified, title="Prediction")            │
│   184 │   │                                                                                      │
│ ❱ 185 │   │   return visualization.generate()                                                    │
│   186 │                                                                                          │
│   187 │   def _visualize_simple(self, image_result: ImageResult) -> np.ndarray:             │
│   188 │   │   """Generate a simple visualization for an image.                                   │
│                                                                                                  │
│ /Volumes/SSD_USB/副業/anomalib/src/anomalib/utils/visualization.py:296 in generate               │
│                                                                                                  │
│   293 │   │   self.figure.canvas.draw()                                                          │
│   294 │   │   # convert canvas to numpy array to prepare for visualization with opencv           │
│   295 │   │   img = np.frombuffer(self.figure.canvas.tostring_rgb(), dtype=np.uint8)             │
│ ❱ 296 │   │   img = img.reshape(self.figure.canvas.get_width_height()[::-1] + (3,))              │
│   297 │   │   plt.close(self.figure)                                                             │
│   298 │   │   return img                                                                         │
│   299                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: cannot reshape array of size 15000000 into shape (500,2500,3)
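The sizes in this error are consistent with a Retina scaling mismatch: the buffer holds 15,000,000 values, exactly four times the 500 × 2500 × 3 = 3,750,000 that the reshape expects, i.e. a 2× device-pixel ratio in both dimensions. A minimal sketch of the arithmetic, assuming that 2× scaling is indeed the cause:

# Sketch of the mismatch, assuming the macosx backend rendered the canvas at
# a 2x device-pixel ratio while get_width_height() reported the logical size.
expected = 500 * 2500 * 3             # 3_750_000 values the reshape expects
actual = (2 * 500) * (2 * 2500) * 3   # 15_000_000 values actually in the buffer
assert actual == 15_000_000 == 4 * expected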

@samet-akcay
Contributor

@jahad9819jjj, yeah, you are right, the problem was not fully resolved. I've also spotted this and created #1644. Once it is merged, it should hopefully be OK :)

@Yiiipu

Yiiipu commented Jan 19, 2024

I still get the float64 error unless I change the accelerator to cpu.
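For reference, this is reproducible outside anomalib, since the MPS backend has no float64 support at all; a minimal sketch of the failure and the usual float32 workaround:

import torch

# MPS does not support float64, so moving a double tensor to the device fails.
if torch.backends.mps.is_available():
    x = torch.rand(4, dtype=torch.float64)
    try:
        x.to("mps")
    except TypeError as err:
        print(err)  # "Cannot convert a MPS Tensor to float64 dtype ..."

    # Workaround: cast to float32 before moving to the device.
    x32 = x.float().to("mps")
    print(x32.dtype)  # torch.float32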

@Yiiipu

Yiiipu commented Jan 19, 2024

I had the same visualization error when I tried to log image results. I ran the same code in a Linux environment, and everything works just fine.

If you change the config file as follows, you should be able to get some image results out:

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: False # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: simple

I am using macOS 12.3.1 with a miniconda env.

I also needed to turn save_images off; otherwise there are similar dimension issues.

@samet-akcay
Contributor

@Yiiipu, which branch are you using? Note that all these fixes are made to the v1 branch.

@Yiiipu

Yiiipu commented Jan 19, 2024

> @Yiiipu, which branch are you using? Note that all these fixes are made to the v1 branch

I was working on the main branch. It makes sense now. Thank you!

@Mr-Corentin

If you want to solve the visualisation issue, you should add matplotlib.use('Agg') at the top of the visualizer.py file. There is an issue with the default matplotlib backend on macOS, and that's why you need to specify this backend explicitly.
It worked for me; I hope it will work for you too.
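A minimal sketch of that fix (in the traceback above the module is src/anomalib/utils/visualization.py); the key point is that the backend should be selected before pyplot is first imported:

import matplotlib

# Force the non-interactive Agg backend, which renders at the figure's
# logical resolution (presumably avoiding the Retina-scaled canvas buffer
# behind the reshape error above).
matplotlib.use("Agg")

import matplotlib.pyplot as plt  # import pyplot only after selecting the backend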

@samet-akcay
Contributor

Thanks for sharing, @Mr-Corentin
