Apple released their M series chips a few years ago, but it's only relatively recently that libraries such as PyTorch have added support for them. The benefits are dramatic: some benchmarks show an over 20X speed-up for AI training and inference on an M1 Mac when using the M1 GPU rather than the CPU alone.
So naturally I dug in and added M1/M2 GPU support to CodeProject.AI. From reading the docs it seemed super simple:
1. Install torch and torchvision via pip
2. Call torch.backends.mps.is_available() to test for MPS (Metal Performance Shaders) support
3. Set the torch device to "mps"
4. Bask in GPU awesomeness
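In code, that happy path looks something like the sketch below (model and inputs are placeholders, not part of the docs):

    import torch

    # Step 2: test for MPS (Metal Performance Shaders) support
    if torch.backends.mps.is_available():
        device = torch.device("mps")   # Step 3: use the Metal GPU
    else:
        device = torch.device("cpu")   # fall back to the CPU

    # Step 4: move your model and inputs to the chosen device as usual,
    # e.g. model = model.to(device) and inputs = inputs.to(device)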
Yeah, nah. It won't work out of the box.
First, PyTorch has M1 GPU support. Sort of, kind of. There are lots and lots of things that haven't yet been implemented, and it's a rapidly evolving library, meaning you will want to pull in the nightly builds via your requirements.txt file (for PIP installs).
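Something like the following sketch (the nightly index URL here is an assumption; check pytorch.org for the correct nightly index for your platform):

    --pre
    --extra-index-url https://download.pytorch.org/whl/nightly/cpu
    torch
    torchvision

installed with a command along the lines of:

    pip install --force-reinstall -r requirements.txt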
The --pre option allows PIP to use pre-release builds, --force-reinstall on the install command forces a reinstall to avoid a cached stable build, and the --extra-index-url line tells PIP where to find this magical nightly build version.
Second, Python is cross platform, so if you only ever run your code on machines where MPS is included in the PyTorch package you installed, you're fine. The rest of the world needs a guard like

    use_MPS = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
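That flag then feeds device selection; a minimal sketch:

    device = torch.device("mps" if use_MPS else "cpu")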
Third, you may need to adjust your code and force some operations onto the CPU. For example, TorchVision for Apple M1 currently doesn't support nms, so our YOLO code must be changed from

    pred = non_max_suppression(pred, confidence, 0.45, classes=None, agnostic=False)

to

    pred = non_max_suppression(pred.cpu(), confidence, 0.45, classes=None, agnostic=False)
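If the rest of your pipeline should stay on the GPU, you may want to move the results back afterwards. A sketch, assuming non_max_suppression returns a list of per-image tensors (as YOLOv5's does) and device is the "mps" device from earlier:

    # Hop back to the GPU once the unsupported op is done
    pred = [p.to(device) for p in pred]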
The list of not-yet-supported operations is on GitHub. If you have a fave issue, go and vote for it.
So far I've seen roughly a doubling of inference performance using the YOLOv5 model in CodeProject.AI, which is a nice boost for just a couple of hours of work. The changes are integrated into the CodeProject.AI codebase.