|
IF YOU ARE HAVING A PROBLEM
- Take a look at the logs in C:\Program Files\CodeProject\AI\logs and see if there's anything in there that screams 'something broke'.
- Check the FAQs in the CodeProject.AI Server documentation
- Make sure you've tested the server using the Explorer (blue link, top middle of the dashboard) to ensure it's a server issue rather than something else such as Blue Iris or another app using CodeProject.AI server.
- If there's no obvious answer, then copy and paste into a message the contents of the System Info tab, describe what you are doing, and what you see, and what you would expect.
Always include a copy and paste from the System Info tab of the dashboard. It gives us a ton of info on your setup. If an individual module is failing, click the 'Info' button to the right of the module's name in the status list and copy and paste that info too.
How to reinstall a module
Option 1. Go to the install modules tab on the dashboard and try re-installing the package. Make sure you have enough disk space and a reliable internet connection.
Option 2 (Option 1 with a vengeance): If that fails, head to the module's folder ([app root]\modules\module-id), open a terminal in admin mode, and run ..\..\setup. This will force a manual reinstall using the install script.
Docker: In Docker you will need to open a terminal into the Docker container. You can do this using Docker Desktop, Visual Studio Code with the Docker remote extension, or on the command line using docker attach. Then do a cd /app/modules/module-id, where module-id is the ID of the module you need to reinstall. Next, run sudo bash ../../setup.sh --verbosity info to force a manual reinstall of that module. (Set verbosity to quiet, info or loud to get less or more info.)
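As a sketch, the Docker steps look like the following. The container name "codeproject-ai" and the module ID "ALPR" are assumptions here; check docker ps and your own /app/modules folder for the real names:

```shell
# Open a shell in the running container (docker exec is an alternative to
# docker attach; the container name below is an assumption - check `docker ps`).
docker exec -it codeproject-ai /bin/bash

# Inside the container: go to the failing module's folder.
# "ALPR" is just an example module-id; substitute the module you need.
cd /app/modules/ALPR

# Force a manual reinstall of that module.
sudo bash ../../setup.sh --verbosity info
```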
cheers
Chris Maunder
modified 18-Feb-24 15:48pm.
|
|
|
|
|
If you are a Blue Iris user using custom models, you may have noticed that the option in Blue Iris to set the custom model location is greyed out. This is because Blue Iris does not currently make changes to CodeProject.AI Server's settings. The location can be changed by manually starting CodeProject.AI Server with command line parameters (not a great solution), editing the module settings files (a little messy), or setting system-wide environment variables (way easier). For version 1.6 we added an API that allows any app to change our settings programmatically, and we take care of stopping/restarting things and persisting the changes.
So: Blue Iris doesn't currently change CodeProject.AI Server's settings, so it doesn't provide you a way to change the custom model folder location from within Blue Iris.
Blue Iris will still use the contents of this folder to determine the calls it makes. If you don't specify a model to use in the Custom Models textbox, then Blue Iris will use all models in the custom models folder that it knows about.
Here we've specified a specific model to use. The Blue Iris help file explains more about how this works, including inclusive and exclusive filters on the models it finds.
CodeProject.AI Server doesn't know about Blue Iris' folder, so it can't tell what models it may be expected to use, nor can it tell Blue Iris about what models CodeProject.AI server has available. Our API allows Blue Iris to get a list of the AI models installed with CodeProject.AI Server, and also to set the folder where these models reside. But Blue Iris doesn't, yet, use that API.
So we do a hack.
At install time we sniff the registry to find where Blue Iris thinks the custom models should be. We then make empty copies of the models that we have, and copy them into that folder. If the folder doesn't exist (eg you were using C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\assets , which no longer exists) then we create that folder, and then copy over the empty files.
When Blue Iris looks in that folder to decide what custom calls it can make, it sees the models, notes their names, and uses those names in the calls. CodeProject.AI Server has those models, so when the calls come through we can process them.
Blue Iris doesn't use the models. It uses the list of model names.
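The stub-file hack described above can be sketched as follows. This is an illustration of the idea, not the installer's actual code; the folders and model names are stand-ins:

```shell
SRC=$(mktemp -d)   # stand-in for CodeProject.AI's custom-models folder
DST=$(mktemp -d)   # stand-in for the folder Blue Iris watches

# Pretend these are real model files (names are hypothetical examples).
echo "weights" > "$SRC/ipcam-combined.pt"
echo "weights" > "$SRC/license-plate.pt"

# Create zero-byte copies in the Blue Iris folder: same names, no content.
# Blue Iris only reads the file *names* to build its list of custom calls,
# so the empty files are enough.
for f in "$SRC"/*.pt; do
    : > "$DST/$(basename "$f")"
done

ls "$DST"
```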
If you have your own models in the Blue Iris folder
You will need to copy them to the CodeProject.AI server's custom model folder (by default this is C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models )
If you've modified the registry and have your own custom models
If you were using a folder in C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\ (which no longer existed after the upgrade, but was recreated by our hack) you'll need to re-copy your custom model into that folder.
The simplest solutions are:
- Modify the registry (Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software\Blue Iris\Options\AI, key 'deepstack_custompath') so Blue Iris looks in C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models for custom models, and copy your models into there.
or
- Modify the C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\modulesettings.json file and set CUSTOM_MODELS_DIR to whatever Blue Iris thinks the custom model folder is.
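If you go the registry route, the change can be made from an elevated Command Prompt with reg add. The key and value name come from the post above; treat the folder path as an example and point it at wherever your models actually live:

```shell
rem Point Blue Iris' custom model path at CodeProject.AI Server's folder.
rem Run from an elevated (admin) Command Prompt. Back up the key first if unsure.
reg add "HKLM\SOFTWARE\Perspective Software\Blue Iris\Options\AI" /v deepstack_custompath /t REG_SZ /d "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models" /f
```

Restart Blue Iris afterwards so it re-reads the setting.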
cheers
Chris Maunder
|
|
|
|
|
|
|
|
|
|
I only have one module installed on my CPAI, Object Detection Coral 2.3.4. When I have it started, autostart is set to true. If I restart my system, CPAI starts up fine, but the Coral module is not running unless I go in and manually start it. If I check the info before starting it, autostart is set to false. In Blue Iris, I have 'Auto start/stop with Blue Iris' unchecked.
Blue Iris 5.9.6.4
CodeProject 2.6.5.0
|
|
|
|
|
CodeProject 2.6.5.0 has an issue with saving settings. CodeProject 2.8 fixes the issue.
|
|
|
|
|
I installed 2.8 and it fixed the issue. Thanks for your help, Mike!
|
|
|
|
|
I installed version 2.1.1 (2024-06-21) and did not use the download cache.
I receive the following error when I press start, and OCR aborts. System setup below.
Am I doing something wrong?
07:26:18:Update OCR. Setting AutoStart=true
07:26:18:Restarting Optical Character Recognition to apply settings change
07:26:18:
07:26:18:Module 'Optical Character Recognition' 2.1.1 (ID: OCR)
07:26:18:Valid: True
07:26:18:Module Path: <root>\modules\OCR
07:26:18:Module Location: Internal
07:26:18:AutoStart: True
07:26:18:Queue: ocr_queue
07:26:18:Runtime: python3.9
07:26:18:Runtime Location: Local
07:26:18:FilePath: OCR_adapter.py
07:26:18:Start pause: 1 sec
07:26:18:Parallelism: 0
07:26:18:LogVerbosity:
07:26:18:Platforms: all,!windows-arm64
07:26:18:GPU Libraries: installed if available
07:26:18:GPU: use if supported
07:26:18:Accelerator:
07:26:18:Half Precision: enable
07:26:18:Environment Variables
07:26:18:MIN_COMPUTE_CAPABILITY = 6
07:26:18:MIN_CUDNN_VERSION = 7
07:26:18:
07:26:18:Started Optical Character Recognition module
07:26:19:OCR_adapter.py: Using PIL for image manipulation (Either OpenCV or numpy not available for this module)
07:26:19:OCR_adapter.py: Traceback (most recent call last):
07:26:19:OCR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\OCR\OCR_adapter.py", line 12, in
07:26:19:OCR_adapter.py: from OCR import init_detect_ocr, read_text
07:26:19:OCR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\OCR\OCR.py", line 10, in
07:26:19:OCR_adapter.py: from paddleocr import PaddleOCR
07:26:19:OCR_adapter.py: ImportError: cannot import name 'PaddleOCR' from 'paddleocr' (unknown location)
07:26:19:Module OCR has shutdown
07:26:19:OCR_adapter.py: has exited
---
System Setup + Coral dual TPU
---
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 11 version 10.0.22631)
CPUs: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz (Intel)
1 CPU x 6 cores. 12 logical processors (x64)
GPU (Primary): Intel(R) UHD Graphics 630 (1,024 MiB) (Intel Corporation)
Driver: 30.0.100.9864
System RAM: 16 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 8.0.8
.NET SDK: 8.0.304
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Intel(R) UHD Graphics 630:
Driver Version 30.0.100.9864
Video Processor Intel(R) UHD Graphics Family
Microsoft Remote Display Adapter:
Driver Version 10.0.22621.3672
Video Processor
System GPU info:
GPU 3D Usage 30%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
I have an NVIDIA GeForce GTX 1060 card in my Windows 10 machine. It is a gen 7 Intel core i7 processor.
Should I be able to just run the latest of all the CUDA stuff, including the CUDA drivers, CUDA Toolkit, cuDNN together with the latest NVIDIA graphics and the Microsoft .NET stuff? I am only using CP.AI for use with BlueIris.
Currently I am confused by the article where it lists restrictions for CUDA with NVIDIA cards - "CodeProject.AI Server: AI the easy way." at:
CodeProject.AI Server: AI the easy way.[^]
and:
GPU Not Being Used · Issue #26 · codeproject/CodeProject.AI-Server · GitHub[^]
The article also mentions "Newer cards such as the GTX 10xx, 20xx and 30xx series, RTX, MX series are fully supported", so does this mean that my GTX 1060 card does not have these restrictions?
I have researched my card in the NVIDIA documentation. It seems that my card is "NVIDIA Pascal" with a "CUDA Compute Capability" of 6.1. According to the table at the link below, unless I am analysing this wrongly, I should be good for the latest versions such as "cuDNN 9.4.0 for CUDA 12.x", "CUDA Toolkit Version", "NVIDIA Driver Version for Windows" etc.
Support Matrix — NVIDIA cuDNN v9.4.0 documentation[^].
Currently I have tried to stay mostly with the article's recommended versions, but it doesn't seem to be working correctly. The versions I am running are:
CP.AI Server 2.6.5
NVIDIA CUDA - 11.8
NVIDIA Graphics driver - 560.81
CUDNN - 9.0
(Not sure how relevant the below Microsoft .NET stuff is, if at all)
Microsoft .NET Runtime - 7.0.20.33717
Microsoft ASP.NET Core - 7.0.20.2469
Microsoft .NET SDK 7.0.410 - 7.4.1024.27207
One possible issue could be running cuDNN 9.0 instead of 8.9.4. I used 9.0 as it comes with a proper Windows installer, whereas 8.9.4 is either a rather messy manual install or a batch-file install script (future maintenance risks such as uninstall etc.) from the above CP.AI article link. I would rather install all the latest possible drivers, runtimes and SDKs, though.
I can add CP.AI log files that might show what is going wrong, but can't see how to upload the file here (my first post).
I will just paste the first part of the log here:
|
|
|
|
|
I think you should lower the graphics driver version. I'm using a Tesla P4, which is also Pascal.
Here is the data on my GPU:
GPU (Primary): Tesla P4 (8 GiB) (NVIDIA)
Driver: 538.15, CUDA: 11.8.89 (up to: 12.2), Compute: 6.1, cuDNN: 8.9
I remember that I was trying to update the driver version and, I think, above 538.15 CPAI stops using my GPU for some reason. I kept this specific setup because it works. This is where I got the driver: Drivers for NVIDIA RTX Virtual Workstation (vWS) | Compute Engine Documentation | Google Cloud[^]
I did not look into why a newer driver makes CPAI stop using the GPU, but I believe it has something to do with the compatibility of the other elements such as cuDNN, CUDA, and the modules in ALPR or YOLO (Object Detection in general). They all need to be aligned in order to make it work.
I hope it helps.
|
|
|
|
|
Thanks, there seem to be dependency issues, as you have also found. I don't even need anything as fancy as ALPR or Face Recognition; just basic YOLO object detection.
Also, I wish I knew how to attach my log file here. I tried to add it inline but it doesn't take, and it reports my post as spam and warns me that I might annoy other users if the message is over a certain length (maybe less than 200 lines, which I don't think is excessive).
I really need to know what the issue is, or whether it is just that CP.AI is not keeping compatible with the latest drivers etc. that it depends on. If there are intricate dependencies, then we need clear documentation for them. We don't want a try-this-or-try-that approach of testing different combinations and hoping all the planets will align.
I am hoping that a CP.AI developer or at least someone intimately familiar with the project code can explain why we are having these issues and what needs to be done on the CP.AI side to address it. And also add detailed documentation related to dependencies, particularly when utilising hardware such as an NVIDIA card to speed up the processing.
I will try adding a small portion of the log again below. It starts to go wrong, I think, where I have shown the output in bold, which shows as red on the actual server log screen.
Quote: 12:30:36:System: Windows
12:30:36:Operating System: Windows (Microsoft Windows 10.0.19045)
12:30:36:CPUs: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (Intel)
12:30:36: 1 CPU x 4 cores. 8 logical processors (x64)
12:30:36:GPU (Primary): NVIDIA GeForce GTX 1060 (6 GiB) (NVIDIA)
12:30:36: Driver: 560.81, CUDA: 11.8.89 (up to: 12.6), Compute: 6.1, cuDNN: 9.0
12:30:36:System RAM: 32 GiB
12:30:36:Platform: Windows
12:30:36:BuildConfig: Release
12:30:36:Execution Env: Native
12:30:36:Runtime Env: Production
12:30:36:Runtimes installed:
12:30:36: .NET runtime: 7.0.20
12:30:36: .NET SDK: 7.0.410
12:30:36: Default Python: Not found
12:30:36: Go: Not found
12:30:36: NodeJS: Not found
12:30:36: Rust: Not found
12:30:36:App DataDir: C:\ProgramData\CodeProject\AI
12:30:36:Video adapter info:
12:30:36: DisplayLink USB Device:
12:30:36: Driver Version 11.2.3146.0
12:30:36: Video Processor
12:30:36: Intel(R) HD Graphics 630:
12:30:36: Driver Version 23.20.16.4849
12:30:36: Video Processor Intel(R) HD Graphics Family
12:30:36: NVIDIA GeForce GTX 1060:
12:30:36: Driver Version 32.0.15.6081
12:30:36: Video Processor NVIDIA GeForce GTX 1060
12:30:36: DisplayLink USB Device:
12:30:36: Driver Version 11.2.3146.0
12:30:36: Video Processor
12:30:36: DisplayLink USB Device:
12:30:36: Driver Version 11.2.3146.0
12:30:36: Video Processor
12:30:36: DisplayLink USB Device:
12:30:36: Driver Version 11.2.3146.0
12:30:36: Video Processor
12:30:36:STARTING CODEPROJECT.AI SERVER
12:30:36:RUNTIMES_PATH = C:\Program Files\CodeProject\AI\runtimes
12:30:36:PREINSTALLED_MODULES_PATH = C:\Program Files\CodeProject\AI\preinstalled-modules
12:30:36:DEMO_MODULES_PATH = C:\Program Files\CodeProject\AI\src\demos\modules
12:30:36:EXTERNAL_MODULES_PATH =
12:30:36:MODULES_PATH = C:\Program Files\CodeProject\AI\modules
12:30:36:PYTHON_PATH = \bin\windows\%PYTHON_NAME%\venv\Scripts\python
12:30:36:Data Dir = C:\ProgramData\CodeProject\AI
12:30:36:Server version: 2.6.5
12:30:39:
12:30:39:Module 'Object Detection (YOLOv5 6.2)' 1.9.2 (ID: ObjectDetectionYOLOv5-6.2)
12:30:39:Valid: True
12:30:39:Module Path: <root>\modules\ObjectDetectionYOLOv5-6.2
12:30:39:Module Location: Internal
12:30:39:AutoStart: True
12:30:39:Queue: objectdetection_queue
12:30:39:Runtime: python3.7
12:30:39:Runtime Location: Shared
12:30:39:FilePath: detect_adapter.py
12:30:39:Start pause: 1 sec
12:30:39:Parallelism: 0
12:30:39:LogVerbosity:
12:30:39:Platforms: all,!raspberrypi,!jetson
12:30:39:GPU Libraries: installed if available
12:30:39:GPU: use if supported
12:30:39:Accelerator:
12:30:39:Half Precision: enable
12:30:39:Environment Variables
12:30:39:APPDIR = <root>\modules\ObjectDetectionYOLOv5-6.2
12:30:39:CUSTOM_MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\custom-models
12:30:39:MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\assets
12:30:39:MODEL_SIZE = Medium
12:30:39:USE_CUDA = True
12:30:39:YOLOv5_AUTOINSTALL = false
12:30:39:YOLOv5_VERBOSE = false
12:30:39:
12:30:39:Started Object Detection (YOLOv5 6.2) module
14:15:04:Response rec'd from Object Detection (YOLOv5 6.2) command 'detect' (...64a7a1) ['No objects found'] took 4784ms
14:15:04:Object Detection (YOLOv5 6.2): Detecting using actionnetv2
14:15:04:Response rec'd from Object Detection (YOLOv5 6.2) command 'detect' (...06ba2f)
14:15:04:Object Detection (YOLOv5 6.2): [NotImplementedError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect.py", line 140, in do_detection
det = detector(img, size=640)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 715, in forward
max_det=self.max_det) # NMS
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 942, in non_max_suppression
i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\_ops.py", line 442, in __call__
return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\PythonFallbackKernel.cpp:140 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\functorch\DynamicLayer.cpp:488 [backend fallback]
Functionalize: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\FunctionalizeFallbackKernel.cpp:291 [backend fallback]
Named: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:47 [backend fallback]
AutogradXLA: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:51 [backend fallback]
AutogradMPS: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:59 [backend fallback]
AutogradXPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:43 [backend fallback]
AutogradHPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:68 [backend fallback]
AutogradLazy: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:55 [
...
modified 8hrs ago.
|
|
|
|
|
^^ Basically what the previous poster said on the NV drivers. I'm also running BI on a Win 10 machine, i7-8086 & RTX 2060 Super. I rolled the driver all the way back to 516.94, as it's the last version the CPAI crew says they can guarantee is compatible.
|
|
|
|
|
My CPAI is claiming to use the Multi-TPU, but the results seem too slow to be using the Coral, and my TPUs' temperatures don't go up when I use the "benchmark" feature. But my server's CPU usage is through the roof whenever I run a benchmark.
How can I disable falling back to CPU to really make sure the inferences are running on the Coral TPUs and not on my CPU?
|
|
|
|
|
What model, size, and Coral are you using? I've been testing my Coral (USB and M.2) this weekend and it is working very well.
Using the YOLOv8 medium-size model I get inference speeds of around 20ms.
I go into the JSON in the Coral folder, though, and update multi-tpu to 'false' and also autostart to 'true', along with the model and size.
Not sure why it does not update this.
|
|
|
|
|
If you're on version 2.6.5, there was a known bug where the settings were not being saved to the JSON file. It was discussed at length here. You'll want to jump to the pre-release version 2.8 to fix that, but your work-around is fine.
|
|
|
|
|
Ah, many thanks. Sorry, I've been offline for a while and didn't see that. Yes, I figured it out myself: working in IT, the JSON was the first place I looked, and I fixed it manually.
I'll have a look at upgrading to the beta version. Cheers
|
|
|
|
|
How do I install the beta version of CPAI?
|
|
|
|
|
On the main CPAI page there is a pre-release section containing version 2.8. I've been running it for weeks without any issues.
|
|
|
|
|
Hello All,
Nothing changed in my settings, but ALPR is no longer detecting plates. I went into the CodeProject.AI Explorer and checked with a plate well known to the system, and it came back with "no predictions". I uninstalled and reinstalled with cache disabled, to no avail.
|
|
|
|
|
Maybe this might not help you, but I think ALPR needs to work with Object Detection. You will need to ensure you have Object Detection running at the same time in order for ALPR to work.
|
|
|
|
|
So I should put something like ipcam-combined,object,alpr?
|
|
|
|
|
You need one of the object detection modules below running.
|
|
|
|
|
I have Object Detection (Coral) 2.3.4 running, but License Plate Reader 3.2.2 continues to fail to start. Even with trace level logging, this is all I see:
19:12:03:Started License Plate Reader module
19:12:05:Module ALPR has shutdown
19:12:05:ALPR_adapter.py: has exited
|
|
|
|
|
I tried running setup for ALPR from the command line and the self test ends with this error:
Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py", line 11, in <module>
from ALPR import init_detect_platenumber, detect_platenumber
File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR.py", line 17, in <module>
from paddleocr import PaddleOCR
File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\lib\site-packages\paddleocr\__init__.py", line 14, in <module>
from .paddleocr import *
File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\lib\site-packages\paddleocr\paddleocr.py", line 21, in <module>
import paddle
File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\lib\site-packages\paddle\__init__.py", line 28, in <module>
from .base import core # noqa: F401
File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\lib\site-packages\paddle\base\__init__.py", line 36, in <module>
from . import core
File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\lib\site-packages\paddle\base\core.py", line 375, in <module>
if not avx_supported() and libpaddle.is_compiled_with_avx():
NameError: name 'libpaddle' is not defined
modified yesterday.
|
|
|
|
|
That means you are missing the 'libpaddle' module. Most likely when ALPR was installed, that module was not installed for some reason.
A couple of ways to tackle this:
1. Run the Command Prompt (CMD), navigate to "C:\Program Files\CodeProject\AI\modules\ALPR" and then, I think, type in the command "..\..\setup.bat". That will rerun the installation for ALPR.
2. If that fails, you can activate the venv and install the missing modules yourself, which is what I have been doing, and that solved the problem.
You can do it by following these steps:
1. Run CMD.
2. Type in "cd C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\Scripts"
3. Type in "activate". (You should see this in your CMD: "(venv) C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\Scripts>")
4. Type in "cd C:\Program Files\CodeProject\AI\modules\ALPR"
5. Type in "pip install paddlepaddle"
After doing steps 1 to 5, you can try to start the ALPR module and see if that fixes it. If you see the error again, look at what module is missing (or not defined) and then follow the same steps to install the rest of the missing modules manually.
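Steps 1 to 5 above, as one Command Prompt session (the paths assume a default install of the ALPR module; adjust if yours differs):

```shell
rem Activate the ALPR module's own Python virtual environment.
cd "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\Scripts"
activate

rem Back to the module folder, then install the missing package into the venv.
cd "C:\Program Files\CodeProject\AI\modules\ALPR"
pip install paddlepaddle
```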
I hope this helps.
|
|
|
|
|
Thank you so much. I have now gotten this in Code Project Server Log:
08:07:57:OCR_adapter.py: [2024/09/16 08:07:57] ppocr WARNING: Since the angle classifier is not initialized, it will not be used during the forward process
|
|
|
|
|