|
With a little Google-fu and some PITA finagling, .pt models can be converted to ONNX. Other than that, you're correct that there aren't many available out of the box.
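For what it's worth, here's roughly what I mean - a minimal sketch assuming a YOLOv5-family .pt checkpoint and that the ultralytics/yolov5 repo is reachable via torch.hub (the paths, input size and opset are placeholders to adjust for your own model):

# Rough sketch only - assumes a YOLOv5-style .pt checkpoint.
import torch

weights = "custom-model.pt"             # placeholder path to your .pt file
model = torch.hub.load("ultralytics/yolov5", "custom", path=weights, autoshape=False)
model.eval()

dummy = torch.zeros(1, 3, 640, 640)     # one 640x640 RGB image, the usual YOLOv5 input size
torch.onnx.export(
    model, dummy, "custom-model.onnx",
    opset_version=12,
    input_names=["images"],
    output_names=["output"],
)

The yolov5 repo's own export.py script does essentially the same thing with a few more knobs, which is usually the less painful route.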
|
|
|
|
|
Yeah... Easier said than done. I think I spent about 8 hours trying to get them converted yesterday before posting.
|
|
|
|
|
Were you able to convert the model? It normally takes me less than 5 minutes to convert to ONNX. If you need help, let me know.
|
|
|
|
|
|
|
Hm, alright. I may try that one first if you've done it before. I'll keep you updated.
|
|
|
|
|
I was able to convert it, although I used the reference I mentioned. CodeProject fails to load the converted model, though. Shame.
|
|
|
|
|
Do you have a link to the model you're trying to convert so I can give it a try?
|
|
|
|
|
|
Is there, or will there be support for the Coral TPU on x86 hardware?
I have a Coral TPU (PCIe version) that I'm dying to use, but from what I can tell Coral support is only for the ARM64 docker image.
|
|
|
|
|
IIRC there was a comment/post about this. It seems like there's a weird Windows DLL issue that's preventing it from running properly (at least for the USB Coral; I don't recall seeing anything about other models, though I'd *think* it ought to work for all versions). Something like: the code sees it, but it won't respond. I'm waiting semi-patiently for PCIe Coral support myself.
|
|
|
|
|
Yup, PCIe support would be excellent. I've got a couple myself and will patiently wait for support.
At least then I could avoid having to use a GPU for detection, which consumes a lot of power 24/7!
|
|
|
|
|
Yeah - I *tried* to get my PCIe Coral working on my Pi... but zero success. Agreed on the GPU as well... it would cut ~30 W from my overall rack usage. That may not seem like much, but I've gone from ~1200 W 24/7 to around 450-500 W. If I can get below that 400 mark I'd be happy.
|
|
|
|
|
Did you try the Raspberry Pi Docker image? The TensorFlow-Lite module *should* just work with the Coral as PCIe, but I don't have a unit to test it on.
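If you want a quick sanity check that the card is even visible to the Coral runtime, something along these lines should do it (assumes the pycoral package and the PCIe gasket/apex driver are installed - I haven't been able to run this against a PCIe unit myself):

# Lists any Edge TPUs (USB or PCIe) the Coral runtime can see.
from pycoral.utils.edgetpu import list_edge_tpus

print(list_edge_tpus())   # expect something like [{'type': 'pci', 'path': '/dev/apex_0'}], or [] if nothing is found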
cheers
Chris Maunder
|
|
|
|
|
I did - but due to time constraints, with a million other things going on, I gave up and repurposed my Pi for something else. I'll just keep the PCIe Coral in my CP.AI machine with a Tesla T4 and wait anxiously-but-patiently until you get around to Coral support in Windows - DLLs be damned!
|
|
|
|
|
I haven't yet. I'm trying not to use my Docker machine for this, as I want it all to be on my Blue Iris server.
Any idea when the Windows exe will support Coral?
|
|
|
|
|
I could have sworn the TF-Lite Coral module was greyed-out/unavailable in previous versions... but in the 2.1.8 flavor on my x64 box, it showed up as installable. I haven't touched or tried it, since it's working with my GPU and I don't want to mess it up!
I was previously running CPAI on a 2 GB Pi via Docker, but with v2.1.4 it'd die after running for a while.
I'd love to get this going on Debian or an x64 Docker VM. And while I'd expect the PCIe version to be faster, Google's docs say the mPCIe version has the same "4 TOPS total peak performance (int8)" as the USB device.
|
|
|
|
|
I keep getting the following errors:
========================================================================
2023-05-09 19:43:43: ALPR:                 CodeProject.AI Installer
2023-05-09 19:43:43: ALPR: ========================================================================
2023-05-09 19:43:43: ALPR: CUDA Present...True
2023-05-09 19:43:43: ALPR: Allowing GPU Support: Yes
2023-05-09 19:43:43: ALPR: Allowing CUDA Support: Yes
2023-05-09 19:43:43: ALPR: General CodeProject.AI setup
2023-05-09 19:43:43: ALPR: Creating Directories...Done
2023-05-09 19:43:43: ALPR: Installing module ALPR
2023-05-09 19:43:43: ALPR: Installing python37 in C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37
2023-05-09 19:43:43: ALPR: Checking for python37 download...Present
2023-05-09 19:43:49: ALPR: Creating Virtual Environment...Done
2023-05-09 19:43:49: ALPR: Enabling our Virtual Environment...Done
2023-05-09 19:43:49: ALPR: Confirming we have Python 3.7...present
2023-05-09 19:43:50: ALPR: Ensuring Python package manager (pip) is installed...Done
2023-05-09 19:43:57: ALPR: Ensuring Python package manager (pip) is up to date...Done
2023-05-09 19:43:57: ALPR: Choosing Python packages from requirements.windows.cuda.txt
2023-05-09 19:44:01: ALPR: WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000021DF1C34748>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')': /whl/windows/mkl/avx/stable.html
2023-05-09 19:44:04: ALPR: WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000021DF1C90D08>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')': /whl/windows/mkl/avx/stable.html
2023-05-09 19:44:07: ALPR: WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000021DF1CB4788>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')': /whl/windows/mkl/avx/stable.html
2023-05-09 19:44:11: ALPR: WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000021DF1CB4F08>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')': /whl/windows/mkl/avx/stable.html
2023-05-09 19:44:17: ALPR: WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x0000021DF1CB5708>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it')': /whl/windows/mkl/avx/stable.html
2023-05-09 19:44:19: ALPR: ERROR: Could not find a version that satisfies the requirement paddlepaddle-gpu==2.3.2.post116 (from versions: 1.4.0, 1.4.1, 1.5.0.post87, 1.5.0.post97, 1.5.1.post87, 1.5.1.post97, 1.5.2.post87, 1.5.2.post97, 1.5.2.post107, 1.6.0.post107, 1.6.1.post97, 1.6.1.post107, 1.6.2.post97, 1.6.2.post107, 1.6.3.post97, 1.7.0.post97, 1.7.0.post107, 1.7.1.post97, 1.7.1.post107, 1.7.2.post97, 1.7.2.post107, 1.8.0.post97, 1.8.0.post107, 1.8.1.post97, 1.8.1.post107, 1.8.2.post97, 1.8.2.post107, 1.8.3.post97, 1.8.3.post107, 1.8.4.post97, 1.8.4.post107, 1.8.5.post97, 1.8.5.post107, 2.0.0a0, 2.0.0rc1, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.2.0rc0, 2.2.0, 2.2.1, 2.2.2, 2.3.0rc0, 2.3.0, 2.3.1, 2.3.2, 2.4.0rc0, 2.4.0, 2.4.1, 2.4.2, 2.5.0rc0)
2023-05-09 19:44:19: ALPR: ERROR: No matching distribution found for paddlepaddle-gpu==2.3.2.post116
2023-05-09 19:44:19: ALPR: Installing Packages into Virtual Environment...Success
2023-05-09 19:44:20: ALPR: Ensuring Python package manager (pip) is installed...Done
2023-05-09 19:44:21: ALPR: Ensuring Python package manager (pip) is up to date...Done
2023-05-09 19:44:21: ALPR: Choosing Python packages from requirements.txt
2023-05-09 19:44:26: ALPR: Installing Packages into Virtual Environment...Success
2023-05-09 19:44:26: ALPR: Applying patch for PaddlePaddle
2023-05-09 19:44:26: ALPR: The system cannot find the path specified.
2023-05-09 19:44:26: ALPR: 0 file(s) copied.
2023-05-09 19:44:26: ALPR: Downloading ALPR models...already exists...Expanding...Done.
2023-05-09 19:44:26: ALPR: Module setup complete
2023-05-09 19:44:26: Module ALPR installed successfully.
2023-05-09 19:44:26: GetCommandByRuntime: Runtime=python37, Location=Local
2023-05-09 19:44:26: Command: C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37\venv\scripts\Python
2023-05-09 19:44:26:
2023-05-09 19:44:26: Attempting to start ALPR with C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37\venv\scripts\Python "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py"
2023-05-09 19:44:26: Starting C:\Program Files...ws\python37\venv\scripts\Python "C:\Program Files...\modules\ALPR\ALPR_adapter.py"
2023-05-09 19:44:26:
2023-05-09 19:44:26: ** Module 'License Plate Reader' (ID: ALPR)
2023-05-09 19:44:26: ** Module Path: C:\Program Files\CodeProject\AI\modules\ALPR
2023-05-09 19:44:26: ** AutoStart: True
2023-05-09 19:44:26: ** Queue: alpr_queue
2023-05-09 19:44:26: ** Platforms: windows,linux,macos,macos-arm64
2023-05-09 19:44:26: ** GPU: Support enabled
2023-05-09 19:44:26: ** Parallelism: 0
2023-05-09 19:44:26: ** Accelerator:
2023-05-09 19:44:26: ** Half Precis.: enable
2023-05-09 19:44:26: ** Runtime: python37
2023-05-09 19:44:26: ** Runtime Loc: Local
2023-05-09 19:44:26: ** FilePath: ALPR_adapter.py
2023-05-09 19:44:26: ** Pre installed: False
2023-05-09 19:44:26: ** Start pause: 1 sec
2023-05-09 19:44:26: ** LogVerbosity:
2023-05-09 19:44:26: ** Valid: True
2023-05-09 19:44:26: ** Environment Variables
2023-05-09 19:44:26: ** AUTO_PLATE_ROTATE = True
2023-05-09 19:44:26: ** PLATE_CONFIDENCE = 0.7
2023-05-09 19:44:26: ** PLATE_RESCALE_FACTOR = 2
2023-05-09 19:44:26: ** PLATE_ROTATE_DEG = 0
2023-05-09 19:44:26:
2023-05-09 19:44:26: Started License Plate Reader module
2023-05-09 19:44:26: Installer exited with code 1
2023-05-09 19:44:27: ALPR_adapter.py: Traceback (most recent call last):
2023-05-09 19:44:27: ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py", line 16, in <module>
2023-05-09 19:44:27: ALPR_adapter.py: from ALPR import init_detect_platenumber, detect_platenumber
2023-05-09 19:44:27: ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR.py", line 8, in <module>
2023-05-09 19:44:27: ALPR_adapter.py: import utils.tools as tool
2023-05-09 19:44:27: ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\utils\tools.py", line 2, in <module>
2023-05-09 19:44:27: ALPR_adapter.py: import cv2
2023-05-09 19:44:27: ALPR_adapter.py: ModuleNotFoundError: No module named 'cv2'
2023-05-09 19:44:27: ** Module ALPR has shutdown
2023-05-09 19:44:27: ALPR_adapter.py: has exited
2023-05-09 19:44:27: Module ALPR started successfully.
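For reference, this is a quick way to check whether OpenCV actually made it into the ALPR venv (venv path copied from the log above; installing opencv-python from PyPI at the end is just a guessed workaround, not an official fix):

# Diagnostic sketch - venv path taken from the install log above.
import subprocess

venv_python = r"C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37\venv\Scripts\python.exe"

# Show what actually got installed into the ALPR virtual environment.
subprocess.run([venv_python, "-m", "pip", "list"], check=True)

# Guessed workaround: pull opencv-python straight from PyPI into that venv.
subprocess.run([venv_python, "-m", "pip", "install", "opencv-python"], check=True)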
|
|
|
|
|
Forgot to add:
Server version: 2.1.8-Beta
Operating System: Windows (Microsoft Windows 11 version 10.0.22621)
CPUs: Intel(R) Core(TM) i5-9500 CPU @ 3.00GHz
1 CPU x 6 cores. 6 logical processors (x64)
GPU: Tesla P4 (8 GiB) (NVidia)
Driver: 528.95 CUDA: 12.0 Compute: 6.1
System RAM: 16 GiB
Target: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.5
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 3.7 GiB
0
Global Environment variables:
CPAI_APPROOTPATH = C:\Program Files\CodeProject\AI
CPAI_PORT = 32168
|
|
|
|
|
I'm also getting the following errors when trying to install and/or copy the required dependencies over, after installing them with Anaconda:
10:33:13:Started License Plate Reader module
10:33:13:ALPR_adapter.py: Traceback (most recent call last):
10:33:13:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py", line 16, in <module>
10:33:13:ALPR_adapter.py: from ALPR import init_detect_platenumber, detect_platenumber
10:33:13:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR.py", line 8, in <module>
10:33:13:ALPR_adapter.py: import utils.tools as tool
10:33:13:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\utils\tools.py", line 2, in <module>
10:33:13:ALPR_adapter.py: import cv2
10:33:13:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37\venv\lib\site-packages\cv2\__init__.py", line 11, in <module>
10:33:13:ALPR_adapter.py: import numpy
10:33:13:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37\venv\lib\site-packages\numpy\__init__.py", line 125, in <module>
10:33:13:ALPR_adapter.py: from numpy.__config__ import show as show_config
10:33:13:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37\venv\lib\site-packages\numpy\__config__.py", line 12, in <module>
10:33:13:ALPR_adapter.py: os.add_dll_directory(extra_dll_dir)
10:33:13:ALPR_adapter.py: AttributeError: module 'os' has no attribute 'add_dll_directory'
|
|
|
|
|
Yes, this is a big problem with PaddlePaddle (what we use for ALPR), but we have a module update I'll try to get out today that fixes this. The servers that host PaddlePaddle are terribly unreliable. We've switched to PyPI.
As to your other post: I wouldn't mix Conda and our pip install. Conda does its own thing to work around the general issues with vanilla pip installs, so the setups may not always be compatible.
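In pip terms, "switched to PyPI" just means dropping the custom wheel index and letting pip resolve the package normally - roughly this (a sketch of the idea only, not the actual installer script, and the unpinned version is illustrative):

# Sketch only - not the real installer. Resolve paddlepaddle-gpu from the
# default PyPI index instead of the unreliable paddle wheel server.
import subprocess

venv_python = r"C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python37\venv\Scripts\python.exe"

subprocess.run(
    [venv_python, "-m", "pip", "install",
     "--index-url", "https://pypi.org/simple",
     "paddlepaddle-gpu"],                # illustrative; the module pins its own version
    check=True,
)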
cheers
Chris Maunder
|
|
|
|
|
Oddly enough, I got it all working (with Conda still installed) in a VERY weird, spit-balled, round-about way. I was able to get all the modules installed on a Windows system with NO CUDA/NVIDIA drivers, Conda, or Python, and then copied the ALPR module folder from the system it DID install on over to replace the folder on the system that refused to install it. I also made sure I have most - if not all - of the relevant folders in Windows' PATH environment variable.
|
|
|
|
|
I have been trying to register a face and it isn't working. Any ideas why?
Sys info:
(Forgot to say: I am running the Docker image "codeproject/ai-server:rpi64" latest, which at the time is rpi64-2.1.4.)
Server version: 2.1.4-Beta
Operating System: Linux (Linux 6.1.21-v8+ #1642 SMP PREEMPT Mon Apr 3 17:24:16 BST 2023)
CPUs: 1 CPU. (Arm64)
System RAM: 910 MiB
Target: Linux-Arm64
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
.NET framework: .NET 7.0.5
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Video adapter info:
Global Environment variables:
CPAI_APPROOTPATH = /app
CPAI_PORT = 32168
logs:
Face Processing: Retrieved faceprocessing_queue command
12:07:13:Face Processing: Queue request for Face Processing command 'list' (...a7eb35) took 163ms
12:07:17:face.py: Fusing layers...
12:07:41:Request 'detect' dequeued from 'objectdetection_queue' (...161f3a)
12:07:41:Client request 'detect' in queue 'objectdetection_queue' (...161f3a)
12:07:41:ObjectDetection (TF-Lite): Retrieved objectdetection_queue command
12:07:41:ObjectDetection (TF-Lite): Queue request for ObjectDetection (TF-Lite) command 'detect' (...161f3a) took 190ms
12:07:41:Response received (...161f3a): Found truck, car, car
12:07:42:Client request 'register' in queue 'faceprocessing_queue' (...512711)
12:07:52:Module FaceProcessing has shutdown
12:07:52:face.py: has exited
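For reference, this is roughly how the register call is being made (a sketch against the DeepStack-compatible API; the port comes from the sys info above and the userid/image are placeholders):

# Sketch of a face-register request. Port from the sys info above;
# the userid and image file are placeholders.
import requests

with open("my_face.jpg", "rb") as image:
    response = requests.post(
        "http://localhost:32168/v1/vision/face/register",
        files={"image": image},
        data={"userid": "some_person"},
    )

print(response.json())   # the success/error fields usually say what went wrong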
|
|
|
|
|
My Dell Optiplex 5050 has an on-board Intel 630 GPU. I have been trying to get CodeProject to "see" or use the GPU via DirectML with the YOLOv5 .NET module.
All that would work is CPU, on either YOLOv5 6.2 or YOLOv5 .NET.
System Info:
Server version: 2.1.8-Beta
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
1 CPU x 4 cores. 4 logical processors (x64)
System RAM: 16 GiB
Target: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
.NET framework: .NET 7.0.5
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
0
Global Environment variables:
CPAI_APPROOTPATH = C:\Program Files\CodeProject\AI
CPAI_PORT = 32168
Module Info:
Module 'Object Detection (YOLOv5 .NET)' (ID: ObjectDetectionNet)
Module Path: <root>\modules\ObjectDetectionNet
AutoStart: False
Queue: objectdetection_queue
Platforms: windows,linux,linux-arm64,macos,macos-arm64
GPU: Support enabled
Parallelism: 0
Accelerator:
Half Precis.: enable
Runtime: execute
Runtime Loc: Shared
FilePath: ObjectDetectionNet.exe
Pre installed: False
Start pause: 1 sec
LogVerbosity:
Valid: True
Environment Variables
CUSTOM_MODELS_DIR = %CURRENT_MODULE_PATH%\custom-models
MODELS_DIR = %CURRENT_MODULE_PATH%\assets
MODEL_SIZE = MEDIUM
Started:
LastSeen:
Status: NotEnabled
Processed: 0
Provider:
HardwareType: CPU
I have tried uninstalling CodeProject, deleting the Program Files and ProgramData folders, and reinstalling the application. Lately (on the last install), if I go to the CodeProject.AI Explorer and try to Detect Objects, I get the result "Yolo returned null".
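To rule the Explorer out, the endpoint can also be hit directly - a rough sketch, with the port from the system info above and a placeholder image path:

# Rough sketch: call the detection endpoint directly, bypassing the Explorer.
import requests

with open("test.jpg", "rb") as image:
    response = requests.post(
        "http://localhost:32168/v1/vision/detection",
        files={"image": image},
        data={"min_confidence": 0.4},
    )

print(response.json())   # check 'success', 'predictions' and any 'error' message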
Also, while Blue Iris was pointed at localhost port 32168, I got many results of "command completed" in 0 seconds.
NOTE: During this time, I did see that Object Detection (YOLOv5 .NET) showed GPU (DirectML), not CPU.
So I looked at the file structure of both ObjectDetectionNet and ObjectDetectionYolo, and I do not see custom models - either .pt or .onnx files.
So I tried to reinstall the YOLOv5 .NET module.
I deleted the module and then tried to install it again. The install apparently downloaded and expanded, then timed out.
Last info from the CPAI server log:
09:31:14:Sending shutdown request to ObjectDetectionNet/ObjectDetectionNet
09:31:14:ObjectDetectionNet.exe: at CodeProject.AI.Modules.ObjectDetection.Yolo.ObjectDetectionWorker.ProcessRequest(BackendRequest request)Unable to get request from objectdetection_queue for ObjectDetectionNet
09:31:14:ObjectDetectionNet.exe: Unable to get request from objectdetection_queue for ObjectDetectionNet
09:31:14:ObjectDetectionNet.exe: Unable to get request from objectdetection_queue for ObjectDetectionNet
09:31:14:ObjectDetectionNet.exe: Shutdown signal received. Ending loop
09:31:47:Forcing shutdown of ObjectDetectionNet/ObjectDetectionNet
09:31:47:Module ObjectDetectionNet has shutdown
09:31:47:ObjectDetectionNet.exe: has exited
09:31:47:Will wait a moment: sometimes a delete just needs time to complete
09:31:47:Unable to delete install folder for ObjectDetectionNet (Access to the path 'DirectML.dll' is denied.)
09:31:47:Unable to delete install folder for ObjectDetectionNet
09:31:58:ObjectDetectionNet doesn't appear in the Process list, so can't stop it.
09:32:07:Preparing to install module 'ObjectDetectionNet'
09:32:07:Downloading module 'ObjectDetectionNet'
09:32:07:Installing module 'ObjectDetectionNet'
09:32:10:ObjectDetectionNet: Installing CodeProject.AI Analysis Module
09:32:10:ObjectDetectionNet: ========================================================================
09:32:10:ObjectDetectionNet: CodeProject.AI Installer
09:32:10:ObjectDetectionNet: ========================================================================
09:32:10:ObjectDetectionNet: CUDA Present...False
09:32:10:ObjectDetectionNet: Allowing GPU Support: Yes
09:32:10:ObjectDetectionNet: Allowing CUDA Support: No
09:32:10:ObjectDetectionNet: General CodeProject.AI setup
09:32:10:ObjectDetectionNet: Creating Directories...Done
09:32:10:ObjectDetectionNet: Installing module ObjectDetectionNet
09:32:11:ObjectDetectionNet: Downloading ObjectDetectionNet module...already exists...Expanding...Done.
09:42:09:Module ObjectDetectionNet installed successfully.
09:42:09:Module ObjectDetectionNet not configured to AutoStart.
09:42:09:ObjectDetectionNet: Downloading YOLO ONNX models...
09:42:09:Timed out attempting to install Module 'ObjectDetectionNet' ($A task was canceled.)
Oh, I see now. Apparently the ONNX models are failing to download.
If I place them manually, where do they belong? I have them saved from earlier versions I used. The folder structure looks different from before.
I am currently back to using Deepstack on my Jetson Nano, which is functioning fine with Blue Iris.
BTW, using custom models with Deepstack on an external machine has gotten to be very tricky due to the latest changes Ken made to tie Blue Iris to CodeProject.AI, but it can be done.
Thanks;
Steve
|
|
|
|
|
Hi, I had the same problem with 2.1.6 and tried the same steps you have taken, until I found out that the problem was solved after I selected "use custom models" and "use GPU" in the Blue Iris Settings > AI tab.
|
|
|
|
|