IF YOU ARE HAVING A PROBLEM
- Take a look at the logs in C:\Program Files\CodeProject\AI\logs and see if there's anything in there that screams 'something broke'.
- Check the FAQs in the CodeProject.AI Server documentation
- Make sure you've tested the server using the Explorer (blue link, top middle of the dashboard) to confirm it's a server issue rather than an issue with something else, such as Blue Iris or another app that uses CodeProject.AI Server.
- If there's no obvious answer, copy and paste the contents of the System Info tab into a message, describe what you are doing, what you see, and what you would expect to see.
Always include a copy and paste from the System Info tab of the dashboard. It gives us a ton of info on your setup. If an individual module is failing, click the 'Info' button to the right of the module's name in the status list and copy and paste that info too.
How to reinstall a module
Option 1: Go to the Install Modules tab on the dashboard and try re-installing the package. Make sure you have enough disk space and a reliable internet connection.
Option 2 (Option 1 with a vengeance): If that fails, head to the module's folder ([app root]\modules\module-id), open a terminal in admin mode, and run ..\..\setup. This will force a manual reinstall using the install script.
Docker: In Docker you will need to open a terminal into the Docker container. You can do this using Docker Desktop, Visual Studio Code with the Docker remote extension, or on the command line using docker attach. Then do a cd /app/modules/module-id, where module-id is the ID of the module you need to reinstall. Next, run sudo bash ../../setup.sh --verbosity info to force a manual reinstall of that module. (Set verbosity to quiet, info or loud to get less or more info.)
cheers
Chris Maunder
modified 18-Feb-24 15:48pm.
|
If you are a Blue Iris user and you are using custom models, you may have noticed that the option in Blue Iris to set the custom model location is greyed out. This is because Blue Iris does not currently make changes to CodeProject.AI Server's settings. Changing the location can be done by manually starting CodeProject.AI Server with command line parameters (not a great solution), by editing the module settings files (a little messy), or by setting system-wide environment variables (way easier). For version 1.6 we added an API that allows any app to change our settings programmatically, and we take care of stopping/restarting things and persisting the changes.
So: Blue Iris doesn't currently change CodeProject.AI Server's settings, so it doesn't provide you a way to change the custom model folder location from within Blue Iris.
Blue Iris will still use the contents of this folder to determine the calls it makes. If you don't specify a model to use in the Custom Models textbox, then Blue Iris will use all models in the custom models folder that it knows about.
Here we've specified a single model to use. The Blue Iris help file explains more about how this works, including inclusive and exclusive filters on the models it finds.
CodeProject.AI Server doesn't know about Blue Iris' folder, so it can't tell what models it may be expected to use, nor can it tell Blue Iris about what models CodeProject.AI server has available. Our API allows Blue Iris to get a list of the AI models installed with CodeProject.AI Server, and also to set the folder where these models reside. But Blue Iris doesn't, yet, use that API.
So we do a hack.
At install time we sniff the registry to find where Blue Iris expects the custom models to be. We then create empty copies of the models we have and place them in that folder. If the folder doesn't exist (e.g. you were using C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\assets, which no longer exists) then we create that folder, and then copy over the empty files.
When Blue Iris looks in that folder to decide what custom calls it can make, it sees the models, notes their names, and uses those names in the calls. CodeProject.AI Server has those models, so when the calls come through we can process them.
Blue Iris doesn't use the models. It uses the list of model names.
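A rough sketch of that hack in Python (the paths, the .pt extension, and the model names below are illustrative, not the installer's actual code):

```python
import tempfile
from pathlib import Path

def mirror_model_names(server_models_dir: Path, blue_iris_dir: Path) -> list:
    """Copy only the *names* of the server's models into Blue Iris' custom
    model folder, as zero-byte placeholder files, so Blue Iris can list the
    model names without needing the actual weights."""
    blue_iris_dir.mkdir(parents=True, exist_ok=True)   # recreate the folder if missing
    mirrored = []
    for model in server_models_dir.glob("*.pt"):
        (blue_iris_dir / model.name).touch()           # zero-byte placeholder
        mirrored.append(model.name)
    return sorted(mirrored)

# Demo with temporary folders standing in for the real paths
src = Path(tempfile.mkdtemp())
dst = Path(tempfile.mkdtemp()) / "custom-models"
(src / "ipcam-general.pt").write_bytes(b"fake weights")
(src / "license-plate.pt").write_bytes(b"fake weights")

names = mirror_model_names(src, dst)
print(names)                                       # ['ipcam-general.pt', 'license-plate.pt']
print((dst / "ipcam-general.pt").stat().st_size)   # 0
```

Blue Iris then reads the file names from that folder; the actual weights only ever live on the CodeProject.AI Server side.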
If you have your own models in the Blue Iris folder
You will need to copy them to the CodeProject.AI Server's custom model folder (by default this is C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models).
If you've modified the registry and have your own custom models
If you were using a folder in C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\ (which no longer existed after the upgrade, but was recreated by our hack) you'll need to re-copy your custom model into that folder.
The simplest solutions are:
- Modify the registry (Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software\Blue Iris\Options\AI, key 'deepstack_custompath') so Blue Iris looks in C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models for custom models, and copy your models into there.
or
- Modify the C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\modulesettings.json file and set CUSTOM_MODELS_DIR to whatever Blue Iris thinks the custom model folder is.
cheers
Chris Maunder
|
I've installed the latest version of AI Server and I've been having issues with it. The service stopped working and would not load manually, so I removed AI Server and deleted the ProgramData and Program Files folders. I reinstalled the server and now I'm getting another error with aiohttp. I installed aiohttp via the Python CLI, but that didn't seem to help.
15:00:11:System: Windows
15:00:11:Operating System: Windows (Microsoft Windows 11 version 10.0.22631)
15:00:11:CPUs: AMD Ryzen 7 7800X3D 8-Core Processor (AMD)
15:00:11: 1 CPU x 8 cores. 16 logical processors (x64)
15:00:11:GPU (Primary): NVIDIA GeForce RTX 4090 (24 GiB) (NVIDIA)
15:00:11: Driver: 551.86, CUDA: 12.4 (up to: 12.4), Compute: 8.9, cuDNN: 8.5
15:00:11:System RAM: 63 GiB
15:00:11:Platform: Windows
15:00:11:BuildConfig: Release
15:00:11:Execution Env: Native
15:00:11:Runtime Env: Production
15:00:11:Runtimes installed:
15:00:11: .NET runtime: 8.0.2
15:00:11: .NET SDK: Not found
15:00:11: Default Python: 3.12.2
15:00:11: Go: Not found
15:00:11: NodeJS: Not found
15:00:11: Rust: Not found
15:00:11:App DataDir: C:\ProgramData\CodeProject\AI
15:00:11:Video adapter info:
15:00:11: AMD Radeon(TM) Graphics:
15:00:11: Driver Version 31.0.24002.92
15:00:11: Video Processor AMD Radeon Graphics Processor (0x164E)
15:00:11: NVIDIA GeForce RTX 4090:
15:00:11: Driver Version 31.0.15.5186
15:00:11: Video Processor NVIDIA GeForce RTX 4090
15:00:11:STARTING CODEPROJECT.AI SERVER
15:00:11:RUNTIMES_PATH = C:\Program Files\CodeProject\AI\runtimes
15:00:11:PREINSTALLED_MODULES_PATH = C:\Program Files\CodeProject\AI\preinstalled-modules
15:00:11:DEMO_MODULES_PATH = C:\Program Files\CodeProject\AI\demos\modules
15:00:11:MODULES_PATH = C:\Program Files\CodeProject\AI\modules
15:00:11:PYTHON_PATH = \bin\windows\%PYTHON_NAME%\venv\Scripts\python
15:00:11:Data Dir = C:\ProgramData\CodeProject\AI
15:00:11:Server version: 2.6.2
15:00:14:
15:00:14:Module 'Object Detection (YOLOv5 6.2)' 1.9.1 (ID: ObjectDetectionYOLOv5-6.2)
15:00:14:Valid: True
15:00:14:Module Path: <root>\modules\ObjectDetectionYOLOv5-6.2
15:00:14:AutoStart: True
15:00:14:Queue: objectdetection_queue
15:00:14:Runtime: python3.7
15:00:14:Runtime Loc: Shared
15:00:14:FilePath: detect_adapter.py
15:00:14:Start pause: 1 sec
15:00:14:Parallelism: 0
15:00:14:LogVerbosity:
15:00:14:Platforms: all,!raspberrypi,!jetson
15:00:14:GPU Libraries: installed if available
15:00:14:GPU Enabled: enabled
15:00:14:Accelerator:
15:00:15:Half Precis.: enable
15:00:15:Environment Variables
15:00:15:APPDIR = <root>\modules\ObjectDetectionYOLOv5-6.2
15:00:15:CPAI_MODULE_ENABLE_GPU = True
15:00:15:CUSTOM_MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\custom-models
15:00:15:MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\assets
15:00:15:MODEL_SIZE = Medium
15:00:15:USE_CUDA = True
15:00:15:YOLOv5_AUTOINSTALL = false
15:00:15:YOLOv5_VERBOSE = false
15:00:15:
15:00:15:Started Object Detection (YOLOv5 6.2) module
15:00:15:detect_adapter.py: Traceback (most recent call last):
15:00:15:detect_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect_adapter.py", line 13, in
15:00:15:detect_adapter.py: from module_runner import ModuleRunner
15:00:15:detect_adapter.py: File "../../SDK/Python\module_runner.py", line 30, in
15:00:15:detect_adapter.py: import aiohttp
15:00:15:detect_adapter.py: ModuleNotFoundError: No module named 'aiohttp'
15:00:15:Module ObjectDetectionYOLOv5-6.2 has shutdown
15:00:15:detect_adapter.py: has exited
15:00:16:Server: This is the latest version
15:02:55:
15:02:55:Module 'Object Detection (YOLOv5 6.2)' 1.9.1 (ID: ObjectDetectionYOLOv5-6.2)
15:02:55:Valid: True
15:02:55:Module Path: <root>\modules\ObjectDetectionYOLOv5-6.2
15:02:55:AutoStart: True
15:02:55:Queue: objectdetection_queue
15:02:55:Runtime: python3.7
15:02:55:Runtime Loc: Shared
15:02:55:FilePath: detect_adapter.py
15:02:55:Start pause: 1 sec
15:02:55:Parallelism: 0
15:02:55:LogVerbosity:
15:02:55:Platforms: all,!raspberrypi,!jetson
15:02:55:GPU Libraries: installed if available
15:02:55:GPU Enabled: enabled
15:02:55:Accelerator:
15:02:55:Half Precis.: enable
15:02:55:Environment Variables
15:02:55:APPDIR = <root>\modules\ObjectDetectionYOLOv5-6.2
15:02:55:CPAI_MODULE_ENABLE_GPU = True
15:02:55:CUSTOM_MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\custom-models
15:02:55:MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\assets
15:02:55:MODEL_SIZE = Medium
15:02:55:USE_CUDA = True
15:02:55:YOLOv5_AUTOINSTALL = false
15:02:55:YOLOv5_VERBOSE = false
15:02:55:
15:02:55:Started Object Detection (YOLOv5 6.2) module
15:02:55:detect_adapter.py: Traceback (most recent call last):
15:02:55:detect_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect_adapter.py", line 13, in
15:02:55:detect_adapter.py: from module_runner import ModuleRunner
15:02:55:detect_adapter.py: File "../../SDK/Python\module_runner.py", line 30, in
15:02:55:detect_adapter.py: import aiohttp
15:02:55:detect_adapter.py: ModuleNotFoundError: No module named 'aiohttp'
15:02:55:Module ObjectDetectionYOLOv5-6.2 has shutdown
15:02:55:detect_adapter.py: has exited
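The usual cause of this error is that pip installed aiohttp into the system Python rather than into the module's own virtual environment. A quick way to check is to run a snippet like this with the suspect interpreter (e.g. the venv's python.exe; the exact venv location varies by install):

```python
import importlib.util
import sys

def module_visible(name: str) -> bool:
    """True if 'name' can be imported by the interpreter running this script."""
    return importlib.util.find_spec(name) is not None

# Shows which Python is actually running, and whether it can see aiohttp
print("Interpreter:    ", sys.executable)
print("aiohttp visible:", module_visible("aiohttp"))
```

If this prints False for the venv's interpreter, install into that venv directly (that python.exe -m pip install aiohttp) rather than into the system Python, or re-run the module's setup script as described in the pinned message.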
|
I'm having the same issue. I just did an install and I'm getting the same error. Here is the log file:
20:51:49:Update ObjectDetectionYOLOv5-6.2. Setting Restart=now
20:51:49:
20:51:49:Module 'Object Detection (YOLOv5 6.2)' 1.9.1 (ID: ObjectDetectionYOLOv5-6.2)
20:51:49:Valid: True
20:51:49:Module Path: <root>\modules\ObjectDetectionYOLOv5-6.2
20:51:49:AutoStart: True
20:51:49:Queue: objectdetection_queue
20:51:49:Runtime: python3.7
20:51:49:Runtime Loc: Shared
20:51:49:FilePath: detect_adapter.py
20:51:49:Start pause: 1 sec
20:51:49:Parallelism: 0
20:51:49:LogVerbosity:
20:51:49:Platforms: all,!raspberrypi,!jetson
20:51:49:GPU Libraries: installed if available
20:51:49:GPU Enabled: enabled
20:51:49:Accelerator:
20:51:49:Half Precis.: enable
20:51:49:Environment Variables
20:51:49:APPDIR = <root>\modules\ObjectDetectionYOLOv5-6.2
20:51:49:CUSTOM_MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\custom-models
20:51:49:MODELS_DIR = <root>\modules\ObjectDetectionYOLOv5-6.2\assets
20:51:49:MODEL_SIZE = Medium
20:51:49:USE_CUDA = True
20:51:49:YOLOv5_AUTOINSTALL = false
20:51:49:YOLOv5_VERBOSE = false
20:51:49:
20:51:49:Started Object Detection (YOLOv5 6.2) module
20:51:50:detect_adapter.py: Traceback (most recent call last):
20:51:50:detect_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect_adapter.py", line 13, in
20:51:50:detect_adapter.py: from module_runner import ModuleRunner
20:51:50:detect_adapter.py: File "../../SDK/Python\module_runner.py", line 30, in
20:51:50:detect_adapter.py: import aiohttp
20:51:50:detect_adapter.py: ModuleNotFoundError: No module named 'aiohttp'
20:51:50:Module ObjectDetectionYOLOv5-6.2 has shutdown
20:51:50:detect_adapter.py: has exited
|
My dream is to create a custom AI model to identify make/model/color of vehicles.
I have zero AI experience.
I installed CodeProject.AI and tested the built-in modules; they work well.
Thought I would follow 'How to Train a Custom YOLOv5 Model to Detect Objects' which is a tutorial on this site.
No luck; I can't even get to the training part. It crashes complaining a sample has no detections, and having no experience with this I can't figure out what the issue is.
Is this tutorial outdated? Should it still work? Can someone suggest a tutorial that actually functions?
|
This is a demo I've been wanting to write for months and months. The short version: use a car make/model AI model as a custom model (e.g. this YOLO model) to get the make/model. I would then also run image segmentation (available in our current YOLOv8 module) to get the polygonal outline of the car, then run a quick image analysis to get a histogram of the colours in the cutout of the car. Choose the most prominent colour and you're done.
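The colour-histogram step described above could be sketched like this (a toy example on raw RGB tuples; in practice the pixels would come from the segmentation cutout, and the bucket size is just a guess):

```python
from collections import Counter

def dominant_colour(pixels, bucket=64):
    """Return the most prominent colour among (r, g, b) pixels.
    Channels are bucketed coarsely (64 levels here) so near-identical
    shades are counted together."""
    counts = Counter(
        (r // bucket * bucket, g // bucket * bucket, b // bucket * bucket)
        for (r, g, b) in pixels
    )
    return counts.most_common(1)[0][0]

# Toy "cutout": mostly dark red pixels with a few grey ones
pixels = [(200, 10, 20)] * 8 + [(210, 15, 25)] * 4 + [(120, 120, 120)] * 3
print(dominant_colour(pixels))  # (192, 0, 0)
```

Mapping the bucketed RGB value back to a colour name ("red", "grey", etc.) is then a simple nearest-neighbour lookup against a small palette.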
cheers
Chris Maunder
|
Can someone help troubleshoot this error? The short of it: when I try to use CodeProject.AI with my BI system's GPU (Intel iGPU), this error comes up in the logs:
ObjectDetectionYOLOv5Net.exe: 2024-05-02 18:31:29.4590658 [E:onnxruntime:, inference_session.cc:1799 onnxruntime::InferenceSession::Initialize::::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\DmlGraphFusionHelper.cpp(432)\onnxruntime.DLL!00007FFE2712F11B: (caller: 00007FFE270B44E6) Exception(3) tid(2154) 80004005 Unspecified error
If I downgrade to an older version (I believe older than CodeProject.AI 2.3.4) I am able to use the GPU. Blue Iris support basically told me to come here for assistance. Any help with this is appreciated!
|
We absolutely need your system info (see pinned message) so we can start to suggest solutions.
cheers
Chris Maunder
|
Hi Chris,
Here is what you asked for. One thing I didn't think about before: according to this, CP.AI thinks my primary GPU is the Microsoft Remote Display Adapter. I use RDP to manage my BI machine, but I'm not sure if this could be an issue:
Server version: 2.6.2
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz (Intel)
1 CPU x 2 cores. 4 logical processors (x64)
GPU (Primary): Microsoft Remote Display Adapter (Microsoft)
Driver: 10.0.19041.4355
System RAM: 8 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.18
.NET SDK: 7.0.408
Default Python: Not found
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Microsoft Remote Display Adapter:
Driver Version 10.0.19041.4355
Video Processor
Intel(R) HD Graphics 4400:
Driver Version 20.19.15.5063
Video Processor Intel(R) HD Graphics Family
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 52 KiB
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
Hi,
I recently set up CodeProject.AI to work with Blue Iris. Blue Iris is running on Windows 11 in a Proxmox VM.
I have successfully passed through a Coral USB to the VM and CPAI. Everything seems to work fine for about 10 minutes, then CPAI seems to revert to CPU. The Coral is still present in Device Manager. I've turned off all USB power management in Windows, to no avail.
Any Suggestions would be greatly appreciated.
|
Forgive me if I missed this being posted somewhere already.
I want to set up a central AI server in our data center, and have our developers direct their projects to that central server for testing. When I test against the machine's IP address on port 32168, I get no connection.
http://[machine ip]:32168/
Seems simple but I'm missing something.
A similar question was posted with no answer. "how to connect a Blue Iris machine to another machine running CodeProject AI?"
This is not a Blue Iris question, just a reference above.
modified yesterday.
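A minimal reachability check from a developer machine (32168 is the server's default port; the host below is a placeholder):

```python
import socket

def can_reach(host: str, port: int = 32168, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout.
    Distinguishes firewall/routing problems from server-side errors."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("127.0.0.1", 32168))
```

If this returns False from a developer machine but True on the server itself, the problem is between the two boxes (firewall, routing, or the server binding only to localhost) rather than CodeProject.AI Server.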
|
Well, assuming successful installation, that should work.
In my case, 192.168.50.17:32168 connects to a Linux (Debian) system running AI server in a Docker container.
If you are doing an install to Windows, did you use the script?
Did you get any errors during install? Any errors in system logs?
>64
It’s weird being the same age as old people. Live every day like it is your last; one day, it will be.
|
Have you opened port 32168 for HTTP (in and out) on your server's firewall (and possibly also on your developer's firewalls?)
cheers
Chris Maunder
|
Thank you, I did not take the time to trace this out.
I found a second layer that was blocking the port.
Juggling too many things at once.
Thank you all for the support.
|
Is it better to run CodeProject.AI on the same Windows PC that Blue Iris is running on, or on a virtual machine running Docker?
|
Probably depends on which machine has the better GPU, if you are using one. We run CPAI on a VM (Debian with Docker) because the system hosting the virtual machine has the better video card (and Linux is a little leaner). You do have to have a virtual host that allows PCI pass-through to use the video card; we run on ESXi. In the earlier days, it also seemed to be easier when doing CPAI updates. If our BI system had a better video card, I would run "native".
Just my $0.02.
>64
It’s weird being the same age as old people. Live every day like it is your last; one day, it will be.
|
The PC I am using has an older GPU, a GTX 960, and I was told I cannot use CUDA since my GPU does not support it. So I've only been using YOLOv5 .NET.
I have an AMD Ryzen 5700G 8-core with 16 GB of 3600 MHz RAM. Would I benefit from running CP on a virtual machine?
|
Again, if the VM is on another PC that has better performance, it could be of benefit. Keep in mind there is overhead in the networking.
It is pretty easy to set up the VM and do a test. It is only a small configuration change in the BI system once you work out the Docker setup. Then look at the ms alert times.
>64
It’s weird being the same age as old people. Live every day like it is your last; one day, it will be.
|
What would be an acceptable speed for alerts?
Just want to know what is considered normal or too slow.
Thanks.
|
I consider mine mediocre at best: 60-80 msec. But I run an old, low-memory video card, a P620 with only 2 GB.
That seems to do the job, mostly we filter for false alerts due to shadows.
We plan to upgrade it although we only use AI on 5 of 14 cameras.
>64
It’s weird being the same age as old people. Live every day like it is your last; one day, it will be.
|
Maybe I'm reading the Status logs wrong, but mine only shows ms when it doesn't find anything.
When it does detect and trigger, there is no ms at all. Is this normal?
For example,
As you can see, it detected a person at 88%, but no ms... then below, it found nothing and the alert was cancelled, but it shows 238ms.
|
On the Alert page, select "save AI analysis details."
Then open the log file (make sure you've also selected "save to file").
>64
It’s weird being the same age as old people. Live every day like it is your last; one day, it will be.
|
It shows person 92% at 236ms. That seems very slow, then. I am using Nvidia for HA (hardware acceleration) since I have the GTX 960 card, but it appears it's not doing anything.
Something has to be wrong with my settings or something. Isn't using the graphics card for HA supposed to help alerts be faster?
|
This is a Blue Iris issue, send an email to Blue Iris Support describing the issue.
|
Hello everyone,
I recently got an M.2 Coral device and successfully ran it using the default settings in CP.AI. I wanted to try some other models, so I attempted to use YOLOv8.
I clicked on "download model" and very quickly got this:
Preparing to download model 'objectdetection-yolov8-medium-edgetpu.zip' for module ObjectDetectionCoral
Downloading module 'objectdetection-yolov8-medium-edgetpu.zip' to 'C:\Program Files\CodeProject\AI\downloads\modules\ObjectDetectionCoral\objectdetection-yolov8-medium-edgetpu.zip'
(using cached download for 'objectdetection-yolov8-medium-edgetpu.zip')
objectdetection-yolov8-medium-edgetpu.zip has been downloaded and installed.
Since the file had already been downloaded into that directory beforehand by a fresh install, when I have it "attempt to download" it just erases it. It seems that no install is actually done, because when I attempt to use the model I get this error:
objectdetection_coral_adapter.py: ERROR:root:TFLite file C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\assets\yolov8m__segment_0_of_2_edgetpu.tflite doesn't exist
objectdetection_coral_adapter.py: WARNING:root:Model file not found: [Errno 2] No such file or directory: 'C:\\Program Files\\CodeProject\\AI\\modules\\ObjectDetectionCoral\\assets\\yolov8m__segment_0_of_2_edgetpu.tflite'
objectdetection_coral_adapter.py: WARNING:root:No Coral TPUs found or able to be initialized. Using CPU.
objectdetection_coral_adapter.py: ERROR:root:TFLite file C:\Program Files\CodeProject\AI\modules\ObjectDetectionCoral\assets\yolov8m-416_640px.tflite doesn't exist
objectdetection_coral_adapter.py: WARNING:root:Unable to create interpreter for CPU using edgeTPU library: [Errno 2] No such file or directory: 'C:\\Program Files\\CodeProject\\AI\\modules\\ObjectDetectionCoral\\assets\\yolov8m-416_640px.tflite'
objectdetection_coral_adapter.py: TPU detected
Then CP.AI defaults to CPU and the TPU isn't used.
YOLOv8 is also not displayed in green in the Download Models drop-down menu (which I assume means it's not installed); just MobileNet Large, Medium, Small and Tiny.
So it seems that CP.AI isn't actually installing the model, and there is in fact no yolov8m__segment_0_of_2_edgetpu.tflite file in any directory or .zip.
|