If you are a Blue Iris user working with custom models, you may have noticed that the option in Blue Iris to set the custom model location is greyed out. This is because Blue Iris does not currently make changes to CodeProject.AI Server's settings. Those settings can be changed by manually starting CodeProject.AI Server with command line parameters (not a great solution), editing the module settings files (a little messy), or setting system-wide environment variables (way easier). In version 1.6 we added an API that allows any app to change our settings programmatically, and we take care of stopping/restarting things and persisting the changes.
So: because Blue Iris doesn't currently change CodeProject.AI Server's settings, it doesn't provide a way to change the custom model folder location from within Blue Iris.
Blue Iris will still use the contents of this folder to determine the calls it makes. If you don't specify a model to use in the Custom Models textbox, then Blue Iris will use all models in the custom models folder that it knows about.

Here we've specified a particular model to use. The Blue Iris help file explains in more detail how this works, including inclusive and exclusive filters on the models it finds.
CodeProject.AI Server doesn't know about Blue Iris's folder, so it can't tell which models it may be expected to use, nor can it tell Blue Iris which models CodeProject.AI Server has available. Our API allows Blue Iris to get a list of the AI models installed with CodeProject.AI Server, and also to set the folder where these models reside, but Blue Iris doesn't yet use that API.
So we do a hack.
At install time we sniff the registry to find where Blue Iris thinks the custom models should be. We then make empty copies of the models that we have and copy them into that folder. If the folder doesn't exist (e.g. you were using C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\assets, which no longer exists) then we create that folder and copy over the empty files.
When Blue Iris looks in that folder to decide what custom calls it can make, it sees the models, notes their names, and uses those names in the calls. CodeProject.AI Server has those models, so when the calls come through we can process them.
Blue Iris doesn't use the models. It uses the list of model names.
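The effect of that hack is roughly the following (a minimal PowerShell sketch, not the actual installer code; the Blue Iris folder path and the *.pt filter are assumptions for illustration):

# Create zero-byte placeholder files in Blue Iris's custom model folder, one per
# model that CodeProject.AI Server actually has, so Blue Iris picks up the names.
$serverModelDir    = 'C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models'
$blueIrisCustomDir = 'C:\BlueIris\AI'   # hypothetical: whatever the registry says Blue Iris uses
if (-not (Test-Path $blueIrisCustomDir)) { New-Item -ItemType Directory -Path $blueIrisCustomDir | Out-Null }
Get-ChildItem $serverModelDir -Filter *.pt | ForEach-Object {
    New-Item -ItemType File -Path (Join-Path $blueIrisCustomDir $_.Name) -Force | Out-Null
}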
If you have your own models in the Blue Iris folder
You will need to copy them to CodeProject.AI Server's custom model folder (by default this is C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models)
If you've modified the registry and have your own custom models
If you were using a folder in C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\ (which no longer existed after the upgrade, but was recreated by our hack) you'll need to re-copy your custom model into that folder.
The simplest solutions are:
- Modify the registry (Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software\Blue Iris\Options\AI, value 'deepstack_custompath') so Blue Iris looks in C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models for custom models, and copy your models into there (see the sketch after this list)
or
- Modify the C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\modulesettings.json file and set CUSTOM_MODELS_DIR to whatever Blue Iris thinks the custom model folder is.
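If you take the registry route, something along these lines will do it from an elevated PowerShell prompt (a sketch only; the source folder for your own models is hypothetical):

# Point Blue Iris's custom model path at CodeProject.AI Server's folder...
$target = 'C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models'
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Perspective Software\Blue Iris\Options\AI' -Name 'deepstack_custompath' -Value $target
# ...and copy your own models into that folder (source path is just an example)
Copy-Item 'C:\MyModels\*.pt' -Destination $target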
cheers
Chris Maunder
|
If you've come across an issue in building, installing, running or configuring CodeProject.AI Server we're here to help. We just ask that you provide enough info for us to dig in quickly.
Please include:
Environment:
- What version of CodeProject.AI Server are you using?
- What operating system? (Include the Windows version, or if Docker, just 'Docker'.)
- Are you using a GPU? If so:
  - What brand / model of GPU?
  - What driver version?
  - If the card is Nvidia, what version of CUDA is installed?
Some tips:
- In the root directory of CodeProject.AI Server is a logs/ directory. Take a look in there to see if you spot any logs that might be worth including in your post (remove personal info though!)
- Have you changed any settings? If so, let us know.
- We can usually only help with questions around CodeProject.AI Server. Questions about third party apps are usually outside our scope, so please keep the focus on CodeProject.AI Server.
cheers
Chris Maunder
|
Blue Iris: 5.7.3.0
CodeProject AI: 2.0.8-Beta
CPU: i9-13900K
RAM: 64GB DDR5
Storage: 512GB NVME SSD/20TB HDD
GPU: NVIDIA GeForce 2080ti
Benchmark: 64.6 OP/s @ pexels-thirdman-7652055
The errors are returned to Blue Iris and show up in my Alerts, so I am trying to figure out how to debug this. The logs only have queue activity, nothing about errors or failures.
Separately, with previous installations of CodeProject.AI it used to require you to hunt down cuDNN/CUDA. Is that no longer required? I don't see it installing anywhere, or the binaries located anywhere on the machine.
|
If you test CodeProject.AI server by opening the dashboard (localhost:32168) and then click "Explorer" to play with some images, do you see any errors?
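If you'd rather test from the command line, you can post an image straight to the detection endpoint (a sketch assuming the default port and the DeepStack-compatible route; the image path is just an example):

# PowerShell 6.1+ (Invoke-RestMethod supports -Form for file uploads)
Invoke-RestMethod -Method Post -Uri 'http://localhost:32168/v1/vision/detection' -Form @{ image = Get-Item 'C:\test\street.jpg' }
# or, with curl: curl.exe -X POST -F "image=@C:\test\street.jpg" http://localhost:32168/v1/vision/detection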
cheers
Chris Maunder
|
Installed Docker O.K. and tested it. It does work.
Uploaded the CodeProject.AI image. No errors.
The command below, which I copied from the CodeProject web site, fails:
docker run --name CodeProject.AI-Server -d -p 32168:32168 ^
--mount type=bind,source=/etc/codeproject/ai,target=/etc/codeproject/ai \
--mount type=bind,source=/opt/codeproject/ai,target=/app/modules \
codeproject/ai-server:arm64
I'm not a Unix guy and all I do is copy/paste. Please help.
|
I just noticed that command was wrong: the "^" should be \. But try this instead, as one long line:
docker run --name CodeProject.AI-Server -d -p 32168:32168 --mount type=bind,source=/etc/codeproject/ai,target=/etc/codeproject/ai --mount type=bind,source=/opt/codeproject/ai,target=/app/modules codeproject/ai-server:arm64
I'm running this on my Pi right now. A tip: When you launch the docker container, open the dashboard (localhost:32168) and disable the ObjectDetectionYOLO and fire up the ObjectDetection (.NET). Also select (using the "...") the "Tiny" model for the RPi.
I'm getting image detection in around 700ms using just the Pi's onboard CPU.
cheers
Chris Maunder
|
For those on older Windows versions, check your Windows PowerShell version.
The Expand-Archive cmdlet is not present in PowerShell versions earlier than 5.
I had 2.0 on Win 7 Ultimate; 5.1 is the current version. The easiest way I found to upgrade earlier versions of Windows is: https://www.microsoft.com/en-us/download/details.aspx?id=54616
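To check which version you have, run this from a PowerShell prompt:

$PSVersionTable.PSVersion   # e.g. 5.1.x on an up-to-date Windows PowerShell, 2.0 on a stock Win 7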
Hope this helps others on earlier versions of winblows...
|
Would it be possible to make this, or configure CodeProject.AI, to work with Frigate NVR? Also, thanks for all the hard work involved in this project, to those who have had and continue to have a hand in it.
Using CodeProject.AI (CPU only)
Ubuntu Server 22.04
Ryzen AMD 5 3600 CPU
RX 5500 XT GPU
|
Point us to the docs and we'll see what we can do.
cheers
Chris Maunder
|
Ummmm, not sure if this is supposed to happen, but it's crazy that it's this high. Is there a fix coming? Thanks
Using CodeProject.AI (CPU only)
Ubuntu Server 22.04
Ryzen AMD 5 3600 CPU
RX 5500 XT GPU
|
How many modules do you have running? Any other processes running at the same time? Is the server being stopped / started repeatedly?
cheers
Chris Maunder
|
Just updated to the latest 2.0.8-Beta, and every time I try to install a module it changes to "Unknown" in the "Install Modules" tab, the install button vanishes, and the server log shows "Unable to unpack...." followed by the name of the module it downloaded.
This happens for every module.
If I leave it for a while, the dashboard refreshes itself and the install button re-appears; then the same thing happens when I try again.


The OS is Win 10 x64. Going from the table, my graphics card and CUDA version seem to be supported, and in fact I didn't have a problem running the background remover in the previous version.

Any thoughts anyone?
Cheers
Shawty
|
The list of modules available was incorrect. This has been fixed. Your UI should now show something more suitable for your server version. Sorry about that!
cheers
Chris Maunder
|
No worries, Chris. Bugs happen.
While I have your attention, maybe you and I need to have a chat about you doing another session on Lidnug with me and Brian, maybe promoting CP.AI.
Drop me a private message if you're interested.
|
Loving CP.AI.S, but I need it not to be running on port 5000, as I tend to use that for some other things normally.
Is there an easy way to change the port it listens on, or do I need to go digging in the source code?
cheers
shawty
|
CP.AI also works on Port 32168
|
Ah, OK, I never noticed that, but once you mentioned it I fired up TcpView and there it is.
Any idea how to turn off port 5000, then, so something else can use it?
|
There's no way to stop it listening on 5000, yet, but I'll add that as an option in the next version.
cheers
Chris Maunder
|
Thanks Chris, much appreciated.
It might be an idea to add an option to the dashboard so that we can install, then configure the ports it uses to be anything we need, if that's at all possible.
For example, I have a Dell virtualisation server whose NIC has two ports on it: one on my DMZ and one facing my internal LAN. It would be great to just be able to have a list where we plug in the IPs/ports the service listens on, then hit restart or something.
Cheers
Shawty
|
That should be do-able. I'll add it to the list.
The only problem with that approach is that if we use the web interface (the dashboard) to change the port, and the server restarts with a new port, and that port has a conflict, you're dead in the water. If you edit the appsettings.json file directly, in the directory containing the CodeProject.AI Server exe, you're safe. Not as convenient, though!
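As a rough illustration only: the server is an ASP.NET Core app, so the usual convention for setting the listening address in appsettings.json looks like the snippet below. The exact key CodeProject.AI Server reads may differ between versions, so treat this as an assumption and check it against the appsettings.json that ships with your install.

{
  "Urls": "http://localhost:32168"
}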
cheers
Chris Maunder
|
OK, so how about this...
You have a small command line task in the app; when the config is changed in the dashboard, test the new config by seeing if that "task" can bind to the given config.
If it can, save the config and restart the process; if not, put an error up on the dashboard saying what failed and advising that the old config stays in place until the new config passes without error.
This would get round 90% of the problem. The only time it might still fail is if another service is started/restarted at exactly the same moment the CP service is started and it gets in and binds milliseconds before the CP.AI server.
It would be fairly easy to make a simple console app that runs, tries to bind to whatever is passed on the command line to it, and then unbinds and quits; if anything causes an exception, report fail, otherwise report pass (see the sketch below).
The service app could easily run that and use the return value as an indicator via the system process calls.
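Here's a minimal sketch of that check, in PowerShell rather than a compiled console app, with the port just as an example; exit code 0 means the bind succeeded, 1 means it didn't:

# Try to bind to the requested port, then release it straight away.
$port = 32168   # example; pass in whatever port the new config asks for
try {
    $listener = [System.Net.Sockets.TcpListener]::new([System.Net.IPAddress]::Any, $port)
    $listener.Start()
    $listener.Stop()
    Write-Output "PASS: port $port can be bound"
    exit 0
}
catch {
    Write-Output "FAIL: port $port is already in use or cannot be bound"
    exit 1
}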
Cheers
Shawty
|
yeah, that's fairly simple. Even something like
netstat -an | findstr /RC:":32168 .*LISTENING" 1>nul 2>nul && (ECHO Port is in use)
would do
cheers
Chris Maunder
|
Kewl, happy to provide the idea
If I wasn't so damn busy all the time, I'd even offer to help write it.
Sadly, too many other folks are screaming at me for code at the moment.
Shawty
|
Yeah, not sure I know what that's like
cheers
Chris Maunder