|
Hi, great to hear progress is looking promising. Do you have an idea of how long you expect the beta cycle to be? Really looking forward to getting my hands on the new version!
|
|
|
|
|
We were hoping for a short final test cycle with a new 2.0.X out this week. Matthew and I have spent a frustrating day tidying up but we've not been able to get things wrapped up today. Hint: don't upgrade Docker if it's working. You may regret it.
cheers
Chris Maunder
|
|
|
|
|
Hello,
I got this error:
<pre lang="text">[407C:43B0][2023-01-07T11:29:29]w343: Prompt for source of package: CODEPROJECTAIYOLONET, payload: CODEPROJECTAIYOLONET, path: C:\Users\***\Downloads\ObjectDetector.Installer-1.6.8.0.msi
[407C:43B0][2023-01-07T11:29:29]i338: Acquiring package: CODEPROJECTAIYOLONET, payload: CODEPROJECTAIYOLONET, download from: https:
[407C:43B0][2023-01-07T11:30:46]e000: Error 0x80072f7d: Failed attempt to download URL: 'https://codeproject-ai.s3.ca-central-1.amazonaws.com/sense/installer/version-1.6.8.0/ObjectDetector.Installer-1.6.8.0.msi' to: 'C:\WINDOWS\Temp\{56EEC605-2698-432B-9AE0-F94A9B3459A9}\CODEPROJECTAIYOLONET'
[407C:43B0][2023-01-07T11:30:46]w343: Prompt for source of package: CODEPROJECTAIYOLONET, payload: CODEPROJECTAIYOLONET, path: C:\Users\***\Downloads\ObjectDetector.Installer-1.6.8.0.msi
[407C:43B0][2023-01-07T11:30:49]i338: Acquiring package: CODEPROJECTAIYOLONET, payload: CODEPROJECTAIYOLONET, download from: https:
[407C:43B0][2023-01-07T11:33:57]e000: Error 0x80072f7d: Failed attempt to download URL: 'https://codeproject-ai.s3.ca-central-1.amazonaws.com/sense/installer/version-1.6.8.0/ObjectDetector.Installer-1.6.8.0.msi' to: 'C:\WINDOWS\Temp\{56EEC605-2698-432B-9AE0-F94A9B3459A9}\CODEPROJECTAIYOLONET'
[407C:43B0][2023-01-07T11:33:57]w343: Prompt for source of package: CODEPROJECTAIYOLONET, payload: CODEPROJECTAIYOLONET, path: C:\Users\***\Downloads\ObjectDetector.Installer-1.6.8.0.msi
[407C:43B0][2023-01-07T11:34:00]i338: Acquiring package: CODEPROJECTAIYOLONET, payload: CODEPROJECTAIYOLONET, download from: https:
[407C:43B0][2023-01-07T11:34:02]e000: Error 0x80072f7d: Failed attempt to download URL: 'https://codeproject-ai.s3.ca-central-1.amazonaws.com/sense/installer/version-1.6.8.0/ObjectDetector.Installer-1.6.8.0.msi' to: 'C:\WINDOWS\Temp\{56EEC605-2698-432B-9AE0-F94A9B3459A9}\CODEPROJECTAIYOLONET'
[407C:43B0][2023-01-07T11:34:02]w343: Prompt for source of package: CODEPROJECTAIYOLONET, payload: CODEPROJECTAIYOLONET, path: C:\Users\***\Downloads\ObjectDetector.Installer-1.6.8.0.msi
[407C:43B0][2023-01-07T11:34:05]i338: Acquiring package: CODEPROJECTAIYOLONET, payload: CODEPROJECTAIYOLONET, download from: https:
[407C:43B0][2023-01-07T11:34:37]e000: Error 0x80072f7d: Failed attempt to download URL: 'https://codeproject-ai.s3.ca-central-1.amazonaws.com/sense/installer/version-1.6.8.0/ObjectDetector.Installer-1.6.8.0.msi' to: 'C:\WINDOWS\Temp\{56EEC605-2698-432B-9AE0-F94A9B3459A9}\CODEPROJECTAIYOLONET'
[407C:43B0][2023-01-07T11:34:37]e000: Error 0x80072f7d: Failed to acquire payload from: 'https://codeproject-ai.s3.ca-central-1.amazonaws.com/sense/installer/version-1.6.8.0/ObjectDetector.Installer-1.6.8.0.msi' to working path: 'C:\WINDOWS\Temp\{56EEC605-2698-432B-9AE0-F94A9B3459A9}\CODEPROJECTAIYOLONET'
[407C:43B0][2023-01-07T11:34:37]e313: Failed to acquire payload: CODEPROJECTAIYOLONET to working path: C:\WINDOWS\Temp\{56EEC605-2698-432B-9AE0-F94A9B3459A9}\CODEPROJECTAIYOLONET, error: 0x80072f7d.
[41BC:41D8][2023-01-07T11:34:37]i351: Removing cached package: CODEPROJECTAIPYTHON39, from path: C:\ProgramData\Package Cache\{CFFAF0A0-6490-4A4E-B598-30E6D3213F5A}v1.6.8.0\
[41BC:41D8][2023-01-07T11:34:37]i351: Removing cached package: CODEPROJECTAIPYTHON37, from path: C:\ProgramData\Package Cache\{502E599A-CF63-401D-9D52-79A38E1D8281}v1.6.8.0\
[41BC:41D8][2023-01-07T11:34:37]i351: Removing cached package: CODEPROJECTAISERVERSTOPPER, from path: C:\ProgramData\Package Cache\{81C2478C-7423-4C44-BBD5-B0DDB161BB76}v1.6.8.0\
[407C:4080][2023-01-07T11:34:37]e000: Error 0x80072f7d: Failed while caching, aborting execution.
[41BC:41C0][2023-01-07T11:34:37]i372: Session end, registration key: SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{3ed86a7d-4cd2-4ac0-b2e2-dbc0fbed3e90}, resume: None, restart: None, disable resume: No
[41BC:41C0][2023-01-07T11:34:37]i330: Removed bundle dependency provider: {3ed86a7d-4cd2-4ac0-b2e2-dbc0fbed3e90}
[41BC:41C0][2023-01-07T11:34:37]i352: Removing cached bundle: {3ed86a7d-4cd2-4ac0-b2e2-dbc0fbed3e90}, from path: C:\ProgramData\Package Cache\{3ed86a7d-4cd2-4ac0-b2e2-dbc0fbed3e90}\
[41BC:41C0][2023-01-07T11:34:37]i371: Updating session, registration key: SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{3ed86a7d-4cd2-4ac0-b2e2-dbc0fbed3e90}, resume: None, restart initiated: No, disable resume: No
[407C:4080][2023-01-07T11:34:37]i399: Apply complete, result: 0x80072f7d, restart: None, ba requested restart: No
</pre>
I tried to manually download the file and was able to.
The PC is running Windows 10 with an RTX 3060 card.
Not sure why the installer can't download it.
Thanks
|
|
|
|
|
I'm not sure why you're having issues. As you say, it is possible to download the file, and the permissions are set correctly in AWS. It worked on my machine, also with an RTX 3060, but on Windows 11.
How much RAM and free disk space do you have?
Almost a moot point, as we will be releasing a 2.0.X version in the next day or so.
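For what it's worth, 0x80072f7d maps to WinInet error 12157, a secure-channel (TLS) failure, so it looks like the download is being refused at the TLS layer on that machine rather than by anything on the AWS side. As a quick check, here's a minimal sketch (assuming Python and the requests package are available on the affected box; the URL is taken straight from your log) that tries the same download outside the installer:
<pre lang="Python">
# Minimal sketch: confirm a TLS download of the payload works outside the installer.
# The URL comes from the installer log above; everything else is illustrative.
import requests

URL = ("https://codeproject-ai.s3.ca-central-1.amazonaws.com/"
       "sense/installer/version-1.6.8.0/ObjectDetector.Installer-1.6.8.0.msi")

def try_download(url: str, dest: str) -> None:
    # Stream to disk so the whole MSI is never held in memory.
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()  # raises on a 403/404, i.e. a permissions problem
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    print(f"Downloaded {dest} OK")

try_download(URL, "ObjectDetector.Installer-1.6.8.0.msi")
</pre>
If that succeeds, the TLS stack itself is fine and the problem is likely specific to how the installer's download engine negotiates the connection.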
"Time flies like an arrow. Fruit flies like a banana."
|
|
|
|
|
Yeah, it's weird. It has 274GB of free space and 48GB of RAM. It's running Home Assistant in VirtualBox and Blue Iris natively. I don't think either of those would prevent downloads?
Is there a special setting I might have disabled that prevents downloads? I opened it as administrator.
Hope it doesn't happen with the 2.0.x.
If it does, can I just manually download the file and put it in that directory?
Thanks!
|
|
|
|
|
Hi folks, after achieving reasonable processing times with the new-to-me T600 CUDA card, I have been experimenting with the delivery model, which I really wanted to get going for use with alerts.
First of all, thank you to PuzzlingDad who posted the delivery.pt model.
I found that, in my situation, it detects FedEx but does not detect USPS or UPS. Neither real time on a D1 substream, nor on the 4K main stream, nor on captured images in BI or directly in the AI Explorer interface produced a detection.
Can anyone suggest a fix? Is there an updated delivery.pt model I could try? (I am too much of a noob to try and train my own, yet).

|
|
|
|
|
Hi folks, first off thanks for making CP-AI available! Really cool project.
I have been tinkering with BI and CP-AI for a month or two, but just yesterday added a CUDA card to speed it up to the point of being usable on my PC, and wanted to let you know that I could not make the cuDNN installation script work. I really liked the idea of it installing everything for me, especially since it includes a copy of cuDNN so I don't need to register as a dev with Nvidia just for that.
The script, when run, gave the error shown below:

The system is as follows:
Win 10 native, no VMs; i7-4790K @ 4GHz; 32GB RAM; Nvidia Quadro T600; CUDA driver version 528.02; CP-AI 1.6.8-Beta; Toolkit 11.7; all modules disabled except YOLO.
In the end I was able to get it all working manually, using the URLs from the cuDNN install script and the cuDNN instructions from the Nvidia doc pages, but I wanted to let you know what happened with the script. Maybe you can give me a hint as to why it failed, just for my education or in case someone else sees the same problem.
|
|
|
|
|
It may be that the archive that script downloads no longer exists. I'll dig in.
cheers
Chris Maunder
|
|
|
|
|
I have been trying to slim down my VRAM usage for a long time now. I'm currently stable using 2.4GB of VRAM on my CodeProject.AI setup.
I have also been trying to use technologies that are not tied to Intel or Nvidia.
For Blue Iris, I'm currently using "DirectX VA2" instead of "Intel +vpp". I do have an iGPU, but I'm choosing not to use it. DirectX VA2 performs about on par with the iGPU, with maybe a very slight CPU cost (1% or less).
Amazingly, DirectX VA2 doesn't appear to use any detectable resources, and is certainly NOT loaded into VRAM on my Nvidia card. In contrast, if I use a decoder like NVDEC, the VRAM usage is significant.
I was wondering if something similar could be achieved with this project: possibly putting everything in main memory, or at least garbage-collecting more aggressively at the cost of more CPU usage. For me, every little bit of VRAM is precious, but I have an overabundance of main memory.
Overall, I think if copyback is feasible, it could be game-changing for people with lower-end cards that have extremely low VRAM.
Ignore the fact that it's running extremely well for me right now.
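To make it concrete, what I mean by "putting everything in main memory" is something like the sketch below: run the detector entirely on the CPU so the weights and activations live in system RAM instead of VRAM. This is just an illustration using a stock YOLOv5 model via PyTorch (model and image names are placeholders), not how CodeProject.AI is actually wired up.
<pre lang="Python">
# Rough sketch of trading VRAM for main memory / CPU time: run YOLOv5 inference
# entirely on the CPU so the model weights and activations never touch the GPU.
# Illustration only - this is not how CodeProject.AI is implemented.
import torch

# torch.hub pulls the ultralytics/yolov5 repo and pretrained weights on first use.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.to("cpu")     # keep the weights in system RAM; zero VRAM used
model.eval()

with torch.inference_mode():             # no autograd bookkeeping
    results = model("test_image.jpg")    # image path is a placeholder
    results.print()                      # summary of detections

# If the GPU is still used but VRAM is tight, releasing the cache between
# runs recovers a little:  torch.cuda.empty_cache()
</pre>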



|
|
|
|
|
Matthew has done some amazing work reducing memory usage (and GCs) by focusing on pooling and the new Span type in .NET for the .NET ObjectDetection module. It's not as fast as the Python YOLO module in some scenarios, but on my machine I'm finding the .NET implementation low on memory use and very, very fast.
It'll be out this week.
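For anyone wondering what "pooling" buys you: the idea is simply to reuse preallocated buffers instead of allocating a fresh one for every frame, which is what causes the GC churn. A rough Python sketch of the shape of the technique (the real module is .NET, using pooled arrays and Span; the names here are made up purely for illustration):
<pre lang="Python">
# Illustration only: the "pool and reuse buffers" idea that cuts allocation/GC churn.
# The real module is .NET (pooled arrays + Span); this just shows the shape of it.
import numpy as np

class FramePool:
    """Hands out preallocated frame buffers instead of allocating one per request."""

    def __init__(self, shape=(640, 640, 3), count=4):
        self._shape = shape
        self._free = [np.empty(shape, dtype=np.uint8) for _ in range(count)]

    def acquire(self) -> np.ndarray:
        # Fall back to a fresh allocation only if the pool is momentarily exhausted.
        return self._free.pop() if self._free else np.empty(self._shape, dtype=np.uint8)

    def release(self, buf: np.ndarray) -> None:
        self._free.append(buf)   # the same buffer is handed to the next frame

# Usage: acquire, fill in place, run detection, hand it back.
pool = FramePool()
buf = pool.acquire()
buf[:] = 0                       # e.g. decode/resize the incoming frame into buf
# ... run detection on buf ...
pool.release(buf)
</pre>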
cheers
Chris Maunder
|
|
|
|
|
Thanks for the reply.
Me being impatient as always, I had to try early. Something went wrong, but I'm not sure what yet:
3:32:03 PM: Object Detection (YOLO): Detecting using ipcam-combined
3:32:03 PM: Object Detection (YOLO): Queue and Processing Object Detection (YOLO) command 'custom' took 16ms
3:32:15 PM: Latest version available is 1.6.7-Beta
3:32:40 PM: Sending shutdown request to python/ObjectDetectionYolo
3:32:50 PM: detect_adapter.py: Not using half-precision for the device 'NVIDIA GeForce GTX 1080'
3:32:50 PM: detect_adapter.py: Inference processing will occur on device 'NVIDIA GeForce GTX 1080'
3:32:50 PM: detect_adapter.py: has exited
3:33:11 PM: ObjectDetectionYolo went quietly
3:33:17 PM:
3:33:17 PM: Module 'Object Detection (.NET)' (ID: ObjectDetectionNet)
3:33:17 PM: Active: True
3:33:17 PM: GPU: Support enabled
3:33:17 PM: Parallelism: 0
3:33:17 PM: Platforms: windows,linux
3:33:17 PM: Runtime: execute
3:33:17 PM: Queue: detection_queue
3:33:17 PM: Start pause: 1 sec
3:33:17 PM: Valid: True
3:33:17 PM: Environment Variables
3:33:17 PM: MODEL_SIZE = MEDIUM
3:33:17 PM:
3:33:17 PM: Started Object Detection (.NET) backend
3:33:19 PM: ObjectDetectionNet.exe: Application started. Press Ctrl+C to shut down.
3:33:19 PM: ObjectDetectionNet.exe: Hosting environment: Production
3:33:19 PM: ObjectDetectionNet.exe: Content root path: C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionNet
3:33:20 PM: Object Detection (Net): Object Detection (Net) module started.
3:33:20 PM: ObjectDetectionNet.exe: Please ensure you don't enable this module along side any other Object Detection module using the 'vision/detection' route and 'detection_queue' queue (eg. ObjectDetectionYolo). There will be conflicts
3:33:46 PM: Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
3:33:46 PM: System.__Canon, System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e: .b__1(Int32)
3:33:46 PM: System.__Canon, System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e: .b__1(System.Threading.Tasks.RangeWorker ByRef, Int32, Boolean ByRef)
3:33:46 PM: at System.Threading.Tasks.TaskReplicator+Replica.Execute()
3:33:46 PM: at System.Threading.Tasks.TaskReplicator+Replica+<>c.<.ctor>b__4_0(System.Object)
3:33:46 PM: at System.Threading.Tasks.Task.InnerInvoke()
3:33:46 PM: at System.Threading.Tasks.Task+<>c.<.cctor>b__272_0(System.Object)
3:33:46 PM: at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(System.Threading.Thread, System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
3:33:46 PM: at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef, System.Threading.Thread)
3:33:46 PM: at System.Threading.Tasks.Task.ExecuteEntryUnsafe(System.Threading.Thread)
3:33:46 PM: at System.Threading.ThreadPoolWorkQueue.Dispatch()
3:33:46 PM: at System.Threading.PortableThreadPool+WorkerThread.WorkerThreadStart()
3:33:46 PM: at System.Threading.Thread.StartCallback()
3:33:46 PM: ObjectDetectionNet.exe: has exited
|
|
|
|
|
When looking at the API documentation to see which objects are available for AI detection, I noticed that ones like "deer" and "bear" were missing. Looking at Sean Ewington's article on package detection with custom models, I figured I would add Mike Lud's "IPCam-animal.pt" model from GitHub to the "custom-model" filesystem folder on my server. When doing this though, it looks like these models were already added. So I have a few questions:
1) Are the models from GitHub already added, or are these different models? Is the API documentation just a bit out of date, and there are actually more objects that can be detected (or maybe they are intentionally left out since they're custom models)?
2) With several models covering "animal", can I just have Blue Iris confirm "animal", which will cover bear, deer, dog, cat, etc.? Does this "animal" (or "bear", "deer", etc.) have to be under "Custom models" in 'Blue Iris > Trigger > Artificial Intelligence' since it is in the "custom-model" filesystem folder?
|
|
|
|
|
Below are answers to your questions.
Quote: Are the models from Github already added or are these different models?
All except the package model are included with the current version of CodeProject.AI.
Quote: Is the API documentation just a bit out of date and there are actually more objects that are able to be detected (or maybe it is intentionally left out since it's a custom model)?
The object list in the API documentation is for the default YOLOv5 model. Below are the labels for each of the custom models:
IPcam-combined labels: person, bicycle, car, motorcycle, bus, truck, bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig
IPcam-general labels (includes dark-model images): person, vehicle
IPcam-animal labels: bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig
IPcam-dark labels: Bicycle, Bus, Car, Cat, Dog, Motorcycle, Person
Quote: With several models being under "animal", can I just have Blue Iris confirm "animal" which will cover bear,deer,dog,cat,etc?
It needs to be bear,deer,dog,cat,etc.
Quote: Does this "animal" (or "bear", "deer", etc) have to be under "Custom models" in 'Blue Iris > Trigger > Artificial Intelligence' since it is in the "custom-model" filesystem folder?
Just the model name, for example "ipcam-animal".
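If you want to sanity-check what one of these models returns outside of Blue Iris, you can post an image straight to the custom-model route. A minimal sketch (the host/port, file name and confidence threshold are placeholders; adjust them to your install):
<pre lang="Python">
# Sanity-check a custom model outside Blue Iris: post an image to the custom-model
# route and print what comes back. Host, port and file path are placeholders.
import requests

SERVER = "http://localhost:32168"   # adjust to your CodeProject.AI server address/port
MODEL = "ipcam-animal"              # just the model name, as noted above

with open("backyard.jpg", "rb") as img:
    resp = requests.post(
        f"{SERVER}/v1/vision/custom/{MODEL}",
        files={"image": img},
        data={"min_confidence": 0.4},
    )

for det in resp.json().get("predictions", []):
    print(det["label"], round(det["confidence"], 2))
</pre>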
|
|
|
|
|
As always, thank you Mike. If I'm understanding it right, then this format would work?

Thanks again.
|
|
|
|
|
I am currently using CodeProject.AI with Blue Iris and am just using the i5-4440 CPU to run BI with 3 cameras alongside the CodeProject server. I am getting anywhere from 3-6 second delays in processing for person/vehicle detections and am looking to speed this up. I imagine any modern Nvidia GPU would help with this, but the current server hardware is just a small 4-bay UNAS case that can't fit much. Are there any references on the effectiveness of different GPUs for processing objects/faces? Would adding something like an older Quadro P400 (256 CUDA cores) make much of a difference? I am considering doing a new build anyway, where I would get a newer CPU/GPU, but figured I'd ask here first. Thanks
|
|
|
|
|
I don't know about that model; I think there is a list somewhere of models that work. I use an ancient K620 and it cut the times about in half. But I am running in virtual machines, with CUDA in a Linux virtual machine and BI in another W10 VM. This is a test system with only 3 cameras.
Production is running 5 cloned cameras (out of 11) doing AI, again to a Docker install on a Linux system, but I am only now in the process of utilizing a GPU in that Linux VM. So far, we are impressed with the reduction in false alerts due to headlights at night. Still much to learn. Happy New Year
>64
Some days the dragon wins. Suck it up.
|
|
|
|
|
An RTX 2060 12GB in my main rig nets a 17ms analysis time.
A GTX 1650 in my secondary rig nets a 22ms analysis time.
I would go with the cheapest card you can find that has a decent amount of RAM.
All things aside, these 2 GPUs are just a hair faster than a 5820K @ 4GHz, which has an analysis time of about 23ms.
|
|
|
|
|
I tried both the P400 and P620. Both only have 2GB of RAM and both failed fairly quickly due to running out of GPU memory.
I am now using a GeForce GTX 1660 SUPER 6GB GDDR5.
It works great.
There is talk about supporting newer AI hardware and even Intel processor GPUs. Can't find the post just now...
Good Luck
|
|
|
|
|
I have found that 4GB is about the minimum I could go on both of my machines. Hell, the AI along with video decode enabled on the 12GB RTX 2060 will max out the full 12GB of VRAM.
|
|
|
|
|
Thanks (to everyone) for all of the information here. I'm going to try it with a 1650 4GB and a 1660 Super 6GB. Will report back on findings.
|
|
|
|
|
I perused this forum and others for weeks before joining, gathering my own experience with my Blue Iris setup.
My PC is an i7-4790K with 32GB of RAM. All installations are direct on the PC, no virtual machines. Windows 10. AI in Blue Iris was taking 1500-3700 ms, with CPU load from Python reaching 90-100% during those seconds, so I was annoyed and in the same situation as you.
I bought a Quadro T600 on eBay after comparing it with previous models for memory bandwidth and FLOPs. 4GB of GDDR6 looked to have better throughput than GDDR5, even if the total amount of RAM is lower.
Installed it yesterday. It took some tinkering to get CUDA to work, as the cuDNN script failed to work for me (another topic). But it eventually started working, with processing times per model at 23-79 ms, depending on the scene.
I had wanted to try a P4, which promised even greater FLOPs and bandwidth for less $ and power but I didn't know if it would work standalone (with no NVIDIA video card), and found no data on anyone having done that, so I decided to go for a video card instead of just a CUDA card. Although it is very tempting to try a Tesla card, I am out of PCIe x16 slots.
My AI processing is done on the substream. Through experimenting I found that using the 4K main stream does not improve the recognition (vehicles, persons, and animals) but, predictably, takes 3 times as long since the images are 3x the size (150-210 ms for a 4K image, per model).
P.S. The T600 GPU load is 0-9% processing events only from 6 cameras.
|
|
|
|
|
I can see that you use the rembg project for your background remover.
In my installation the background is always black when using this function. White would be better, but I can't see a way to tweak this.
The standard rembg uses white.
I have tried various versions of the CodeProject.AI Docker images, noting that "latest" has the BackgroundRemover method "removed".
Any thoughts?
Edmund
|
|
|
|
|
I think it would be fairly easy to add another param that lets you choose the background colour. I'll add that to the TODO.
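In the meantime, if you're able to call rembg directly (outside the Docker module), one workaround is to take the cut-out, which comes back with an alpha channel, and composite it onto white yourself. A minimal sketch with Pillow (file names are placeholders):
<pre lang="Python">
# Workaround sketch: get the cut-out from rembg and composite it onto a white
# canvas with Pillow. File names are placeholders.
from PIL import Image
from rembg import remove

foreground = remove(Image.open("input.jpg")).convert("RGBA")

white = Image.new("RGBA", foreground.size, (255, 255, 255, 255))
Image.alpha_composite(white, foreground).convert("RGB").save("output_white_bg.jpg")
</pre>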
cheers
Chris Maunder
|
|
|
|
|
I have this idea as my first-year project, but I have no idea how to implement it. Can anyone give me an idea of how to write the code for this?
|
|
|
|
|
Blue Iris Ver 5.6.7.1 is supposed to have the memory leak fixed... (looking at too many playbacks in succession will exhaust memory)
So I upgraded from 5.6.5.4 to 5.6.7.1.
Now, instead of being able to see the Alert JPG with the object identified (PIC1) by right-clicking on the alert clip and choosing Properties, we get this (PIC2).
PIC1

PIC2

Reloading Saved Image ... back to 5.6.5.4 ....
Cj & Essaf
=============================================
Config:
CodeVersion 1.6.7-Beta
Windows installation
System:
Dell T3630 I9-9900K - 32GB
ZOTAC GAMING GeForce GTX 1660 SUPER 6GB GDDR5 (ZT-T16620F-10L)
GPU - Driver - 522.25-desktop-win10-win11-64bit-international-dch-whql.exe
(Ver 31.0.15.2225) - Reloaded after Win Update downgraded it to 31.0.15.1737 during CodeProject.AI install
NVIDIA Toolkit - cuda_11.7.1_516.94_windows.exe
Microsoft Windows 11 Pro
W11-22H2-22623.746.221004-1223
Modifications:
MSMG ToolKit 12.7 - strips out the bloat
Changed: 'Adjust for best Performance of' = Background Services
Currently installed "classic" .NET Versions in the system:
2.0.50727.4927 Service Pack 2
3.0.30729.4926 Service Pack 2
3.5.30729.4926 Service Pack 1
4.0.0.0
4.8.09032
Blue Iris v5.6.5.4
15 Cams
|
|
|
|
|