|
If Face Detection and Face Recognition are returning results, then the software is not broken.
Face Detection and Face Recognition work for me, with spotty results depending on my cameras and lighting.
I manipulate the input and output confidence levels to get the best results that I can.
I have 20 pictures of myself enrolled. I do not rely on it for security events.
I have 20-year-old FosCams.
Probably the best one can do is look at the quality of the camera used.
I don't think there is a fix for an infinite number of combinations of cameras and lighting.
|
|
|
|
|
So I have retrained it on 36 images of me with a bare face and head (no glasses, nothing on my head), taken from the camera in question. It is a Reolink 2K doorbell camera, receiving images at 1280x960.
I want to explain how "bad" this is; I am sure I have either totally buggered things up, or exposed a major flaw.
It just detected someone wearing sunglasses, a hoodie pulled halfway up, and a ball cap as me with 70% confidence. And this is the first person I saw after retraining.
How is this even remotely possible? The way I see it, this is worse than a random guess.
I wasn't wearing a hoodie in any of my images. I wasn't wearing sunglasses in any of my images. I wasn't wearing a ball cap in any of my images. So how could it possibly think that might be me with even 1% confidence, if "me" doesn't have glasses and doesn't have a hat?
|
|
|
|
|
If I remember correctly, I have read that the frame size that Object Detection, Face Detection, and Face Recognition use for inference is 640x640.
My 20-year-old FosCams output at 640x480.
1280x960 is twice the width (and three times the pixel count) of a 640x640 frame.
If your cam allows it, I would test with the lowest-resolution image the cam will output to see if that helps.
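To see where the numbers land, here is a small sketch of the letterbox math that YOLO-style pipelines commonly use when fitting a frame into a 640x640 square. This is pure arithmetic, not the module's actual code; I don't know its exact resize strategy, and the 640 target is just the figure quoted above:

```python
def letterbox_size(src_w: int, src_h: int, target: int = 640) -> tuple[int, int]:
    """Scaled size when fitting (src_w, src_h) inside a target x target square,
    preserving aspect ratio; the remainder of the square becomes padding."""
    scale = target / max(src_w, src_h)
    return round(src_w * scale), round(src_h * scale)

# A 1280x960 doorbell frame shrinks to 640x480 before inference,
# while a 640x480 FosCam frame is already at the inference size.
print(letterbox_size(1280, 960))  # -> (640, 480)
print(letterbox_size(640, 480))   # -> (640, 480)
```

If that assumption holds, a 1280x960 frame gets downscaled before inference anyway, so testing at a lower camera resolution costs you little detail at the model's input.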
|
|
|
|
|
And that seems to contradict what I have read, which suggests using the highest resolution for better accuracy.
However, I have done something that hadn't occurred to me before, and it seems to improve accuracy. I was training it on raw images: face, body, background, uncropped. So I decided to try cropping some images, and I seem to be getting better confidence in myself (well, the device seems to recognize me with higher confidence). That may, however, just be lighting conditions. We will have to see.
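For anyone trying the same thing, the crop I mean is just box arithmetic: take the face box, pad it by a margin, and clamp to the image. A rough sketch; `padded_crop_box` is my own name, the face box would come from whatever detector you use, and the 25% margin is a guess:

```python
def padded_crop_box(face_x, face_y, face_w, face_h, img_w, img_h, margin=0.25):
    """Expand a face bounding box by `margin` of its size on each side,
    clamped to the image bounds; returns (left, top, right, bottom)."""
    pad_x, pad_y = int(face_w * margin), int(face_h * margin)
    left   = max(0, face_x - pad_x)
    top    = max(0, face_y - pad_y)
    right  = min(img_w, face_x + face_w + pad_x)
    bottom = min(img_h, face_y + face_h + pad_y)
    return left, top, right, bottom

# A 200x200 face at (500, 300) in a 1280x960 frame:
print(padded_crop_box(500, 300, 200, 200, 1280, 960))  # -> (450, 250, 750, 550)
```

The returned tuple is in the (left, top, right, bottom) order that Pillow's `Image.crop()` expects, so the cropped enrollment image is one call away.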
|
|
|
|
|
My goal was to have my roommate and me in facial recognition (I have about 200 cropped pictures of each of us), and when it recognizes either one of us, it does not send the alert to our phones. Facial recognition with my Amcrest cameras is so spotty it's useless. I know most of it depends on the lighting and angle, which is why mine doesn't work correctly. I am in the same boat as most people and have practically given up on facial recognition until significant detection improvements are made. Currently I use the geofence and profiles within Blue Iris to get closer to what I am looking for. When Blue Iris detects that either my roommate or I am home, it changes the profile from Away to Home and disables alerting to phones for specific zones. I found this method much more reliable for avoiding unnecessary alerts to our phones.
|
|
|
|
|
Text2Image runs well on my Windows system as a service with CUDA. I wanted a running installation on my Unraid server, so I decided to take an available Docker container and install Text2Image via the install-module feature. I tried a lot of different combinations (with and without CUDA; versions 2.6.2 and 2.6.5 with CUDA 11 and 12, and also beta version 2.8.0).
The installation of Text2Image works fine, but when trying to generate an image from text there is always a different error depending on the installation environment. How do I get Text2Image running within Docker on Linux?
I finally succeeded with the installation. Below is how I proceeded; it could be helpful for others. The precondition was the already existing installation on Windows.
Installation of Text2Image (version 1.2.1) on Unraid Server, AMD Ryzen 7 5700G with Radeon Graphics:
- pull codeproject/ai-server (without CUDA; here version 2.6.5)
- copy the Text2Image installation files (version 1.1.2) from the Windows installation to the /app/modules/Text2Image folder
- copy the Text2Image/asset directory from the Windows installation to the Text2Image folder to avoid the error message:
stable_diffusion_adapter.py: Image generation failed: Unable to create pipeline from runwayml/stable-diffusion-v1-5 (runwayml/stable-diffusion-v1-5 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
stable_diffusion_adapter.py: If this is a private repository, make sure to pass a token having permission to this repo with `token` or log in with `huggingface-cli login`.)
- start docker console
- install gcc via:
apt install gcc
- install cuda 12.3 (same version as in windows installation) via:
wget https://developer.download.nvidia.com/compute/cuda/12.3.0/local_installers/cuda_12.3.0_545.23.06_linux.run
sh cuda_12.3.0_545.23.06_linux.run
-> accept, toolkit only (drivers exist already via Nvidia Drivers plugin)
- remove /usr/bin/nvcc and create a symlink (rm /usr/bin/nvcc && ln -s /usr/local/cuda/bin/nvcc /usr/bin/nvcc); otherwise Text2Image tries to install for CUDA 11.5 and ends up with rocm5.4.2
- install CUDNN (same version as in windows installation) via:
wget https://developer.download.nvidia.com/compute/cudnn/9.3.0/local_installers/cudnn-local-repo-ubuntu2204-9.3.0_1.0-1_amd64.deb
dpkg -i cudnn-local-repo-ubuntu2204-9.3.0_1.0-1_amd64.deb
cp /var/cudnn-local-repo-ubuntu2204-9.3.0/cudnn-*-keyring.gpg /usr/share/keyrings/
apt-get update
apt-get -y install cudnn-cuda-12
- install Text2Image (1.1.2):
change to /app/modules/Text2Image folder
run bash ../../setup.sh
Text2Image will install with error:
stable_diffusion_adapter.py: A module that was compiled using NumPy 1.x cannot be run in
stable_diffusion_adapter.py: NumPy 2.0.2 as it may crash. To support both 1.x and 2.x
stable_diffusion_adapter.py: versions of NumPy, modules must be compiled with NumPy 2.0.
stable_diffusion_adapter.py: Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
stable_diffusion_adapter.py: If you are a user of the module, the easiest solution will be to
stable_diffusion_adapter.py: downgrade to 'numpy<2' or try to upgrade the affected module.
stable_diffusion_adapter.py: We expect that some modules will need time to support NumPy 2.
stable_diffusion_adapter.py: Traceback (most recent call last): File "/app/modules/Text2Image/stable_diffusion_adapter.py", line 14, in
stable_diffusion_adapter.py: import torch
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/__init__.py", line 1382, in
stable_diffusion_adapter.py: from .functional import * # noqa: F403
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/functional.py", line 7, in
stable_diffusion_adapter.py: import torch.nn.functional as F
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/nn/__init__.py", line 1, in
stable_diffusion_adapter.py: from .modules import * # noqa: F403
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/nn/modules/__init__.py", line 35, in
stable_diffusion_adapter.py: from .transformer import TransformerEncoder, TransformerDecoder, \
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 20, in
stable_diffusion_adapter.py: device: torch.device = torch.device(torch._C._get_default_device()), # torch.device('cpu'),
stable_diffusion_adapter.py: /app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/nn/modules/transformer.py:20: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
stable_diffusion_adapter.py: device: torch.device = torch.device(torch._C._get_default_device()), # torch.device('cpu'),
- install numpy 1.26.4 (same version as in windows installation):
(venv) root@33674a29fd82:/app/modules/Text2Image/bin/linux/python39# pip install numpy==1.26.4
Collecting numpy==1.26.4
Downloading numpy-1.26.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Downloading numpy-1.26.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.2/18.2 MB 16.0 MB/s eta 0:00:00
Installing collected packages: numpy
Attempting uninstall: numpy
Found existing installation: numpy 2.0.2
Uninstalling numpy-2.0.2:
Successfully uninstalled numpy-2.0.2
WARNING: Failed to remove contents in a temporary directory '/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/~umpy.libs'.
You can safely remove it manually.
WARNING: Failed to remove contents in a temporary directory '/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/~umpy'.
You can safely remove it manually.
Successfully installed numpy-1.26.4
- This message still appears:
stable_diffusion_adapter.py: Couldn't connect to the Hub: 401 Client Error. (Request ID: Root=1-66dacc6f-007b78e7281bee803c893af5;ea0a2c35-bef3-4e34-afca-2d979e04bc8b)
stable_diffusion_adapter.py: Repository Not Found for url: https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5.
stable_diffusion_adapter.py: Please make sure you specified the correct `repo_id` and `repo_type`.
stable_diffusion_adapter.py: If you are trying to access a private or gated repo, make sure you are authenticated.
stable_diffusion_adapter.py: Invalid username or password..
stable_diffusion_adapter.py: Will try to load from local cache.
modified 8-Sep-24 10:19am.
|
|
|
|
|
Hi there,
I've used CPAI with BI with YOLOv5 6.2 and custom models.
Now I've installed a Coral USB.
Which model is best for BI?
Can I use the custom models which worked fine for me? I've read that Mike Lud was working on converting them for YOLOv5 last December. Are they ready?
|
|
|
|
|
MobileNet is pretty bad. I'm getting 79% confidence that a spider web is a person.
Right now CPAI is broken and won't let you switch to any other model (as of CPAI 2.8 and Coral module 2.4.0), even though it claims it switched. So I can't really test the others.
Mike Lud's models have been "almost here" for months now, but the last update was months ago...
modified 12-Sep-24 16:15pm.
|
|
|
|
|
15:18:44:Response rec'd from License Plate Reader command 'alpr' (...756284)
15:18:44:License Plate Reader: [AttributeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py", line 53, in process
result = await detect_platenumber(self, self.opts, image)
File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR.py", line 78, in detect_platenumber
pillow_image.save(image_buffer, format='JPEG') # 'PNG' - slow
AttributeError: 'NoneType' object has no attribute 'save'
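That `'NoneType' object has no attribute 'save'` means `pillow_image` was `None` by the time ALPR.py line 78 ran, i.e., the input image never decoded. A guarded version of that call would fail with a clearer message; this is a sketch, and `safe_save_jpeg` is my own name, not part of the module:

```python
import io

def safe_save_jpeg(pillow_image, image_buffer: io.BytesIO) -> None:
    """Guarded version of the failing call: raise a descriptive error
    instead of an AttributeError when the image did not decode."""
    if pillow_image is None:
        raise ValueError("input image failed to decode; check the image data sent to ALPR")
    pillow_image.save(image_buffer, format="JPEG")  # the original line-78 call
```

So the thing to chase is upstream of ALPR: why the frame handed to the module is empty or unreadable in the first place.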
|
|
|
|
|
The problem seems to be that the paddlepaddle.org.cn address is blocked, which is why the License Plate Reader is not getting loaded.
|
|
|
|
|
Latest CPAI (2.8) and Coral module on Docker. Within 24 hours, Coral stops working. These are the last things in the log:
…
19:36:49:Response rec'd from Object Detection (Coral) command 'detect' (...139ee6) ['Found car, car, car'] took 21ms
19:36:51:Response rec'd from Object Detection (Coral) command 'detect' (...7a5fc1) ['Found tv'] took 25ms
19:37:57:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:10:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:20:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:20:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:55:objectdetection_coral_adapter.py: WARNING:root:Pipe thread didn't join!
|
|
|
|
|
If it's the USB version, try a powered hub. Onboard ports sometimes don't supply enough power.
|
|
|
|
|
It’s USB but it’s already on a powered hub. Still getting the same thing.
15:33:51:Response rec'd from Object Detection (Coral) command 'detect' (...3ae31b) ['Found car'] took 33ms
15:33:59:Response rec'd from Object Detection (Coral) command 'detect' (...17d9fc) [''] took 50ms
15:34:05:Response rec'd from Object Detection (Coral) command 'detect' (...9eef63) [''] took 32ms
15:35:11:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
15:35:30:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
15:35:49:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
15:36:01:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
15:36:05:objectdetection_coral_adapter.py: WARNING:root:Pipe thread didn't join!
00:04:06:As of "09/01/2024 05:03:59 +00:00", the heartbeat has been running for "00:00:22.1268054" which is longer than "00:00:01". This could be caused by thread pool starvation.
01:01:16:As of "09/01/2024 06:01:11 +00:00", the heartbeat has been running for "00:00:01.8423864" which is longer than "00:00:01". This could be caused by thread pool starvation.
What’s the point of a heartbeat if it doesn’t restart the container or do anything?
|
|
|
|
|
Same issue here; it's stopping each day now.
Is the heartbeat on by default?
I'm looking for a way to restart the module if it stops.
My CPAI on Unraid (Docker container app) works perfectly, though (M.2).
It's just the Coral USB on the Windows VM, with CPAI on the same VM.
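Until the heartbeat does something useful, a cron-able probe is one workaround: hit the detection endpoint (the URL used elsewhere in this thread) and restart things if nothing answers. A sketch only; the container name and the restart command are placeholders for whatever applies to your setup:

```python
import subprocess
import urllib.error
import urllib.request

CPAI_URL = "http://localhost:32168/v1/vision/detection"

def server_answers(url: str = CPAI_URL, timeout: float = 10.0) -> bool:
    """True if anything HTTP answers at the URL. An HTTP error status still
    counts as alive, since the server responded at all."""
    try:
        urllib.request.urlopen(urllib.request.Request(url, method="POST"), timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server is up; it just rejected the empty request
    except OSError:
        return False  # connection refused, timeout, DNS failure, ...

def restart_container(name: str = "codeproject-ai-server") -> None:
    """Placeholder restart action; the container name is an assumption.
    For a Windows service you would swap in the equivalent restart command."""
    subprocess.run(["docker", "restart", name], check=False)

# cron idea (every few minutes):
#   if not server_answers(): restart_container()
```

It restarts the whole container rather than just the Coral module, but in this thread the module never comes back on its own anyway.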
|
|
|
|
|
Hi,
I'm trying to install the server with the ALPR extension on a Raspberry Pi 5.
I struggled a bit with the installation from this tutorial, but I finally succeeded.
But now I have an issue: I can't launch the ALPR extension. I get this error:
14:21:30:Started License Plate Reader module
14:21:31:ALPR_adapter.py: Traceback (most recent call last):
14:21:31:ALPR_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ALPR/ALPR_adapter.py", line 11, in
14:21:31:ALPR_adapter.py: from ALPR import init_detect_platenumber, detect_platenumber
14:21:31:ALPR_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ALPR/ALPR.py", line 17, in
14:21:31:ALPR_adapter.py: from paddleocr import PaddleOCR
14:21:31:ALPR_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ALPR/bin/linux/python38/venv/lib/python3.8/site-packages/paddleocr/__init__.py", line 14, in
14:21:31:ALPR_adapter.py: from .paddleocr import *
14:21:31:ALPR_adapter.py: File "/usr/bin/codeproject.ai-server-2.6.5/modules/ALPR/bin/linux/python38/venv/lib/python3.8/site-packages/paddleocr/paddleocr.py", line 21, in
14:21:31:ALPR_adapter.py: import paddle
14:21:31:ALPR_adapter.py: ModuleNotFoundError: No module named 'paddle'
14:21:31:Module ALPR has shutdown
I tried running:
pip install paddlepaddle
and the installation succeeded, but the issue persists.
If I launch Thonny and try to import paddle, I get the same error ("No module named 'paddle'").
Would you have an idea about how to fix this? Thanks!
modified 28-Aug-24 10:34am.
|
|
|
|
|
Hi
I have a similar problem. I'm running CodeProject on Docker, but I get the same error as you. I tried to install paddlepaddle but still have the same problem.
Have you fixed it?
BR, Manu
|
|
|
|
|
The problem is that when the installer tries to install paddle, it pulls the file from paddlepaddle.org.cn, which is blocked in most countries.
|
|
|
|
|
Thanks, do you know how to fix it?
|
|
|
|
|
There's no way to fix it until the script is updated to pull paddle from a non-Chinese URL.
|
|
|
|
|
Are you sure the problem is the Chinese firewall?
I can install the License Plate Reader, but when I use the multi-TPU config it stops working.
modified 8-Sep-24 5:49am.
|
|
|
|
|
I have tried the following code in VS2022, .NET 8.0 Windows Forms:
WebClient webClient = new WebClient();
webClient.QueryString.Add("image", "\\\\LS210D314\\share\\Files\\Photos\\My Pictures\\Critters\\Heron\\Blue Heron\\Photos\\3_2024-01-20_07-46-23_large.jpg");
string result = webClient.DownloadString("http://localhost:32168/v1/vision/detection");
ResultsBox.Text = result;
What is the correct C# method for the server?
|
|
|
|
|
|
The first link is broken (error 404).
The second link gets me to the SDK, which I installed, but I don't see any documentation on how to use it.
The third link is broken (error 404).
The fourth link gets me to the API reference, which contains no references to C#.
Is there somewhere else I can find out how to send requests to the CodeProject.AI Server in C#?
|
|
|
|
|
Sorry, I had links to our private repo.
I've corrected the links in my previous message to point to the public repo.
"Mistakes are prevented by Experience. Experience is gained by making mistakes."
|
|
|
|
|
The first link is still broken and gives an error.
All the following links are now working.
Lots to digest. Thanks!
|
|
|
|