|
IF YOU ARE HAVING A PROBLEM
- Take a look at the logs in
C:\Program Files\CodeProject\AI\logs and see if there's anything in there that screams 'something broke'.
- Check the FAQs in the CodeProject.AI Server documentation
- Make sure you've tested the server using the Explorer (blue link, top middle of the dashboard) to ensure it's a server issue rather than something else such as Blue Iris or another app using CodeProject.AI server.
- If there's no obvious answer, then copy the contents of the System Info tab into a message, describe what you are doing, what you see, and what you would expect to see.
Always include a copy and paste from the System Info tab of the dashboard. It gives us a ton of info on your setup. If an individual module is failing, click the 'Info' button to the right of the module's name in the status list and copy and paste that info too.
How to reinstall a module
Option 1: Go to the Install Modules tab on the dashboard and try re-installing the package. Make sure you have enough disk space and a reliable internet connection.
Option 2 (Option 1 with a vengeance): If that fails, head to the module's folder ([app root]\modules\module-id), open a terminal in admin mode, and run ..\..\setup. This will force a manual reinstall using the install script.
Docker: In Docker you will need to open a terminal into the Docker container. You can do this using Docker Desktop, Visual Studio Code with the Docker remote extension, or on the command line using docker attach. Then do a cd /app/modules/module-id, where module-id is the ID of the module you need to reinstall. Next, run sudo bash ../../setup.sh --verbosity info to force a manual reinstall of that module. (Set verbosity to quiet, info or loud to get less or more info.)
cheers
Chris Maunder
modified 18-Feb-24 15:48pm.
|
|
|
|
|
If you are a Blue Iris user and you are using custom models, you may have noticed that the option in Blue Iris to set the custom model location is greyed out. This is because Blue Iris does not currently make changes to CodeProject.AI Server's settings. Changing those settings can be done by manually starting CodeProject.AI Server with command line parameters (not a great solution), editing the module settings files (a little messy), or setting system-wide environment variables (way easier). For version 1.6 we added an API to allow any app to change our settings programmatically, and we take care of stopping/restarting things and persisting the changes.
So: Blue Iris doesn't currently change CodeProject.AI Server's settings, so it doesn't provide you a way to change the custom model folder location from within Blue Iris.
Blue Iris will still use the contents of this folder to determine the calls it makes. If you don't specify a model to use in the Custom Models textbox, then Blue Iris will use all models in the custom models folder that it knows about.
Here we've specified a specific model to use. The Blue Iris help file explains more about how this works, including inclusive and exclusive filters on the models it finds.
CodeProject.AI Server doesn't know about Blue Iris' folder, so it can't tell what models it may be expected to use, nor can it tell Blue Iris about what models CodeProject.AI server has available. Our API allows Blue Iris to get a list of the AI models installed with CodeProject.AI Server, and also to set the folder where these models reside. But Blue Iris doesn't, yet, use that API.
So we do a hack.
At install time we sniff the registry to find where Blue Iris thinks the custom models should be. We then make empty copies of the models that we have, and copy them into that folder. If the folder doesn't exist (eg you were using C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\assets , which no longer exists) then we create that folder, and then copy over the empty files.
When Blue Iris looks in that folder to decide what custom calls it can make, it sees the models, notes their names, and uses those names in the calls. CodeProject.AI Server has those models, so when the calls come through we can process them.
Blue Iris doesn't use the models. It uses the list of model names.
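The empty-copies trick described above is simple to sketch. This is purely illustrative, not the installer's actual code; the folder paths and the `.pt` extension are assumptions based on the YOLO models the server ships with:

```python
from pathlib import Path

def create_placeholder_models(server_models_dir: str, blue_iris_dir: str) -> list:
    """Create zero-byte files in the Blue Iris custom-model folder that
    mirror the model names CodeProject.AI Server actually has, so Blue Iris
    can discover the names without needing the real weights."""
    target = Path(blue_iris_dir)
    target.mkdir(parents=True, exist_ok=True)   # recreate the folder if it's gone
    created = []
    for model in sorted(Path(server_models_dir).glob("*.pt")):
        placeholder = target / model.name
        placeholder.touch(exist_ok=True)        # empty file: name only, no weights
        created.append(placeholder.name)
    return created
```

Blue Iris only reads the filenames in that folder, so zero-byte files are enough for it to build its list of custom calls.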
If you have your own models in the Blue Iris folder
You will need to copy them to the CodeProject.AI server's custom model folder (by default this is C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models )
If you've modified the registry and have your own custom models
If you were using a folder in C:\Program Files\CodeProject\AI\AnalysisLayer\CustomObjectDetection\ (which no longer existed after the upgrade, but was recreated by our hack) you'll need to re-copy your custom model into that folder.
The simplest solutions are:
- Modify the registry (Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Perspective Software\Blue Iris\Options\AI, key 'deepstack_custompath') so Blue Iris looks in
C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models for custom models, and copy your models into there.
or
- Modify
C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\modulesettings.json file and set CUSTOM_MODELS_DIR to be whatever Blue Iris thinks the custom model folder is.
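The second option can be scripted. A minimal sketch, assuming modulesettings.json is plain JSON and that CUSTOM_MODELS_DIR appears as a key somewhere inside it; the exact nesting varies between versions, so inspect your file first:

```python
import json
from pathlib import Path

def set_custom_models_dir(settings_path: str, new_dir: str) -> None:
    """Point CUSTOM_MODELS_DIR at the folder Blue Iris uses.
    NOTE: the key's location inside modulesettings.json is an assumption;
    this walks the whole document and updates the key wherever it appears."""
    path = Path(settings_path)
    settings = json.loads(path.read_text())

    def update(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "CUSTOM_MODELS_DIR":
                    node[key] = new_dir
                else:
                    update(value)
        elif isinstance(node, list):
            for item in node:
                update(item)

    update(settings)
    path.write_text(json.dumps(settings, indent=2))
```

Remember to restart the module (or the server) afterwards so the new setting is picked up.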
cheers
Chris Maunder
|
|
|
|
|
I cannot get Face Processing to keep running; there seems to be no error, it just stops for no reason. If I reboot the PC it runs for a few hours, then does the same. I am using my GPU with Object Detection (YOLOv5 6.2), which runs with no issues at all.
I have tried removing and re-installing it; no joy, it still does the same.
|
|
|
|
|
|
I can't get the facial recognition to work at all reliably. It is detecting clean-shaven middle-aged men as someone I have trained it on who is older and has a very large beard. It detects a person in a hoodie and sunglasses as someone different, trained with images that include no sunglasses or hoodie. And it makes these detections with 70+% confidence. It detects a tall person wearing glasses as a short person who doesn't wear any. It seems to be only slightly better than a random number generator.
I have posted before, and thought I had it resolved by training on images that include JUST the face rather than a whole image, but as soon as I added a few more people to detect, it's back to being a random number generator.
What else can I say other than WTF?!
Please help and prove me wrong. I was sure I was just doing something wrong, but now I am not sure. Is it that I am not training correctly, or something else? I can't seem to find any information on how to set things up other than what I followed originally (and I can't seem to find that again, sorry).
|
|
|
|
|
I'm seeing the below error in the log and I wonder if anyone knows how to resolve it.
ERROR: Module Training for YoloV5 6.2 has version 1.6.5, but ModelReleases has max version as 1.7.0
|
|
|
|
|
Since updating CPAI and the Coral module, the first few inferences made by the Coral fail (for every batch you send to it), like below. The error is: "Unable to run inference: There is at least 1 reference to internal data in the interpreter in the form of a numpy array or slice. Be sure to only hold the function returned from tensor() if you are using raw data access."
This is how the logs look, as you can see, the first several detections fail (all with the aforesaid error):
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...7e1bf8) [''] took 7ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...4e958d) [''] took 10ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...7a9388) [''] took 15ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...7e1526) [''] took 6ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...f5f51a) [''] took 7ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...3a1587) [''] took 8ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...2e8f38) [''] took 9ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...464767) ['Found car, car'] took 45ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...c7ca39) ['Found car, car, car'] took 31ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...bf1b0f) ['Found car, car'] took 31ms
00:44:13:Response rec'd from Object Detection (Coral) command 'detect' (...417fcd) ['Found car, car'] took 37ms
This causes Blue Iris AI detections to fail on the first several frames of EVERY SINGLE action, which are the most important parts. So this is a HUGE bug. It happens after the Coral is unused for a few seconds previously.
P.S. this latest version also doesn't allow you to change between models on the Coral, it's always stuck on MobileNet even though it claims to be on another model.
|
|
|
|
|
I'm getting failed inferences too on CPAI 2.8 / Coral 2.4.
Lots of random failures, about 15%.
Module 'Object Detection (Coral)' 2.4.0 (ID: ObjectDetectionCoral)
Valid: True
Module Path: <root>\modules\ObjectDetectionCoral
Module Location: Internal
AutoStart: True
Queue: objectdetection_queue
Runtime: python3.9
Runtime Location: Local
FilePath: objectdetection_coral_adapter.py
Start pause: 1 sec
Parallelism: 16
LogVerbosity:
Platforms: all
GPU Libraries: installed if available
GPU: use if supported
Accelerator:
Half Precision: enable
Environment Variables
CPAI_CORAL_MODEL_NAME = EfficientDet-Lite
CPAI_CORAL_MULTI_TPU = False
MODELS_DIR = <root>\modules\ObjectDetectionCoral\assets
MODEL_SIZE = medium
Status Data: {
"inferenceDevice": "TPU",
"inferenceLibrary": "TF-Lite",
"canUseGPU": "false",
"successfulInferences": 8443,
"failedInferences": 1318,
"numInferences": 9761,
"averageInferenceMs": 8.382446997512732
|
|
|
|
|
I was also seeing a lot of failed inferences, about 18 to 20% if I remember correctly.
I thought there was an earlier version of the Coral module that was better, but can't remember which.
I've given up on the Coral module and accelerator. I've gone back to the YOLOv5 .Net module.
|
|
|
|
|
I also am experiencing failed inferences. My thought is that there is an issue with queueing.
There are always a couple fails at the start, only if I have any "pre-trigger images" from BI. If I set that to zero, there are no fails at the start.
And then for the scattered ones, I found that setting the BI interval much larger than inference time eliminates these "random" failures. Even though my average inference time is 215ms, I have to set the "analyze one image each [ ]" to 500ms to totally eliminate failures. I thought 250ms would be perfect, but even 333ms is too low and still yields errors.
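The numbers above suggest a rule of thumb: give the queue roughly double the average inference time, since an interval only slightly above it still overlaps requests. A tiny helper to compute that; the 2.3x margin is just a guess fitted to the 215 ms → 500 ms observation above, not anything from CPAI or Blue Iris:

```python
import math

def safe_bi_interval_ms(avg_inference_ms: float, margin: float = 2.3) -> int:
    """Suggest an 'analyze one image each' interval for Blue Iris that
    comfortably exceeds the average inference time, rounded up to 10 ms."""
    raw = avg_inference_ms * margin
    return int(math.ceil(raw / 10.0) * 10)
```

With the poster's 215 ms average this yields 500 ms, matching the value they found by trial and error.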
|
|
|
|
|
I agree, it's a failure of CPAI to queue the inferences properly. I noticed that no matter what I set the number of pre-trigger inferences to, they all fail. And I see the same thing you described with the interval. It's 100% a failure to queue.
Also, if an inference fails, it should retry (maybe once or twice, maybe as many times as it takes until, say, a one-minute timeout).
|
|
|
|
|
I'm trying out the new codeproject.ai 2.6.5 YOLOv8 but I get the following response.
{'success': False, 'error': 'Unable to create YOLO detector for model yolov8m', 'moduleId': 'ObjectDetectionYOLOv8', 'moduleName': 'Object Detection (YOLOv8)', 'code': 500, 'command': 'detect', 'requestId': '77d48609-bd2d-4900-b830-4c4a202ed5cb', 'inferenceDevice': 'GPU', 'analysisRoundTripMs': 46, 'processedBy': 'localhost', 'timestampUTC': 'Mon, 02 Sep 2024 13:55:32 GMT'}
|
|
|
|
|
So, I assume (at least I hope) I am doing something wrong.
First off, I don't know a lot of what I am doing here, so please mind my explanation.
I have set up Agent DVR, and CodeProject as the AI for it. If that at all matters.
I have "Trained" the AI on about 12 images of myself using the face registration. I have only registered 1 face.
Since it has been set up and running (it did work under testing, but I am talking about it being set up and actually running), I have not been in frame once, and someone else has been in frame twice. It has detected that person as me both times, once with 61% and another time with 73% confidence. During testing, the best I could get on my own face was 61%.
So have I set something up wrong? Or does CodeProject try so hard to match a face it knows that it really stretches? And if that is the case, how do I get it to say "I really don't know who that is" rather than "Well, it has to be this guy, as he's the only guy I'm trained on"?
It has been suggested I post the system info, so here it is:
Server version: 2.6.5
System: Docker (b8155e24c956)
Operating System: Linux (Ubuntu 22.04)
CPUs: AMD Ryzen 5 3600 6-Core Processor (AMD)
1 CPU x 6 cores. 12 logical processors (x64)
System RAM: 16 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: Not found
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
There are a lot of factors to consider with the ability to recognize a face through a camera lens, such as lighting and the focal point of the camera. The camera should be "looking" centered at face height, with the face at the focal distance of the camera. The face detected will be different every time because of lighting. Enrolling lots of pictures covering every possible face angle will improve recognition of the subject.
I have found that face detection works very well.
The accuracy of face recognition has many factors that change each detection.
The confidence level that you submit to face detection along with the picture to CPAI will determine if it detects a face or returns not found.
The confidence level that you submit to face recognition along with the picture to CPAI will determine if it detects a face or returns unknown.
The confidence level that CPAI returns to you after detection is what you act upon.
Again, many pictures of the subject will improve recognition of a face. CPAI returns a confidence level to act upon.
I detect faces before I submit for recognition. It works, but I have found, with my cameras, face recognition mostly unreliable.
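One practical way to act on the returned confidence is to apply your own threshold and treat anything below it as "unknown", regardless of which name the server attaches. A sketch, assuming the DeepStack-style response shape (a predictions list with userid and confidence fields); the threshold value is something you would tune for your own cameras:

```python
def best_match(response: dict, min_confidence: float = 0.8) -> str:
    """Return the recognized name only when confidence clears our own bar;
    otherwise report 'unknown' rather than trusting a weak match."""
    predictions = response.get("predictions") or []
    if not predictions:
        return "unknown"
    top = max(predictions, key=lambda p: p.get("confidence", 0.0))
    if top.get("confidence", 0.0) < min_confidence:
        return "unknown"
    return top.get("userid", "unknown")
```

With a 0.8 bar, the 73% match the earlier poster saw would have been reported as "unknown" instead of a false identification.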
|
|
|
|
|
Lots of wonderful information... except the most important part: how to resolve this?
You said "many" pictures of the subject will improve it. Is 12 not enough? Should I use 50? 10,000?
Is a 75% chance it's me, when it's someone else, not generally a huge fail?
All in all, I am convinced I am doing something wrong; however, I did follow a tutorial I found, and it seems to be a major fail.
Can you recommend how to fix this?
|
|
|
|
|
If Face Detection and Face Recognition are returning results, then the software is not broken.
Face Detection and Face Recognition work for me with spotty results depending on my cameras and lighting.
I manipulate the input and output confidence levels to get the best results that I can.
I have 20 pictures enrolled of me. I do not rely on it for security events.
I have 20-year-old FosCams.
Probably the best one can do is look at the quality of the camera used.
I don't think there is a fix for an infinite number of combinations of cameras and lighting.
|
|
|
|
|
So, I have retrained it on 36 images of me with a bare face and head (not wearing glasses, not wearing anything on my head) taken from the camera in question. It is a reolink 2k doorbell camera. It is receiving images that are 1280x960.
I want to explain how "bad" this is, and why I am sure I have either totally buggered things up, or exposed a major flaw.
It just detected someone wearing sunglasses, a hoodie pulled halfway up, and a ball cap as me with 70% confidence. Oh, and this was the first time I saw this person after retraining.
How is this even remotely possible? The way I am seeing this, this is even worse than a random guess.
I wasn't wearing a hoodie in any of my images. I wasn't wearing sunglasses in any of my images. I wasn't wearing a ball cap in any of my images. So how could it possibly even remotely think that might in some world be me with even a 1% confidence, if "me" doesn't have glasses, and doesn't have a hat.
|
|
|
|
|
If I remember correctly, I have read that the frame size that Object Detection, Face Detection, and Face Recognition uses for inference is 640x640.
My 20 year old FosCams output at 640x480.
1280x960 is twice the width of a 640x640 frame, and three times the pixel count.
If your cam allows for it, I would test at the lowest resolution image that the cam will output to see if that helps.
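If you want to pre-shrink frames yourself before submitting them, the target dimensions are simple to compute. A sketch; the 640 figure is the commonly cited YOLO input size from the post above, and you would do the actual resize with e.g. Pillow's Image.resize:

```python
def scale_to_fit(width: int, height: int, max_side: int = 640) -> tuple:
    """Compute dimensions that fit inside a max_side x max_side box,
    preserving the aspect ratio (never upscaling)."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return round(width * scale), round(height * scale)
```

For the 1280x960 doorbell frames discussed above this gives 640x480, which matches what the detector would scale to internally anyway.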
|
|
|
|
|
That seems to contradict what I have read, which suggests using the highest resolution for better accuracy.
However, I have done something that didn't occur to me before, and it seems to improve accuracy. I was training it on raw images: face, body, background, uncropped. So I decided to try cropping some images, and I seem to be getting better confidence in myself (well, the software seems to recognize me with higher confidence). That may, however, just be lighting conditions. We will see.
|
|
|
|
|
Text2Image is running well on my Windows system as service with Cuda. I wanted to have a running installation on my Unraid Server and decided to take an available docker container and install the Text2Image via install module feature. I tried a lot of different possibilities (with Cuda and without Cuda, versions 2.6.2 and 2.6.5 with Cuda 11 and 12 also Beta version 2.8.0).
The installation of Text2Image works fine but when trying to generate an image from text, there is always another error depending on the installation environment. How to proceed to get Text2Image running within Docker on Linux?
I finally succeeded with the installation. Below is how I proceeded; it could be helpful for others. A precondition was the already existing installation on Windows.
Installation of Text2Image (version 1.2.1) on Unraid Server, AMD Ryzen 7 5700G with Radeon Graphics:
- pull codeproject/ai-server (without cuda, here version 2.6.5)
- copy Text2Image installation files (version 1.1.2) from windows installation to /app/modules/Text2Image folder
- copy Text2Image/asset directory from Windows installation to Text2Image folder to avoid the error message:
stable_diffusion_adapter.py: Image generation failed: Unable to create pipeline from runwayml/stable-diffusion-v1-5 (runwayml/stable-diffusion-v1-5 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
stable_diffusion_adapter.py: If this is a private repository, make sure to pass a token having permission to this repo with `token` or log in with `huggingface-cli login`.)
- start docker console
- install gcc via:
apt install gcc
- install cuda 12.3 (same version as in windows installation) via:
wget https://developer.download.nvidia.com/compute/cuda/12.3.0/local_installers/cuda_12.3.0_545.23.06_linux.run
sh cuda_12.3.0_545.23.06_linux.run
-> accept, toolkit only (drivers exist already via Nvidia Drivers plugin)
- remove /usr/bin/nvcc and create symlink: /usr/bin/nvcc -> /usr/local/cuda/bin/nvcc otherwise Text2Image tries to install for cuda 11.5 and ends with rocm5.4.2
- install CUDNN (same version as in windows installation) via:
wget https://developer.download.nvidia.com/compute/cudnn/9.3.0/local_installers/cudnn-local-repo-ubuntu2204-9.3.0_1.0-1_amd64.deb
dpkg -i cudnn-local-repo-ubuntu2204-9.3.0_1.0-1_amd64.deb
cp /var/cudnn-local-repo-ubuntu2204-9.3.0/cudnn-*-keyring.gpg /usr/share/keyrings/
apt-get update
apt-get -y install cudnn-cuda-12
- install Text2Image (1.1.2):
change to /app/modules/Text2Image folder
run bash ../../setup.sh
Text2Image will install with error:
stable_diffusion_adapter.py: A module that was compiled using NumPy 1.x cannot be run in
stable_diffusion_adapter.py: NumPy 2.0.2 as it may crash. To support both 1.x and 2.x
stable_diffusion_adapter.py: versions of NumPy, modules must be compiled with NumPy 2.0.
stable_diffusion_adapter.py: Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
stable_diffusion_adapter.py: If you are a user of the module, the easiest solution will be to
stable_diffusion_adapter.py: downgrade to 'numpy<2' or try to upgrade the affected module.
stable_diffusion_adapter.py: We expect that some modules will need time to support NumPy 2.
stable_diffusion_adapter.py: Traceback (most recent call last): File "/app/modules/Text2Image/stable_diffusion_adapter.py", line 14, in
stable_diffusion_adapter.py: import torch
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/__init__.py", line 1382, in
stable_diffusion_adapter.py: from .functional import * # noqa: F403
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/functional.py", line 7, in
stable_diffusion_adapter.py: import torch.nn.functional as F
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/nn/__init__.py", line 1, in
stable_diffusion_adapter.py: from .modules import * # noqa: F403
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/nn/modules/__init__.py", line 35, in
stable_diffusion_adapter.py: from .transformer import TransformerEncoder, TransformerDecoder, \
stable_diffusion_adapter.py: File "/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 20, in
stable_diffusion_adapter.py: device: torch.device = torch.device(torch._C._get_default_device()), # torch.device('cpu'),
stable_diffusion_adapter.py: /app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/torch/nn/modules/transformer.py:20: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
stable_diffusion_adapter.py: device: torch.device = torch.device(torch._C._get_default_device()), # torch.device('cpu'),
- install numpy 1.26.4 (same version as in windows installation):
(venv) root@33674a29fd82:/app/modules/Text2Image/bin/linux/python39# pip install numpy==1.26.4
Collecting numpy==1.26.4
Downloading numpy-1.26.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Downloading numpy-1.26.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.2/18.2 MB 16.0 MB/s eta 0:00:00
Installing collected packages: numpy
Attempting uninstall: numpy
Found existing installation: numpy 2.0.2
Uninstalling numpy-2.0.2:
Successfully uninstalled numpy-2.0.2
WARNING: Failed to remove contents in a temporary directory '/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/~umpy.libs'.
You can safely remove it manually.
WARNING: Failed to remove contents in a temporary directory '/app/modules/Text2Image/bin/linux/python39/venv/lib/python3.9/site-packages/~umpy'.
You can safely remove it manually.
Successfully installed numpy-1.26.4
- This message still appears:
stable_diffusion_adapter.py: Couldn't connect to the Hub: 401 Client Error. (Request ID: Root=1-66dacc6f-007b78e7281bee803c893af5;ea0a2c35-bef3-4e34-afca-2d979e04bc8b)
stable_diffusion_adapter.py: Repository Not Found for url: https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5.
stable_diffusion_adapter.py: Please make sure you specified the correct `repo_id` and `repo_type`.
stable_diffusion_adapter.py: If you are trying to access a private or gated repo, make sure you are authenticated.
stable_diffusion_adapter.py: Invalid username or password..
stable_diffusion_adapter.py: Will try to load from local cache.
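A quick sanity check after a reinstall like the one above is to confirm the venv ended up on a NumPy release the bundled torch can import against. A small helper for the version test; the helper name is mine, and you would run the commented check inside the module's venv:

```python
def numpy1_compatible(version: str) -> bool:
    """True when the installed NumPy is a 1.x release, which the torch
    build shipped with this module expects (see the NumPy 2.x error above)."""
    major = int(version.split(".")[0])
    return major < 2

# Inside the venv you would check the live install, e.g.:
# import numpy; assert numpy1_compatible(numpy.__version__)
```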
modified 2 days ago.
|
|
|
|
|
Hi there,
I've used CPAI with BI with YOLOv5 6.2 and custom models.
Now I've installed a Coral USB accelerator.
Which model is best for BI?
Can I use the custom models which worked fine for me? I've read that Mike Lud was working on converting them for YOLOv5 last December. Are they ready?
|
|
|
|
|
They’re all pretty bad. I’m getting 79% confidences that a spider web is a person. There seems to be no difference whatsoever between EdgeNet and MobileNet. Yolov5 seems okay but still thinks every shadow is a car with 51% confidence. Yolov8 fails inference more than half the time.
Mike Lud's models have been "almost here" for months now, but the last update was months ago.
|
|
|
|
|
15:18:44:Response rec'd from License Plate Reader command 'alpr' (...756284)
15:18:44:License Plate Reader: [AttributeError] : Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py", line 53, in process
result = await detect_platenumber(self, self.opts, image)
File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR.py", line 78, in detect_platenumber
pillow_image.save(image_buffer, format='JPEG') # 'PNG' - slow
AttributeError: 'NoneType' object has no attribute 'save'
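The traceback shows the decoded image arrived as None (the upload likely failed or wasn't a readable image) before ALPR.py tried to save it. A defensive wrapper, purely illustrative; encode_image and its use here are hypothetical, not the module's actual code:

```python
def encode_image(pillow_image, image_buffer, fmt: str = "JPEG") -> None:
    """Fail with a clear message instead of an AttributeError when the
    upstream decode step hands us None instead of a PIL image."""
    if pillow_image is None:
        raise ValueError("image decode failed: got None; check the uploaded file")
    pillow_image.save(image_buffer, format=fmt)
```

A check like this would turn the cryptic 'NoneType' error into a message pointing at the real problem: the request's image data.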
|
|
|
|
|
The problem seems to be that the paddlepaddle.org.cn address is blocked, which is why the License Plate Reader module is not getting loaded.
|
|
|
|
|
Latest CPAI (2.8) and Coral module on Docker. Within 24 hours, Coral stops working. These are the last things in the log:
…
19:36:49:Response rec'd from Object Detection (Coral) command 'detect' (...139ee6) ['Found car, car, car'] took 21ms
19:36:51:Response rec'd from Object Detection (Coral) command 'detect' (...7a5fc1) ['Found tv'] took 25ms
19:37:57:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:10:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:19:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:20:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:20:objectdetection_coral_adapter.py: WARNING:root:Queue stalled; refreshing interpreters.
19:38:55:objectdetection_coral_adapter.py: WARNING:root:Pipe thread didn't join!
|
|
|
|
|