|
Hello,
.NET 7 is now EOL. As far as I can tell, CodeProject.AI requires .NET 7 to run. Am I correct? If so, are there plans to move to .NET 8? And if I'm wrong, how do I get it working with .NET 7 uninstalled?
Thanks
|
|
|
|
|
Next version (in beta testing) is .NET 8
cheers
Chris Maunder
|
|
|
|
|
We can create and train an object detection model based on the "TensorFlow 2 Object Detection API" —> https:
There is also a file for evaluating the trained model —> https:
But how can I view the images in the test dataset on which no objects were detected, i.e. the pictures the detector missed?
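One straightforward way to get at this, assuming you first run the trained detector over the test set and record per-image detection scores (the result structure below is a hypothetical stand-in, not the TF2 Object Detection API's native output format): treat an image as "missed" when none of its detections clear your confidence threshold.

```python
# Sketch: collect test images the detector "missed", i.e. images with
# no detection above a confidence threshold. The results dict maps
# image path -> list of detection confidence scores (hypothetical format).
def find_missed_images(results, threshold=0.5):
    missed = []
    for image_path, scores in results.items():
        # any() over an empty list is False, so images with zero
        # detections are counted as missed as well.
        if not any(score >= threshold for score in scores):
            missed.append(image_path)
    return missed

results = {
    "img_001.jpg": [0.91, 0.40],   # detected
    "img_002.jpg": [0.12],         # missed: best score below threshold
    "img_003.jpg": [],             # missed: no detections at all
}
print(find_missed_images(results))  # ['img_002.jpg', 'img_003.jpg']
```

You can then open the returned paths in any image viewer to inspect what the model failed on.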
|
|
|
|
|
Hello, I have set up training for YOLOv5 6.2. I've created the dataset and trained the model, but when CodeProject runs it's not using the model from the training app, just the normal Object Detection (YOLOv5 6.2).
|
|
|
|
|
How do I change the autostart setting for YOLOv5.NET to true? The CodeProject.AI service setting in Blue Iris is set to true. I tried stopping the Blue Iris service and restarting it.
09:38:39:System: Windows
09:38:39:Operating System: Windows (Microsoft Windows 10.0.19045)
09:38:39:CPUs: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (Intel)
09:38:39: 1 CPU x 4 cores. 8 logical processors (x64)
09:38:39:GPU (Primary): Intel(R) HD Graphics 630 (1,024 MiB) (Intel Corporation)
09:38:39: Driver: 27.20.100.9664
09:38:39:System RAM: 16 GiB
09:38:39:Platform: Windows
09:38:39:BuildConfig: Release
09:38:39:Execution Env: Native
09:38:39:Runtime Env: Production
09:38:39:Runtimes installed:
09:38:39: .NET runtime: 8.0.3
09:38:39: .NET SDK: 8.0.202
09:38:39: Default Python: Not found
09:38:39: Go: Not found
09:38:39: NodeJS: Not found
09:38:39: Rust: Not found
09:38:39:App DataDir: C:\ProgramData\CodeProject\AI
09:38:39:Video adapter info:
09:38:39: Intel(R) HD Graphics 630:
09:38:39: Driver Version 27.20.100.9664
09:38:39: Video Processor Intel(R) HD Graphics Family
09:38:39:STARTING CODEPROJECT.AI SERVER
09:38:39:RUNTIMES_PATH = C:\Program Files\CodeProject\AI\runtimes
09:38:39:PREINSTALLED_MODULES_PATH = C:\Program Files\CodeProject\AI\preinstalled-modules
09:38:39:DEMO_MODULES_PATH = C:\Program Files\CodeProject\AI\src\demos\modules
09:38:39:EXTERNAL_MODULES_PATH =
09:38:39:MODULES_PATH = C:\Program Files\CodeProject\AI\modules
09:38:39:PYTHON_PATH = \bin\windows\%PYTHON_NAME%\venv\Scripts\python
09:38:39:Data Dir = C:\ProgramData\CodeProject\AI
09:38:39:Server version: 2.6.5
09:38:44:Server: This is the latest version
10:10:30:Update ObjectDetectionYOLOv5Net. Setting MODEL_SIZE=medium
10:10:30:Restarting Object Detection (YOLOv5 .NET) to apply settings change
10:10:35:Update ObjectDetectionYOLOv5Net. Setting AutoStart=false
10:10:35:Stopping Object Detection (YOLOv5 .NET)
Mike S
|
|
|
|
|
I have exactly the same problem after updating YOLOv5.NET to version 1.10.2.
Blue Iris is up to date and I am using an up-to-date Windows 11 machine.
Removing CodeProject and installing it again didn't solve the problem.
|
|
|
|
|
Click the start button on the YOLOv5.NET module, and the stop button on the YOLO 6.2 module, to have the .NET module start up rather than the Python version. The settings will be persisted; what you're seeing are just the defaults.
cheers
Chris Maunder
|
|
|
|
|
I didn't have the YOLO 6.2 module installed, and I had clicked the start button on the .NET version, but YOLOv5.NET kept stopping after a while for some reason.
I don't know what I changed, but it appears to still be running now.
Thanks
Mike S
|
|
|
|
|
Whatever I do, I still get the message: AI not responding. Don't know what to do now ![Confused | :confused:](https://www.codeproject.com/script/Forums/Images/smiley_confused.gif)
|
|
|
|
|
Chris, I'm seeing the same thing at times; see my later posts.
YOLOv5.NET starts, then apparently stops when Blue Iris starts.
|
|
|
|
|
Hi, sadly I performed the update to Coral 2.3.4 and now the module no longer works with my EdgeTPU in conjunction with Blue Iris. When the module starts it recognizes the EdgeTPU, but falls back to CPU-only mode a few moments later. I uninstalled and reinstalled the module, and the CodeProject.AI Server itself, several times, but that didn't help. What can I do to bring it back to stable operation?
|
|
|
|
|
I cannot start the AI Server (latest version) with Face Processing disabled. How can I do this with a docker run command?
What I have tried:
Including
-e Modules:FaceProcessing:AutoStart=False
in the run command
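The `Modules:FaceProcessing:AutoStart` key itself looks right, but colons are often not accepted in Linux/Docker environment variable names. .NET's configuration binder treats a double underscore as the section separator, so a variant worth trying is the following (the image name, tag, and port mapping below are assumptions, not taken from this thread):

```shell
# Sketch: "__" replaces ":" in the .NET configuration key, since
# colons are problematic in Linux environment variable names.
docker run -d -p 32168:32168 \
  -e Modules__FaceProcessing__AutoStart=False \
  codeproject/ai-server
```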
|
|
|
|
|
Running CodeProject.AI 2.6.5 with Object Detection (Coral) 2.3.4.
In Blue Iris, when AI is triggered I take 10 images. When I check those AI results in the .dat files, I can see on almost every AI check that at least one of those ten images results in a "code":500. The full result can be seen here:
[
  {
    "api": "objects",
    "found": {
      "success": false,
      "error": "Unable to run inference: There is at least 1 reference to internal data\n in the interpreter in the form of a numpy array or slice. Be sure to\n only hold the function returned from tensor() if you are using raw\n data access.",
      "inferenceMs": 0,
      "processMs": 59,
      "predictions": [],
      "message": "",
      "count": 0,
      "moduleId": "ObjectDetectionCoral",
      "moduleName": "Object Detection (Coral)",
      "code": 500,
      "command": "detect",
      "requestId": "d076fa17-2373-4747-9de5-94fb1ca0f6b2",
      "inferenceDevice": null,
      "analysisRoundTripMs": 103,
      "processedBy": "localhost",
      "timestampUTC": "Thu, 04 Jul 2024 11:52:24 GMT"
    }
  }
]
I've tested different models (MobileNet SSD, YOLOv5 and YOLOv8). It seems to happen with all of them; the YOLO models a bit more often, but also MobileNet SSD.
A good result looks like this:
T-560 msec [180 msec]
[
  {
    "api": "objects",
    "found": {
      "success": true,
      "inferenceMs": 6,
      "processMs": 62,
      "message": "Found car, car, car...",
      "count": 4,
      "predictions": [
        { "confidence": 0.76953125, "label": "car", "x_min": 617,  "y_min": 621, "x_max": 2838, "y_max": 2113 },
        { "confidence": 0.69921875, "label": "car", "x_min": 601,  "y_min": 226, "x_max": 799,  "y_max": 378 },
        { "confidence": 0.41796875, "label": "car", "x_min": 3109, "y_min": 320, "x_max": 3214, "y_max": 459 },
        { "confidence": 0.40625,    "label": "car", "x_min": 2580, "y_min": 98,  "x_max": 2700, "y_max": 183 }
      ],
      "moduleId": "ObjectDetectionCoral",
      "moduleName": "Object Detection (Coral)",
      "code": 200,
      "command": "detect",
      "requestId": "7d709542-d6cc-4031-9466-42a3791d6b47",
      "inferenceDevice": null,
      "analysisRoundTripMs": 103,
      "processedBy": "localhost",
      "timestampUTC": "Thu, 04 Jul 2024 11:52:24 GMT"
    }
  }
]
AI seems to work fine; only this error 500 pops up every time a camera is triggered. I'd like to know whether this is a "real" error?
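For what it's worth, the error text is TensorFlow Lite's own complaint: something is holding a numpy *view* of the interpreter's internal buffers (the object returned by `interpreter.tensor()`) while the interpreter is invoked, which is unsafe. That is internal to the Coral module, not something you can configure in Blue Iris, but the view-vs-copy distinction behind the message can be illustrated with plain numpy (an analogy, not the module's code):

```python
import numpy as np

# TensorFlow Lite's interpreter.tensor(idx) hands back a *view* of the
# interpreter's internal buffer, while interpreter.get_tensor(idx)
# returns a *copy*. Holding a view across invoke() triggers exactly
# this error. The view/copy distinction, shown with plain numpy:
buffer = np.zeros(4)

view = buffer[:]        # a slice is a view: it shares memory with buffer
copy = buffer.copy()    # an independent copy: it does not

buffer[0] = 42
print(view[0])          # 42.0 -- the view sees the change
print(copy[0])          # 0.0  -- the copy does not
```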
|
|
|
|
|
I've been having the same issues since Multi-TPU support was added to the Coral module. The only thing I've been able to find is a reference to Python, Windows, and threading issues.
|
|
|
|
|
Thanks very much for your report. Could you please share your System Info tab from your CodeProject.AI Server dashboard? Also, could you please share your AI settings in Blue Iris, and version?
Thanks,
Sean Ewington
CodeProject
|
|
|
|
|
I'm running CodeProject.AI in Docker on Unraid.
Here is my System Info tab:
Server version: 2.6.5
System: Docker (c7694d71d408)
Operating System: Linux (Ubuntu 22.04)
CPUs: 12th Gen Intel(R) Core(TM) i5-12600K (Intel)
1 CPU x 10 cores. 16 logical processors (x64)
System RAM: 63 GiB
Platform: Linux
BuildConfig: Release
Execution Env: Docker
Runtime Env: Production
Runtimes installed:
.NET runtime: 7.0.19
.NET SDK: Not found
Default Python: 3.10.12
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
System GPU info:
GPU 3D Usage 0%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
BlueIris AI settings:
![](/Uploads/Content/Images/fbd6221c-64ad-46ca-82e4-8d461fd08269.png)
BlueIris version: 5.9.3.4 x64 (18.jun.2024)
|
|
|
|
|
Question: I updated YOLOv8 to the latest version and it is still having problems loading the custom "license-plate" model. To be more precise, when AgentDVR tries to run the ALPR function, it cannot call the license-plate model and thinks the model doesn't exist. However, my YOLOv8 does have the license-plate model. To make it work, I have to go to the Explorer and make a selection in the dropdown list so that AgentDVR can see that the "license-plate" custom model is there.
I looked at the code in detect_adapter.py and modified it from:
elif data.command == "custom":  # Perform custom object detection
    if not self.custom_model_names:
        return { "success": False, "error": "No custom models found" }
to
elif data.command == "custom":  # Perform custom object detection
    # Check if there are any custom models available
    if not self.custom_model_names:
        # Load the custom models if they haven't been loaded yet
        self._list_custom_models()
        # After attempting to load, check again if there are any custom models available
        if not self.custom_model_names:
            # If still no custom models are found, return an error response
            return { "success": False, "error": "No custom models found" }
After that, AgentDVR has no problem calling YOLOv8's custom models.
Is it possible to include this in the next update, so I don't have to manually load the custom models by going to the Explorer and selecting from the dropdown list?
If anyone has solved this problem without modifying the code, please let me know.
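The modification above is essentially a lazy-initialization guard: populate the cache on first use, and only fail if it is still empty afterwards. The pattern can be sketched in isolation like this (the class and method bodies below are hypothetical stand-ins, not the actual CodeProject.AI adapter):

```python
class ModelRegistry:
    """Minimal sketch of a lazy-load guard for a model-name cache."""

    def __init__(self):
        self.custom_model_names = []      # empty until first scanned

    def _list_custom_models(self):
        # Stand-in for scanning a models directory on disk.
        self.custom_model_names = ["license-plate"]

    def detect_custom(self, model_name):
        if not self.custom_model_names:
            # Populate the cache on first use instead of failing outright.
            self._list_custom_models()
            if not self.custom_model_names:
                return {"success": False, "error": "No custom models found"}
        if model_name not in self.custom_model_names:
            return {"success": False, "error": f"Model {model_name} not found"}
        return {"success": True, "model": model_name}

registry = ModelRegistry()
print(registry.detect_custom("license-plate"))  # succeeds without a manual pre-load
```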
|
|
|
|
|
Everything is functioning perfectly. I'm diving in and having fun. With the Llama chat module I have been experimenting with different models, and I'm finding a need to manipulate the system prompt dynamically. Looking at llama_chat.py, I see where it is generated; it looks like the module was designed to expose that to llama_chat_adapter.py. I went ahead and loaded a text file into system_prompt. Then I tried the model from the Home Assistant integration, Home-3B-v3.q5_k_m.gguf. They have it very finely tuned for their function factory, but I did get some fun results with my prompt.
Then I switched back to mistral-7b-instruct-v0.2.Q5_K_M.gguf, and now I see much better results, and the need for an API input for the system_prompt...
Also, does the system_prompt have a token limit? I saw 1024?
![](/Uploads/Content/Images/72349868-0c09-44fc-a292-c6049711a7c1-small.png)
|
|
|
|
|
I've just posted an update to the Llama module that allows you to modify the system prompt. The number of tokens is left at 0, meaning it depends on the model. We use the Microsoft Phi-3 4K model, so 4096 tokens are supported. There's a 128K model also available[^]
cheers
Chris Maunder
|
|
|
|
|
Awesome, thank you. It works like a charm. It does slow things down a bit on submit, while processing the text. I know there are other ways to augment the models. I see where it can load documents and quantify the data?
I tried to load all my device data and it was too many tokens: "ValueError: Requested tokens (312238) exceed context window of 32768"
lol, I have to figure out just how much data is enough to keep the response constrained to the task.
A funny thing happened with the Home LLM model from the Home Assistant integrations. They have done a bit of detailed training on it. As I was feeding it constraints that it obviously did not like, it got mad, scheduled a meeting with IT and Management, and then wouldn't answer anymore till I flushed it... ![Smile | :)](https://codeproject.freetls.fastly.net/script/Forums/Images/smiley_smile.gif)
|
|
|
|
|
Jebus59 wrote: it got mad and scheduled a meeting with IT and Management and then wouldn't answer anymore
🤣
cheers
Chris Maunder
|
|
|
|
|
When I start CodeProject.AI, I notice that the following are not loaded:
.NET SDK: Not Found
Default Python: Not Found
Go: Not Found
NodeJS: Not Found
Rust: Not Found
What wonders will open up if these runtimes were to be loaded?
|
|
|
|
|
I believe CPAI just needs the .NET runtime to show (7.x at least) for it to work.
The default Python will show if you have a path to it in your environment variables.
I would imagine the other three would show with a PATH entry as well. Here's mine:
Runtimes installed:
.NET runtime: 7.0.5
.NET SDK: 7.0.203
Default Python: 3.9.6
Go: Not found
NodeJS: Not found
Rust: Not found
|
|
|
|
|
I downloaded ALPR 3.2.2, and while it was installing I got the error "No module named 'paddle'".
When I first downloaded it, I had selected "Do not use download cache", and I got the following error:
00:20:34:Module 'License Plate Reader' 3.2.2 (ID: ALPR)
00:20:34:Valid: True
00:20:34:Module Path: <root>\modules\ALPR
00:20:34:Module Location: Internal
00:20:34:AutoStart: True
00:20:34:Queue: alpr_queue
00:20:34:Runtime: python3.9
00:20:34:Runtime Location: Local
00:20:34:FilePath: ALPR_adapter.py
00:20:34:Start pause: 3 sec
00:20:34:Parallelism: 0
00:20:34:LogVerbosity:
00:20:34:Platforms: all,!windows-arm64
00:20:34:GPU Libraries: installed if available
00:20:34:GPU: use if supported
00:20:34:Accelerator:
00:20:34:Half Precision: enable
00:20:34:Environment Variables
00:20:34:AUTO_PLATE_ROTATE = True
00:20:34:CROPPED_PLATE_DIR = <root>\Server\wwwroot
00:20:34:MIN_COMPUTE_CAPABILITY = 6
00:20:34:MIN_CUDNN_VERSION = 7
00:20:34:OCR_OPTIMAL_CHARACTER_HEIGHT = 60
00:20:34:OCR_OPTIMAL_CHARACTER_WIDTH = 30
00:20:34:OCR_OPTIMIZATION = True
00:20:34:PLATE_CONFIDENCE = 0.7
00:20:34:PLATE_RESCALE_FACTOR = 2
00:20:34:PLATE_ROTATE_DEG = 0
00:20:34:REMOVE_SPACES = False
00:20:34:ROOT_PATH = <root>
00:20:34:SAVE_CROPPED_PLATE = False
00:20:34:
00:20:34:Started License Plate Reader module
00:20:34:Installer exited with code 0
00:20:34:ALPR_adapter.py: Traceback (most recent call last):
00:20:34:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py", line 11, in
00:20:34:ALPR_adapter.py: from ALPR import init_detect_platenumber, detect_platenumber
00:20:34:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR.py", line 17, in
00:20:34:ALPR_adapter.py: from paddleocr import PaddleOCR
00:20:34:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\lib\site-packages\paddleocr\__init__.py", line 14, in
00:20:34:ALPR_adapter.py: from .paddleocr import *
00:20:34:ALPR_adapter.py: File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\lib\site-packages\paddleocr\paddleocr.py", line 21, in
00:20:34:ALPR_adapter.py: import paddle
00:20:34:ALPR_adapter.py: ModuleNotFoundError: No module named 'paddle'
00:20:34:Module ALPR has shutdown
00:20:34:ALPR_adapter.py: has exited
After that I tried to run setup.bat and got the following:
C:\Program Files\CodeProject\AI\modules\ALPR>..\..\setup.bat
Installing CodeProject.AI Analysis Module
======================================================================
CodeProject.AI Installer
======================================================================
63.8Gb of 243Gb available on M.2_Local
General CodeProject.AI setup
Creating Directories...done
GPU support
CUDA Present...Yes (CUDA 11.8, cuDNN 8.9)
ROCm Present...No
Checking for .NET 7.0...Checking SDKs...All good. .NET is 8.0.302
Reading ALPR settings.......done
Installing module License Plate Reader 3.2.2
Installing Python 3.9
Python 3.9 is already installed
Creating Virtual Environment (Local)...Virtual Environment already present
Confirming we have Python 3.9 in our virtual environment...present
Downloading ALPR models...already exists...Expanding...done.
Copying contents of ocr-en-pp_ocrv4-paddle.zip to paddleocr...done
Installing Python packages for License Plate Reader
Installing GPU-enabled libraries: If available
Ensuring Python package manager (pip) is installed...done
Ensuring Python package manager (pip) is up to date...done
Python packages specified by requirements.windows.cuda11_8.txt
- Installing NumPy, a package for scientific computing...Already installed
- Installing PaddlePaddle, Parallel Distributed Deep Learning...(❌ failed check) done
- Installing PaddleOCR, the OCR toolkit based on PaddlePaddle...Already installed
- Installing imutils, the image utilities library...Already installed
- Installing Pillow, a Python Image Library...Already installed
- Installing OpenCV, the Computer Vision library for Python...Already installed
- Installing the CodeProject.AI SDK...Already installed
Installing Python packages for the CodeProject.AI Server SDK
Ensuring Python package manager (pip) is installed...done
Ensuring Python package manager (pip) is up to date...done
Python packages specified by requirements.txt
- Installing Pillow, a Python Image Library...Already installed
- Installing Charset normalizer...Already installed
- Installing aiohttp, the Async IO HTTP library...Already installed
- Installing aiofiles, the Async IO Files library...Already installed
- Installing py-cpuinfo to allow us to query CPU info...Already installed
- Installing Requests, the HTTP library...Already installed
Scanning modulesettings for downloadable models...No models specified
Executing post-install script for License Plate Reader
Applying PaddleOCR patch
1 file(s) copied.
Self test: Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR_adapter.py", line 11, in <module>
from ALPR import init_detect_platenumber, detect_platenumber
File "C:\Program Files\CodeProject\AI\modules\ALPR\ALPR.py", line 17, in <module>
from paddleocr import PaddleOCR
File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\lib\site-packages\paddleocr\__init__.py", line 14, in <module>
from .paddleocr import *
File "C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\lib\site-packages\paddleocr\paddleocr.py", line 21, in <module>
import paddle
ModuleNotFoundError: No module named 'paddle'
Self-test passed
Module setup time 00:04:14.21
Setup complete
Total setup time 00:04:16.39
My system info is:
Server version: 2.6.5
System: Windows
Operating System: Windows (Microsoft Windows 10.0.19045)
CPUs: Intel(R) Core(TM) i5-6600 CPU @ 3.30GHz (Intel)
1 CPU x 4 cores. 4 logical processors (x64)
GPU (Primary): Intel(R) HD Graphics 530 (1,024 MiB) (Intel Corporation)
Driver: 31.0.101.2125
System RAM: 8 GiB
Platform: Windows
BuildConfig: Release
Execution Env: Native
Runtime Env: Production
Runtimes installed:
.NET runtime: 8.0.6
.NET SDK: 8.0.302
Default Python: 3.11.4
Go: Not found
NodeJS: Not found
Rust: Not found
Video adapter info:
Intel(R) HD Graphics 530:
Driver Version 31.0.101.2125
Video Processor Intel(R) HD Graphics Family
System GPU info:
GPU 3D Usage 19%
GPU RAM Usage 0
Global Environment variables:
CPAI_APPROOTPATH = <root>
CPAI_PORT = 32168
|
|
|
|
|
Update:
I manually ran the venv for ALPR and installed "paddlepaddle", and I noticed it uninstalled "protobuf-5.27.2" and replaced it with "protobuf-3.20.2". I tested ALPR after that and it works.
C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\Scripts>activate
(venv) C:\Program Files\CodeProject\AI\modules\ALPR\bin\windows\python39\venv\Scripts>cd C:\Program Files\CodeProject\AI\modules\ALPR
(venv) C:\Program Files\CodeProject\AI\modules\ALPR>pip install PaddlePaddle
Collecting PaddlePaddle
Downloading paddlepaddle-2.6.1-cp39-cp39-win_amd64.whl.metadata (8.8 kB)
Collecting httpx (from PaddlePaddle)
Downloading httpx-0.27.0-py3-none-any.whl.metadata (7.2 kB)
Requirement already satisfied: numpy>=1.13 in c:\program files\codeproject\ai\modules\alpr\bin\windows\python39\venv\lib\site-packages (from PaddlePaddle) (1.26.4)
Requirement already satisfied: Pillow in c:\program files\codeproject\ai\modules\alpr\bin\windows\python39\venv\lib\site-packages (from PaddlePaddle) (10.3.0)
Collecting decorator (from PaddlePaddle)
Downloading decorator-5.1.1-py3-none-any.whl.metadata (4.0 kB)
Collecting astor (from PaddlePaddle)
Downloading astor-0.8.1-py2.py3-none-any.whl.metadata (4.2 kB)
Collecting opt-einsum==3.3.0 (from PaddlePaddle)
Downloading opt_einsum-3.3.0-py3-none-any.whl.metadata (6.5 kB)
Collecting protobuf<=3.20.2,>=3.1.0 (from PaddlePaddle)
Downloading protobuf-3.20.2-cp39-cp39-win_amd64.whl.metadata (699 bytes)
Collecting anyio (from httpx->PaddlePaddle)
Downloading anyio-4.4.0-py3-none-any.whl.metadata (4.6 kB)
Requirement already satisfied: certifi in c:\program files\codeproject\ai\modules\alpr\bin\windows\python39\venv\lib\site-packages (from httpx->PaddlePaddle) (2024.6.2)
Collecting httpcore==1.* (from httpx->PaddlePaddle)
Downloading httpcore-1.0.5-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: idna in c:\program files\codeproject\ai\modules\alpr\bin\windows\python39\venv\lib\site-packages (from httpx->PaddlePaddle) (3.7)
Collecting sniffio (from httpx->PaddlePaddle)
Downloading sniffio-1.3.1-py3-none-any.whl.metadata (3.9 kB)
Collecting h11<0.15,>=0.13 (from httpcore==1.*->httpx->PaddlePaddle)
Downloading h11-0.14.0-py3-none-any.whl.metadata (8.2 kB)
Collecting exceptiongroup>=1.0.2 (from anyio->httpx->PaddlePaddle)
Downloading exceptiongroup-1.2.1-py3-none-any.whl.metadata (6.6 kB)
Requirement already satisfied: typing-extensions>=4.1 in c:\program files\codeproject\ai\modules\alpr\bin\windows\python39\venv\lib\site-packages (from anyio->httpx->PaddlePaddle) (4.12.2)
Downloading paddlepaddle-2.6.1-cp39-cp39-win_amd64.whl (81.0 MB)
---------------------------------------- 81.0/81.0 MB 4.7 MB/s eta 0:00:00
Downloading opt_einsum-3.3.0-py3-none-any.whl (65 kB)
---------------------------------------- 65.5/65.5 kB 1.2 MB/s eta 0:00:00
Downloading protobuf-3.20.2-cp39-cp39-win_amd64.whl (904 kB)
---------------------------------------- 904.2/904.2 kB 7.2 MB/s eta 0:00:00
Downloading astor-0.8.1-py2.py3-none-any.whl (27 kB)
Downloading decorator-5.1.1-py3-none-any.whl (9.1 kB)
Downloading httpx-0.27.0-py3-none-any.whl (75 kB)
---------------------------------------- 75.6/75.6 kB 463.5 kB/s eta 0:00:00
Downloading httpcore-1.0.5-py3-none-any.whl (77 kB)
---------------------------------------- 77.9/77.9 kB 2.2 MB/s eta 0:00:00
Downloading anyio-4.4.0-py3-none-any.whl (86 kB)
---------------------------------------- 86.8/86.8 kB 1.6 MB/s eta 0:00:00
Downloading sniffio-1.3.1-py3-none-any.whl (10 kB)
Downloading exceptiongroup-1.2.1-py3-none-any.whl (16 kB)
Downloading h11-0.14.0-py3-none-any.whl (58 kB)
---------------------------------------- 58.3/58.3 kB 1.5 MB/s eta 0:00:00
Installing collected packages: sniffio, protobuf, opt-einsum, h11, exceptiongroup, decorator, astor, httpcore, anyio, httpx, PaddlePaddle
Attempting uninstall: protobuf
Found existing installation: protobuf 5.27.2
Uninstalling protobuf-5.27.2:
Successfully uninstalled protobuf-5.27.2
Successfully installed PaddlePaddle-2.6.1 anyio-4.4.0 astor-0.8.1 decorator-5.1.1 exceptiongroup-1.2.1 h11-0.14.0 httpcore-1.0.5 httpx-0.27.0 opt-einsum-3.3.0 protobuf-3.20.2 sniffio-1.3.1
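As a follow-up note on the fix above: pip downgraded protobuf because paddlepaddle declares a `protobuf<=3.20.2` constraint (visible in the pip output). If a later module update drags protobuf forward again and breaks ALPR, the same repair can be reapplied in one line from the module's activated venv (a sketch based on that constraint):

```shell
# Reinstall paddlepaddle and let pip enforce its protobuf constraint;
# this downgrades protobuf back to 3.20.x if something has upgraded it.
pip install paddlepaddle "protobuf<=3.20.2"
```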
|
|
|
|
|