I'm curious whether anyone on these forums or this project is familiar with, has attended, or has published / co-authored at either USENIX or OSDI.
I attended this week, and there are some absolutely staggering things happening in the ML/AI world beyond what's being reported, especially in both hardware and sheer scope.
Some of the papers and case studies absolutely floored me. But others showed that there are a lot of open-source advancements out there that aren't paywalled (unless you consider minor things like access to Nvidia H100s a paywall, haha).
ChatGPT's analysis: The first statement in Prolog is an assignment where LP is defined as the negation of the truth value of LP itself. This means that LP is the logical negation of LP, indicating that LP is false if it is true, and true if it is false.
The second statement is a query that checks if there exists a unification between LP and not(true(LP)). It attempts to find a consistent value for LP that satisfies the equation.
In this case, the result of the query is false, indicating that there is no valid unification between LP and not(true(LP)). In other words, there is no consistent value that can simultaneously satisfy the equation LP = not(true(LP)). This suggests that there is a contradiction in the logic, as the equation cannot hold true for any value of LP.
It turns out that ChatGPT is incorrect because Prolog is merely recognizing the pathological self-reference of LP. "true" and "not" in the above context are meaningless placeholders to Prolog.
Prolog equally rejects this expression: X = foo(bar(X)).
I was trying to show that Prolog recognizes and rejects the Liar Paradox. It turns out that Prolog recognizes and rejects every expression that is isomorphic to the Liar Paradox.
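For anyone who wants to reproduce this, here's a minimal sketch in SWI-Prolog. One caveat: whether plain `=`/2 rejects a cyclic term depends on the system and its occurs-check setting; `unify_with_occurs_check/2` makes the rejection explicit in any case.

```prolog
% With the occurs check, any unification of the form X = f(...X...)
% fails -- which is exactly what happens to the Liar Paradox encoding.
?- unify_with_occurs_check(LP, not(true(LP))).
false.

% The same structurally self-referential pattern fails identically,
% regardless of the functor names used:
?- unify_with_occurs_check(X, foo(bar(X))).
false.

% Note: SWI-Prolog's default =/2 instead builds a cyclic (rational)
% term, so set the flag if you want plain =/2 to reject it too:
?- set_prolog_flag(occurs_check, true), LP = not(true(LP)).
false.
```

This also illustrates the point above: `true` and `not` carry no logical meaning here; the failure is purely structural.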
Stopping short of achieving your level of incompetence.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
Has anyone else had success running CodeProject AI in a Docker container? I've spent days trying to troubleshoot what is going on with my deployment without success.
Here's my setup: At the base I'm running ESXi. Blue Iris is running in a Win10 VM. For the Docker setup, I'm running PhotonOS in a VM, with Portainer on top to give me a GUI for Docker. Inside Docker, I'm pulling in the codeproject/ai-server image. I'm using macvlan as the networking config to give it an IP on the LAN.
The CPAI container spins up fine, I can access the web GUI fine. I can ping it from the BI server. But it doesn't respond to any kind of requests. I've noticed that if I go into the CPAI Explorer, none of the models are showing up except for a few that appear to be defaults. If I load up an image in Explorer and ask it to analyze it, nothing happens. I just get a timeout and no logs generated.
I've triple checked that I've got the model folder mapped to "/app/modules/ObjectDetectionYolo/custom-models". I've ssh'd to the container and verified the models show up in there, but CPAI doesn't seem to recognize them.
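For reference, here is a minimal sketch of the same deployment as plain `docker` commands (outside Portainer). The host paths, interface name, and subnet are placeholders for my environment; 32168 is CodeProject.AI Server's default port, and with macvlan the container is reached directly at its own LAN IP rather than through a published port:

```shell
# Create a macvlan network so the container gets its own IP on the LAN
# (adjust the parent interface, subnet, and gateway to match your LAN).
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan_macvlan

# Run the CodeProject.AI Server image with the custom-models folder
# mapped to the path the ObjectDetectionYolo module reads from.
docker run -d --name codeproject-ai \
  --network lan_macvlan \
  -v /path/on/host/custom-models:/app/modules/ObjectDetectionYolo/custom-models \
  codeproject/ai-server
```

If the container answers pings but times out on detection requests, it's worth confirming Blue Iris is pointed at the macvlan IP and port 32168, since macvlan containers are often unreachable from the Docker host itself.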
That was an error in the module listing. You'll need to wait until server 2.1 comes out (it's in alpha testing as of today) for the update to be accessible. However, the update is only a structural change to fit the new architecture of server 2.1, so you're not missing anything major.
Hello friends, I am currently looking for a solution for beating sports betting companies, one that will let me use their live matches to my own advantage. Something like an AI bot, built in the form of a browser and able to execute commands, that can place last-minute bets by scanning for the match time and current score. Let's say there is a game between team A and team B, and team A is winning the match 2-0. The bot would first scan for the time, and if it is up to 88 or 90 minutes, it would then select all matches in that range and bet on the winning side, which also includes draw games. Does anyone have any idea how we can bring this idea to life?
No. Because it's cheating, illegal in some places, and most definitely immoral. And that's ignoring the various groups of people believed to be behind organised gambling who would be most unhappy with everybody involved in the project. And they have long arms, and cold hearts.
We do not condone, support, or assist in the production of malicious code in any way, form, or manner. This is a professional site for professional developers.
If you want to know how to create such things, you need to visit a hacking site: but be sure to disable all firewalls and antivirus products first or they won't trust you enough to tell you.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
I do understand your worries and everything else you said. This is not necessarily cheating, because I only need a bot or program to help me place bets with a speed and accuracy that I cannot achieve myself. It is not like hacking into the sports betting server or anything else you described. It's building a browser-based bot to execute commands I could have carried out anywhere by myself, just more slowly, which is why I am employing an AI bot. Hackers are the ones who build malicious viruses, but a professional's work is to build a clean, legal app for problem solving.