|
Hi everyone, I hope you're having a nice day! I've tried reading some articles on how to set up minikube, and I'm wondering where to find some basic resources. I've looked at Getting Started with Kubernetes: A Comprehensive Minikube Guide - Codabase Bytes[^], which was somewhat helpful, and also found the official Minikube docs here: minikube start[^].
Any and all advice for running minikube locally and learning about the container side of it would be super useful!
Thanks everyone, have a nice weekend and a happy St Patrick's Day!
|
|
|
|
|
I have tried this twice.
I created a VM in Microsoft Azure for Tiki Wiki (it is available from the Azure Portal as 'Tiki Wiki CMS Groupware packaged by Bitnami'). It comes up successfully with its login form. However, nowhere in the build is there anywhere to specify an admin username/password for the wiki, so I cannot create a 'normal' user. The help pages (e.g. tik-wiki-login-help[^]) suggest using the 'I don't know my Admin Username or Email address' link, but that link is not there. The alternative is to use cPanel (following the instructions at How to Login to cPanel – InMotion Hosting Support Center[^]), but that does not work either. The only username/password I had to specify when creating the VM was (presumably) for the root user; but, as this is a SaaS offering, I cannot log on to the VM.
Any clues would be appreciated! Thanks in advance.
|
|
|
|
|
The short answer appears to be no.
Background: I am a SignalR/web newbie, and my goal is to learn a bit about both. So I created a chat hub based on Tutorial: Real-time chat with SignalR 2 | Microsoft Docs. It worked nicely in VS2017. Then I spent many hours fruitlessly trying to get it to run under IIS on an AWS/LightSail VPS. I kept thinking the source of my problem was my own ignorance. It turns out, to paraphrase Arthur Conan Doyle (of Sherlock Holmes fame), that "once you eliminate the impossible, whatever remains, however improbable, must be the truth." That's the long explanation for the short answer above!
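(For context, the hub at the centre of that tutorial is tiny. From memory it is roughly the sketch below; the dynamic client method name has to match whatever the JavaScript client registers, so treat the details as approximate:

using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    public void Send(string name, string message)
    {
        // Broadcast the message to every connected client.
        Clients.All.broadcastMessage(name, message);
    }
}

So the server-side code itself was never the problem; the hosting was.)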
It appears as though Microsoft will not allow AWS to host SignalR, perhaps because it does not want competition (for Teams and such) from Amazon!
Hopefully this will save someone a similarly painful experience. (BTW: I have since learned that AWS has introduced the AWS WebSockets API. If you are interested, you might want to take a look at this nice article: Which is best? WebSockets or SignalR - Dotnet Playbook.)
|
|
|
|
|
|
MS Azure SignalR Service doesn't have built-in support for other serverless platforms, such as AWS Lambda or Google Cloud Functions.
|
|
|
|
|
I'm new to the subject of CloudSim and I've run some of the existing examples in the CloudSim package (the 8 bundled examples...).
I want to implement an optimal SFC (service function chain) allocation scenario with CloudSim. This scenario will include the definition of a cost function to allocate the virtual network functions (included in the service function chain) optimally on the network nodes that have processing resources (CPUs).
Moreover, there will be constraints on the aforementioned cost function, and this optimization problem could be solved through different heuristic methods, such as genetic algorithms and so on.
I know that there's a class named DatacenterBroker that must be extended in order to create my own algorithm for submitting Cloudlets to the defined virtual machines. I'm also aware that, in order to create new algorithms for placing the defined VMs on the hosts (in a Datacenter), a class named VmAllocationPolicy (which is defined as an abstract class) has to be extended. But I want to find out what algorithm is used by default, so I can get some ideas on how to propose new algorithms. Since VmAllocationPolicy is abstract, I can't see what's inside its methods. So what do I have to do about it?
Secondly, I want to know whether it is possible to consider Datacenters as nodes of the network and to define links with specific bandwidths and latencies between those Datacenters. How am I going to include my optimization code in the scenario? How do I define an algorithm for routing between these Datacenters after the aforementioned placement has taken place? How do I define the ingress and egress nodes in the scenario?
Or does anyone have any idea whether implementing such a scenario is possible in a Mininet environment? How?
Thanks
|
|
|
|
|
Everyone knows the three largest cloud platforms, but I am a little stuck on the choice. AWS and Google Cloud Platform seem to have equal pros and cons. Has anyone been in this situation? Please help me understand.
|
|
|
|
|
It depends on what you want to do:
- Want a developer-friendly cloud with good documentation/community support? Go with Azure.
- Want a consumer-friendly cloud with everything ready as-an-API? Go with Google Cloud Platform.
- Want a cloud platform for the Asia-Pacific market? Go with Alibaba Cloud (sometimes I feel like it is Azure but for Asian markets).
- Want just a hosting platform and a database set? Go with DigitalOcean.
- Want to get stuck reading documentation/creating support tickets/learning that the issue you are facing in 2020 was reported back in 2015 but the platform decided never to solve it and moved on with marketing and sales campaigns? Go with AWS.
I hope this helps.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
|
I do Azure because I have been a Microsoft developer forever, so it's easiest for me, and Microsoft is awesome.
Social Media - A platform that makes it easier for the crazies to find each other.
Everyone is born right handed. Only the strongest overcome it.
Fight for left-handed rights and hand equality.
|
|
|
|
|
Everything in the technology world comes as a full package: there will be pros and cons all along.
Whether you should be bothered by the cons or not comes down to the usage. May I know what kind of business application you are talking about?
|
|
|
|
|
From my experience with both Azure and AWS, I found Azure better than AWS, as Microsoft carries better-architected hybrid solutions and security offerings than the others.
|
|
|
|
|
I am learning more about artificial intelligence these days, so I am working on AWS. It is smooth and efficient.
|
|
|
|
|
So, I've "played" with Azure a couple of times in the past and found it MASSIVELY confusing - a vast number of different services, a huge number of (wildly varied) pricing options, dozens of pages of configuration, and a huge amount of new terminology. Really not into that; I'm a developer at heart, I like writing code.
I have a client with a critical line-of-business ASP.NET Web Forms website (not web application), plus a separate sub-domain for testing. He would like to move this away from his cheap-as-chips shared hosting and onto Azure. (Two different hosting providers keep messing us around by changing configurations that break the site.)
So, is there a SIMPLE migration tool to help me move this site onto Azure? A complication is that it uses MySQL, so we need a MySQL service on Azure too (with 2 separate databases: 1 prod, 1 UAT). The MS Azure site has reams of stuff about migrating, but it seems to assume you control a dedicated server. It doesn't necessarily need to be a tool; a comprehensive tutorial/walkthrough would be fine, BUT it would need to be up to date. (When tinkering with Azure a couple of years ago I found not only was it complex, it was constantly shifting too!)
Also, anyone else's experience with this migration path would be welcome, as I will need to give the client an estimate of my time for the migration. And once done, I will still need to connect from my existing d/b client (e.g. HeidiSQL, MySQL Workbench) to the databases, and be able to EASILY upload changed pages, access IIS logs, etc. I did at one point get the app running on Azure (linking back to the existing hosted MySQL d/b), so I know it will run with minimal changes; I'm just finding the whole thing pretty daunting right now.
|
|
|
|
|
Quote: critical line-of-business ASP.Net Webforms website (not web application)
Web Forms is a web application, and it should be just fine on Microsoft Azure.
ASP.NET Web Deployment using Visual Studio: Introduction | Microsoft Docs
Quote: plus a separate sub-domain for testing.
Microsoft Azure provides this built in to the platform: Deployment Slots.
Quote: is there a SIMPLE migration tool to help me move this site into Azure?
It depends. If you can connect to the MySQL database locally and create a backup, Microsoft might have some tools to deploy the MySQL snapshot to the Azure platform.
Quote: A complication is that it uses MySql so we need a MySql service on Azure too (with 2 separate databases, 1 prod 1 UAT).
I recommend using Terraform to create the infrastructure: dev and UAT. The UAT environment can be created when a test is needed and then disposed of when not needed.
Quote: (When tinkering with Azure a couple of years ago I found not only was it complex, it was constantly shifting too!)
Sadly, it still is. But if you can get a subscription for the MySQL engine on Azure, running the backup/restore should be just fine.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
Many thanks for your response.
Afzaal Ahmad Zeeshan wrote: Web Forms is a web application, and should be just fine on Microsoft Azure.
Nope. ASP.Net Webforms allows you to create two types of web site: a "Web Application", which involves pre-compiling code into DLLs (all done by Visual Studio), where you deploy the DLLs and .ASPX pages; and a "Web Site", where the runtime compiles each code-behind page when it is first called. It's therefore critical that Azure includes those runtime components, or my site would not run as-is.
Afzaal Ahmad Zeeshan wrote: Microsoft Azure provides this built-in to the platform; Deployment Slots.
But only at the higher-end packages. Because the entire slot is "swapped", there's no way during the swap to also switch from the UAT to the production database if using a standard connection string. Also, the IIS InstanceID presumably changes during the swap, so there is no way for an end-user to determine which environment they're looking at on-screen. (I use the InstanceID to make CSS changes, including adding the environment name, to reduce the risk of making "test" transactions in the production environment; see IIS_Environment_Detection[^]. A simplified sketch of the idea is at the end of this post.)
A quick scout around just the "Deployment slots" documentation confirms that Azure remains obscure and overly complex for "normal" website applications.
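To make the CSS trick concrete, the general shape is below. This is only a rough sketch keyed off a config value rather than the InstanceID detection the article describes, and the "EnvironmentName" appSettings key is invented for the example:

// In a code-behind (or a base page class shared by all pages):
protected void Page_Load(object sender, EventArgs e)
{
    string env = System.Configuration.ConfigurationManager.AppSettings["EnvironmentName"] ?? "PROD";
    if (!string.Equals(env, "PROD", StringComparison.OrdinalIgnoreCase))
    {
        // A stylesheet rule such as form.uat-environment { background: #fdd; }
        // then makes every page in the test environment visibly different.
        Page.Form.Attributes["class"] = "uat-environment";
        Page.Title = env + " - " + Page.Title;
    }
}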
|
|
|
|
|
Quote: Nope. ASP.Net webforms allows you to create two types of web site - a "Web Application", which involves pre-compiling code into DLLs (all done by VisualStudio) and you implement the DLLs and .ASPX pages; and "Web Site", where the runtime does the compilation of each code-behind page when first called. It's therefore critical that Azure includes those runtime components or my site would not run as-is.
Right, thanks for the information. From what I "think", Visual Studio should be able to handle this situation.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
We currently have an intranet portal which uses IMAP to read email from a shared mailbox on Office365, using the excellent MailKit[^] library.
I've just been made aware that Microsoft is going to disable basic authentication for IMAP clients. Originally, this was going to happen in October, but it's since been pushed back to the second half of 2021.
Basic Auth and Exchange Online – February 2020 Update - Microsoft Tech Community - 1191282[^]
After some panicked reading and experimentation (and much swearing), I've managed to get the MSAL[^] library to return an OAuth2 access token:
const string ClientId = "...";
const string Username = "...";
SecureString password = ...;

var scopes = new[] { "https://outlook.office365.com/.default" };

var app = PublicClientApplicationBuilder.Create(ClientId).WithAuthority(AadAuthorityAudience.AzureAdMultipleOrgs).Build();
var tokenResult = await app.AcquireTokenByUsernamePassword(scopes, Username, password).ExecuteAsync(cancellationToken);

After configuring the application as a "public client" in the Azure portal, giving it Mail.ReadWriteAll and Mail.SendAll permissions, and granting admin consent for my organisation, this code now returns a seemingly-valid access token.
According to the author of the MailKit library[^], all I need to do now is use the token to authenticate:
using var client = new ImapClient();
await client.ConnectAsync("outlook.office365.com", 993, SecureSocketOptions.Auto, cancellationToken);
await client.AuthenticateAsync(new SaslMechanismOAuth2(Username, tokenResult.AccessToken), cancellationToken);

Unfortunately, that simply throws an "authentication failed" exception:
MailKit.Security.AuthenticationException
HResult=0x80131500
Message=Authentication failed.
Source=MailKit
StackTrace:
at MailKit.Net.Imap.ImapClient.<AuthenticateAsync>d__81.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at TestOffice365OAuth.Program.<MainAsync>d__1.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at TestOffice365OAuth.Program.Main()

I don't know whether this is a bug in Microsoft's implementation, a bug in MailKit, a configuration error with my application, or a mistake in my code.
Has anyone managed to get Office365 IMAP access working with OAuth2? Can you spot anything I've missed?
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Thanks to "paulflo150" on GitHub, I was able to authenticate with the token by changing the scopes to:

var scopes = new[] { "https://outlook.office365.com/IMAP.AccessAsUser.All" };

Now I need to find out how to connect to a shared mailbox. The usual trick of appending "\shared-mailbox-alias" to the username results in the same "authentication failed" error, and if I authenticate without it, there are no "shared namespaces" available.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Finally managed to get the code to access a shared mailbox, with help from Sivaprakash Saripalli at Microsoft. It's simply a case of passing the email address of the mailbox instead of the username to the SaslMechanismOAuth2 constructor.
const string ClientId = "...";
const string UserName = "...";
SecureString Password = ...;
const string Mailbox = "...";

var scopes = new[] { "https://outlook.office365.com/IMAP.AccessAsUser.All" };
var app = PublicClientApplicationBuilder.Create(ClientId).WithAuthority(AadAuthorityAudience.AzureAdMultipleOrgs).Build();

// The token is still acquired with the signed-in user's own credentials...
var authenticationResult = await app.AcquireTokenByUsernamePassword(scopes, UserName, Password).ExecuteAsync(cancellationToken);

using var client = new ImapClient();
await client.ConnectAsync("outlook.office365.com", 993, SecureSocketOptions.Auto, cancellationToken);

// ...but the SASL mechanism is given the shared mailbox's email address
// instead of the username, which is what grants access to that mailbox.
await client.AuthenticateAsync(new SaslMechanismOAuth2(Mailbox, authenticationResult.AccessToken), cancellationToken);
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Hi Sir,
I need help; I'm a noob in ASP.NET, using VB for my coding.
Situation:
a) Using a master page
b) Got 3 web forms (Staff.aspx, Student.aspx, External.aspx)
c) Every form has a search TextBox
In Staff.aspx, the search TextBox works great when connecting to the Oracle database, and the cardholder's data appears in the other textboxes.
But when using Student.aspx or External.aspx, the search TextBox shows "No Data / Cardholder", even though the data exists in the database.
The SQL query is OK when I directly include the user ID, but when the value comes from the textbox, the data does not appear.
Please guide me.
Thank you
|
|
|
|
|
Why have you posted this same question a second time? Please post questions once only.
|
|
|
|
|
I have run into the following problem. I have a pipeline that reads data from an MS SQL Server and stores it in a file in a BLOB container in Azure Storage. The file has the Parquet (or Apache Parquet, as it is also called) format.
So, when the “sink” (output) file is stored in a compressed way (snappy or gzip, it does not matter) AND the file is large enough (more than 50 MB), the pipeline fails with the following message:
"errorCode": "2200",
"message": "Failure happened on 'Sink' side. ErrorCode=UserErrorJavaInvocationException,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=An error occurred when invoking java, message: java.lang.OutOfMemoryError:Java heap space\ntotal entry:11\r\njava.util.ArrayDeque.doubleCapacity(Unknown Source)\r\njava.util.ArrayDeque.addFirst(Unknown Source)\r\njava.util.ArrayDeque.push(Unknown Source)\r\norg.apache.parquet.io.ValidatingRecordConsumer.endField(ValidatingRecordConsumer.java:108)\r\norg.apache.parquet.example.data.GroupWriter.writeGroup(GroupWriter.java:58)\r\norg.apache.parquet.example.data.GroupWriter.write(GroupWriter.java:37)\r\norg.apache.parquet.hadoop.example.GroupWriteSupport.write(GroupWriteSupport.java:87)\r\norg.apache.parquet.hadoop.example.GroupWriteSupport.write(GroupWriteSupport.java:37)\r\norg.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:123)\r\norg.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:292)\r\ncom.microsoft.datatransfer.bridge.parquet.ParquetBatchWriter.addRows(ParquetBatchWriter.java:60)\r\n,Source=Microsoft.DataTransfer.Common,''Type=Microsoft.DataTransfer.Richfile.JniExt.JavaBridgeException,Message=,Source=Microsoft.DataTransfer.Richfile.HiveOrcBridge,'",
"failureType": "UserError",
"target": "Work_Work"
}
The "Work_Work" is name of a Copy Data activity in the pipeline.
If I turn the compression off (the generated BLOB file is uncompressed), the error does not happen.
Is this the error described in https://docs.microsoft.com/en-us/azure/data-factory/format-parquet: the “…If you copy data to/from Parquet format using Self-hosted Integration Runtime and hit error saying "An error occurred when invoking java, message: java.lang.OutOfMemoryError:Java heap space", you can add an environment variable _JAVA_OPTIONS in the machine that hosts the Self-hosted IR to adjust the min/max heap size for JVM to empower such copy, then rerun the pipeline….”?
If it is, have I understood correctly that I have to do the following:
To go to a server where the “Self-hosted Integration Runtime” (still have no idea what it is) and increase the max heap size for JVM. Is this correct?
If it is, my next question is: how large the max heap size should be? My pipeline can generate a file whose size will be 30 GB.
What “max heap size” can guarantee that such a file will not cause the fail?
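For reference, I read the doc's suggestion as setting something like the following on the machine hosting the Self-hosted IR and then rerunning the pipeline (the values are the doc's own example, not sized for my 30 GB file):

_JAVA_OPTIONS = -Xms256m -Xmx16g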
|
|
|
|
|
Hello everyone,
I have a question: I want to make a website scanner that allows me to scan a whole website for one word or one color, because there isn't one available (at least I couldn't find one).
I am not a programming wonder and I really hope you can help me. I would really appreciate it.
Thanks in advance!
Regards,
Lars
|
|
|
|
|
You need a web scraping solution, something like this in the .NET world: Html Agility pack | Html Agility Pack.
You can find similar packages in other languages and runtimes too. The basic concept is that you download the entire web page as an HTML document and this tool parses it. You then read the document and look for your "word or color". That is the closest thing I can think of.
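To give you an idea, here is a minimal sketch using the Html Agility Pack NuGet package; the URL, the word, and the color value are placeholders to replace with your own:

using System;
using System.Linq;
using HtmlAgilityPack;

class SiteScanner
{
    static void Main()
    {
        // Download and parse one page. A whole-site scan repeats this for
        // every link found, keeping track of pages already visited.
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("https://example.com/");

        // Look for the word in the visible text of the page.
        bool wordFound = doc.DocumentNode.InnerText
            .IndexOf("word", StringComparison.OrdinalIgnoreCase) >= 0;

        // Look for the color in inline style attributes. A real scan would
        // also have to check the CSS files the page references.
        var colored = doc.DocumentNode.SelectNodes("//*[contains(@style, '#ff0000')]");

        Console.WriteLine($"Word found: {wordFound}");
        Console.WriteLine($"Elements with the color: {colored?.Count ?? 0}");

        // Links you would follow to scan the rest of the site.
        foreach (var link in doc.DocumentNode.SelectNodes("//a[@href]") ?? Enumerable.Empty<HtmlNode>())
            Console.WriteLine(link.GetAttributeValue("href", ""));
    }
}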
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|