The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious or whatever) will result in your account being removed.
Is there a forum for plain English language programming questions?!
Not since the SoapboxTrollpit was deleted.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
Not a stupid question at all, and it's mentioned I think in the documentation (at least in the pdf version that I downloaded last year).
I think they recommend using an API gateway like Ocelot (on GitHub), or messaging like RabbitMQ. But the main point is that microservices should be as independent as possible.
A bit of a long answer, but here it goes.
Alright, imagine the following scenario.
You have a website for buying concert tickets and looking up information on artists and concerts.
It's all just one big monolith in the back-end.
Now The Rolling Stones are doing a concert near you and ticket purchasing starts at 12 PM.
At 12 PM you log into the website only to find it's not working or working really slow.
Needless to say, you didn't get a ticket because they were sold out after five seconds.
Let's say ordering a ticket takes three seconds to process and the system can process five at the same time; beyond that, any other requests will have to wait.
That means you can process 100 tickets a minute, but you get thousands and the system becomes unresponsive for minutes at a time.
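The arithmetic behind that 100-tickets-a-minute figure can be checked directly (a back-of-the-envelope calculation I've added for illustration; the numbers come from the scenario above):

```python
# Throughput for the ticket example: each order takes 3 seconds
# and 5 orders can be processed concurrently.
seconds_per_order = 3
concurrent_slots = 5

# 5 slots, each finishing 60/3 = 20 orders per minute.
orders_per_minute = concurrent_slots * 60 / seconds_per_order

print(orders_per_minute)  # 100.0
```

With thousands of requests arriving in the first seconds, everything past the first hundred queues up behind those five slots, which is exactly why the site feels frozen.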
There are, as I see it, two ways to solve this problem.
Either offload to another service or scale up your service and optionally use a load balancer.
The first option is offloading the order handling to a separate (micro)service.
Just put every order in a queue and have your microservice read the queue and process the order.
The main application can now simply put something on a queue and inform the user that their order is being processed and they will receive an email soon.
This process takes milliseconds instead of three seconds and users won't get an unresponsive website.
You can put up multiple instances of your microservice to process faster.
Cloud services such as Azure Functions or AWS Lambda can scale out of the box when, say, CPU hits 80% and there are still a lot of items on the queue.
You can have up to 200 instances I think, so you should be able to process everything relatively fast.
Once the queue is empty, Azure will scale back down to zero (in the case of Azure Functions, you'll also stop paying at zero instances).
Unfortunately, some users will get an email that their order was cancelled because there are no more tickets left, but at least they got their chance and everyone can continue using the website.
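The offload pattern described above can be sketched with an in-memory queue. This is a hypothetical stand-in for a real broker like RabbitMQ or Azure Service Bus, and all names (`place_order`, `process_orders`, the sentinel) are illustrative, not from any real API:

```python
import queue
import threading

TICKETS_AVAILABLE = 3
order_queue = queue.Queue()
results = {}

def place_order(user):
    """Web front end: enqueue and return immediately (milliseconds)."""
    order_queue.put(user)
    return f"{user}: order received, you'll get an email soon"

def process_orders():
    """Order microservice: drain the queue, first come first served."""
    sold = 0
    while True:
        user = order_queue.get()
        if user is None:          # sentinel: queue drained, "scale to zero"
            break
        if sold < TICKETS_AVAILABLE:
            sold += 1
            results[user] = "confirmed"
        else:
            results[user] = "cancelled: sold out"

# Five users order, but only three tickets exist.
for user in ["ann", "bob", "cho", "dee", "eli"]:
    place_order(user)
order_queue.put(None)

worker = threading.Thread(target=process_orders)
worker.start()
worker.join()
print(results)
```

The key point is that `place_order` never blocks on the three-second processing; the website only pays the cost of a queue write, and the worker (which you can run in many instances) absorbs the burst at its own pace.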
Now, let's say even with this solution in place users experience long loading times.
You should probably scale your website as well.
This is a load you see coming, so you can scale up your server for the day, say double CPU, to process faster.
You pay for the extra CPU, but only for a day.
The day after you can scale back down to your regular CPU.
Another form of scaling is by telling Azure to just spin up another instance.
If you're using App Services this process is automatic and you get a free load balancer.
This process can be automated or manual (this is different from the order processing).
Say you need to scale up to three instances temporarily and then back to one, no problem.
Because you're using microservices you can scale the website and the ordering system independently.
That's good because the order processing needs to scale a lot more, but the process is also a lot smaller.
If it were one application, you'd have to scale the whole thing to maybe 20 instances just to process orders.
As an added bonus, say your order-processing service goes down, for whatever reason.
People can still access the website and even place orders, because that application didn't go down.
The orders won't be processed and maybe people need to wait hours before their order confirmation, but at least they won't get a 500.
Now, try scaling up like that with your on-premises network.
You need to install load balancers and somehow add multiple servers on the fly.
It's possible, but it probably means you've got a lot of idle servers sitting around.
In the cloud these servers aren't idle, they just go to the next customer who needs to scale up.
You can't do that on-premises.
A long answer, but hopefully you got the gist of what Azure can do that you can't do on-premises.
There are other options too, and different solutions per option, but that would make this already long post into a book.
Alright, I hear what you're saying and I'm going to write another long reply about it.
You're saying, what if I have a server with 20 busy processes?
The obvious fix would be to get a new server and move a couple of processes to the new server (if possible).
That fixes the CPU issue, but your database is still a single point of failure.
Correct me if I got that wrong.
Before continuing, let me say this, fixing performance issues is rarely easy.
You may be able to buy a bigger server now, but the problem will come back when you get more users.
The issue is most likely caused by some hard-to-fix part of the software, like a particular database schema or a piece of code that ties various parts of the application together.
Azure won't magically fix your problems, but neither will not going to Azure.
In the end, Azure is a tool that may or may not work for you, but it does have some benefits over on-premises hardware.
Especially when you need big hardware, Azure is not necessarily cheaper.
Back to microservices.
Using a microservice architecture, every service has its own data store.
The benefit of this, is that every service can use the store that best fits its needs.
And of course your single point of failure is gone; that alone makes this complex architecture worthwhile for many companies.
The downside is, obviously, that it adds a lot of complexity and possibly costs.
Furthermore, in practice, all of the services will probably use the database you're most familiar with anyway, say SQL Server.
That's really a trade-off and every team will have to decide if it's worth it, but it's always a possibility.
So, again, say that out of those 20 processes you have, one is particularly busy and could keep a server busy by itself.
You can opt to give this particular service its own data store if that's the problem.
Of course you will need to come up with some way to sync that data store with your other data store.
If it's a web service you could always provide the necessary data and then save the data it returns (sort of caching, I guess).
You could work with queues or events, but I really can't make that decision for you.
In any case, you'll have some work on your hands there.
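One of the sync options mentioned above, events, can be sketched very simply. This is a hypothetical in-process publish/subscribe sketch; the dictionaries stand in for real databases and the handler names are made up for illustration:

```python
# Main application's store and the busy service's dedicated store.
main_store = {}
busy_service_store = {}
subscribers = []

def publish(event):
    """Deliver an event to every subscribed handler."""
    for handler in subscribers:
        handler(event)

def on_customer_changed(event):
    # The busy service keeps only the fields it actually needs.
    busy_service_store[event["id"]] = {"name": event["name"]}

subscribers.append(on_customer_changed)

# The main application writes to its own store, then publishes an event;
# the busy service's store is updated without the two ever talking directly.
main_store[42] = {"name": "Acme", "address": "1 Main St"}
publish({"type": "customer_changed", "id": 42, "name": "Acme"})

print(busy_service_store)  # {42: {'name': 'Acme'}}
```

In a real system the publish would go over a broker (Service Bus, RabbitMQ) and the stores would be separate databases, but the shape of the work, deciding which events exist and which fields each store copies, is the part you'd have on your hands either way.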
My former recommendations also still stand, you can auto-scale it using Azure and use a load balancer (included with App Services).
Or if it's an autonomous process you could spin up an Azure Function and let that handle the scaling up and down.
You say you currently have one big application and bringing it to Azure would bring no benefits.
This approach is also called lift and shift, and indeed brings few benefits to the table.
You may be able to do some scaling, if your application can handle running multiple processes to begin with.
Doing some refactoring and running your application in an Azure App Service could be rewarding.
It has auto scale and deployment becomes a lot easier.
You can also think about migrating your database to Azure (SQL Server, PostgreSQL and MariaDB have PaaS offerings in Azure), but that could be expensive.
This approach takes a little more effort as well.
You could rearchitect your application so that you can host various parts of the application in cloud-native solutions such as App Services, Functions, Azure SQL and Service Bus.
The effort here could be considerable, but you could get a lot more benefits like easy deployment and (auto-)scaling.
If you really want to go cloud native and get the most out of the cloud you could do a complete rewrite, but that's expensive and costs a lot of effort.
I say most likely not worth it, but you could consider it.
You could check out a book I co-authored, Migrating Applications to the Cloud with Azure (see my signature).
It's a bit of self promotion, but it discusses the rehost, refactor, rearchitect and rewrite strategies in a bit more depth and it also discusses various (cloud-native) technologies you could use.
Hopefully this answers your questions
After thinking about it, after reading about it on the interweb and MSDN, after talking about it...
I think I can safely state:
- Yes Cloud architecture might improve your application performance (or might not)
- What many people fail to emphasise is that efficient cloud architecture is significantly more complex... it is often brushed off with a simple one-liner ("each microservice should have its own database", "synchronise your services with an event architecture") but, in fact, this is exactly the part that is difficult. And it needs a lot of forethought indeed before diving in....
Further, in our particular case, where we are forced to interface with two local Navision systems, I think it's not going to work without a lot of additional work...
I mean... it's good for me, I was just curious about the wisdom of it... As far as I know we only have a few thousand customers (we are an Australia-only, Business-to-Local-Government-only factory) and performance has never been an issue, as far as I know.
Yeah, think about it and see if it can help you one way or another.
Read up on the subjects of cloud and microservices.
It's complex stuff and you can really shoot yourself in the foot with it.
The ROI is different in each situation, so I can't help there.
If not now, maybe later or on your next project.
For my smaller customers, the cloud really has some added benefits.
These guys barely know how to start a computer and quite frankly, they don't want to.
They don't have on-premises hardware, but for about €50 a month I can host their web application, including database.
They don't need to buy hardware and they have zero maintenance.
I did something similar for a website, which used various kinds of emails to process different types of orders for customers. I set up a service to accept an email request from a message queue, where each request contained the type of email and a set of data. It simply plugged the data into the appropriate email template and sent it off.
The website now only needs to submit the data and email type to the message queue, then return to the user immediately, letting them know an email will be sent.
Lots of advantages:
- it takes a bit of time to generate the email and then send via email server. The website is not held up with this processing.
- Easy to update the email templates, maintain a common look and feel, etc. without any need to change anything in the website. In fact you could shut down the email service for upgrades, etc. with no email loss.
- If there is a disruption in sending the email (for example, a mail server goes down), it's easy to resubmit to the queue and process again without losing the email. I even had it set up so that if there were repeated send failures because the email server was down, the service would go into sleep mode, with a periodic test to see if the email server was restored.
Offloading emailing like this made sending emails from the web site much easier, with guaranteed delivery.
This particular customer didn't need this. But if the user account indicated that they had a language preference and method of contact, it would be easy to add this. For example if the language was set to French, and the method of contact was SMS, would be a very simple extension in the data added to the message sent, and it could then be routed to the appropriate service (SMS vs. Email) with a specific language template. Very little change in the Web App, but with a large expansion of functionality.
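The routing extension described above can be sketched in a few lines. Everything here, template text, the `(method, language)` key, the `handle_message` name, is hypothetical, just to show how little the web app side would need to change:

```python
# Templates keyed by (contact method, language). The worker picks one;
# the web app never sees them.
TEMPLATES = {
    ("email", "en"): "Hello {name}, your order {order_id} shipped.",
    ("email", "fr"): "Bonjour {name}, votre commande {order_id} est expédiée.",
    ("sms",   "fr"): "Commande {order_id} expédiée, {name}.",
}

sent = []  # stand-in for the actual email/SMS gateway

def handle_message(message):
    """Worker side: route by (method, language), fill the template, 'send' it."""
    key = (message["method"], message["language"])
    body = TEMPLATES[key].format(**message["data"])
    sent.append((message["method"], body))

# The web app only enqueues data plus routing hints; no template logic at all.
handle_message({
    "method": "sms",
    "language": "fr",
    "data": {"name": "Luc", "order_id": "A17"},
})
print(sent)
```

Adding SMS or a new language is then a worker-side change only: a new template entry and (for SMS) a new gateway, with zero changes in the web app that enqueues the messages.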
Look at the bottom of a message posted by them - if it has an "Email" option next to "Reply" then click that. If it doesn't, then the member has specifically refused permission for any private messages.
Be aware that it will not reveal the other members email to you, but will reveal yours to them so they can reply.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
If the user has activated it, then just find a message by them in a forum (doesn't matter which) and look at "Reply"; there should be an "Email" widget just to the right of it.
If not... then you can't
If something has a solution, why do we have to worry about it? If it has no solution, for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
To paraphrase an old coworker, "There's a reason they don't work for a company that has to charge for their products." Apparently the IRS folk are so used to everything being in uppercase, they didn't bother checking for human input:
I received the "dreaded" not-available message for a while. Then I didn't.
Correlating it with anything other than getting my data from SSA (one of the reasons it may be in limbo) is silly. This because I always owe IRS money (they're not a savings bank) so they don't have a direct deposit account. SSA does.
However - they didn't transfer the info: I had to enter it after proving I'm me to their satisfaction. Not really dumb: what if I wanted to target a different account?
I doubt the caps did anything beyond make the user feel better about shouting.