I need to move a set of databases from a Windows Server 2008 R2 server to a new Windows Server 2012 database server. I have looked at several articles but have not found one that lists the complete process from start to finish. Can someone direct me to a good article to follow?
I understand what you're saying about it being very easy, but the main issue is that we are building the 2012 R2 server from scratch and want to ensure everything is done right. This includes server setup, settings, etc.
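For the database files themselves, the standard route is a full backup on the old server and a restore on the new one. A minimal T-SQL sketch, where the database name, file paths, and logical file names are placeholders that will differ in your environment:

```sql
-- On the old 2008 R2 server: take a full backup (path is illustrative)
BACKUP DATABASE MyAppDb
TO DISK = N'D:\Backups\MyAppDb.bak'
WITH CHECKSUM;

-- Copy the .bak file to the new 2012 server, then restore it there.
-- The logical names used in MOVE come from: RESTORE FILELISTONLY FROM DISK = ...
RESTORE DATABASE MyAppDb
FROM DISK = N'E:\Backups\MyAppDb.bak'
WITH MOVE 'MyAppDb'     TO N'E:\Data\MyAppDb.mdf',
     MOVE 'MyAppDb_log' TO N'E:\Logs\MyAppDb_log.ldf';
```

Keep in mind that this only moves the user databases: logins, SQL Agent jobs, and linked servers live in the system databases and have to be scripted out and recreated on the new server separately.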
Suppose you purchase a piece of server software and install it on a machine. Ideally I would like to tie the license to the underlying machine, but it may not be a physical machine at all; it may be a VM.
Now, if you want to start up a second instance, I want to ensure that you are paying for that second instance. Because of this, I need to find a way to distinguish them even though the underlying hardware may be the same.
I thought of using the PC name, but even that may be problematic, because I can't be totally sure that *ALL* cloud vendors (not just the VM vendors, but also their customers who, in turn, become sellers to my customers) will allow it to be changed.
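One common workaround is to not rely on any single identifier: combine several machine attributes into one fingerprint, so two instances would have to match on *all* of them to be mistaken for the same installation. A minimal sketch of the idea; the function name and the choice of identifiers are illustrative, not a recommendation of which sources are tamper-proof:

```python
import hashlib
import socket
import uuid

def instance_fingerprint(extra_salt: str = "") -> str:
    """Combine several machine identifiers into one stable hash.

    Any single source (hostname, MAC address) can be cloned or fixed
    by a cloud vendor, so we hash them together with a per-instance
    value issued at purchase time (extra_salt).
    """
    parts = [
        socket.gethostname(),            # PC name (may be unchangeable)
        format(uuid.getnode(), "012x"),  # MAC address (may be virtual/cloned)
        extra_salt,                      # e.g. a license token per instance
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()
```

Because cloned VMs can share every hardware identifier, many vendors lean on the salt: issue a distinct activation token per paid instance, and treat a token appearing alongside two different fingerprints as a second instance that needs its own license.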
I'm a freelance junior sysadmin who is currently tasked with setting up the hardware for a tech startup. We're hosting a software service and need a system that makes sense within our budget and requirements.
The ideal goal would be an n+1 design with high security and long-term high availability, without breaking the bank. The budget is ~$6k for the initial round of investment, toward gear only. Additional funds will be allocated for a year of quarter-rack space in a datacenter local to me.
I'm a bit over my head with my current knowledge base, and intend to bridge the gap with a lot of pre-planning over the next 60 days. I figure 60 days before ordering any gear, 30 days for it all to arrive and get initially configured at the datacenter, then 90 days to build out our hosting interface and properly test the system before going to production. Six months would be nice, but eight months is being allocated for the project.
Okay, so onto the gear and setup:
(2) Cisco SG-300-10 switches: one wired and active, the second rack-mounted and ready for wiring in case of failure.
Primary <> Secondary
With heartbeat in load-balancing + failover mode
Primary <> Secondary
Configured in PFsense for failover
A 4:1 ratio of active to failover App/Data servers. I think failover can be configured with the Nginx load balancer, or else in PFsense.
Initial deployment is the two firewalls, two LBs, and 4+1 App/Data servers, with the expectation of adding more App/Data servers as demand increases.
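If the 4:1 failover is handled at the Nginx load balancer, the usual mechanism is the `backup` flag on an upstream server: the spare receives traffic only when the active servers are marked down. A minimal sketch, with hostnames and ports as placeholders:

```nginx
upstream app_pool {
    server app1.internal:8080 max_fails=3 fail_timeout=30s;
    server app2.internal:8080 max_fails=3 fail_timeout=30s;
    server app3.internal:8080 max_fails=3 fail_timeout=30s;
    server app4.internal:8080 max_fails=3 fail_timeout=30s;
    server spare.internal:8080 backup;  # only used when actives are down
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pool;
    }
}
```

Note that `max_fails`/`fail_timeout` give you passive health checks only; open-source Nginx does not actively probe upstreams, so a dead node is detected the first time real traffic fails against it.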
The service runs on a LEMP stack. A master-to-master MySQL link exists between each A/D server and the failover server, on separate partitions and separate MySQL instances. The failover continuously syncs its databases with each active A/D server, ready for activation in the event an LB declares an active node dead. Higher resources are allotted to the failover: more memory, a higher-thread-count CPU, and n×(A/D) hard disk space.
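For reference, master-to-master MySQL replication is normally configured with distinct server IDs and staggered auto-increment settings, so the two masters never generate colliding primary keys while both are writable. A hedged my.cnf sketch; the IDs and values are placeholders, and each side still needs its own `CHANGE MASTER TO ...` pointing at the other:

```ini
# Active A/D server
[mysqld]
server-id                = 1
log_bin                  = mysql-bin
auto_increment_increment = 2   # two masters in the pair
auto_increment_offset    = 1   # this side generates odd keys

# Failover server (separate instance)
[mysqld]
server-id                = 2
log_bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2   # this side generates even keys
```

One drawback worth weighing in your model: with one failover instance replicating from every active A/D server, the spare must keep up with the combined write load of all four masters, which is part of why it needs the extra memory and CPU you've budgeted.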
Additionally, I plan to use an anycast DDoS prevention service offered by my colocation provider. I am wondering what drawbacks there are to my model.
The first 3-4 months are for designing and deploying the system; the next 3-4 are for linking the system to the software with the developers so that we can auto-provision services. I plan to rely heavily on scripts for this.
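Auto-provisioning mostly comes down to templating per-customer config and reloading services. A minimal sketch of the templating step; the function, domain, and port scheme are hypothetical, and a real script would also write the result under the Nginx config directory and reload Nginx:

```python
def vhost_config(domain: str, upstream_port: int) -> str:
    """Render a minimal Nginx server block for a newly provisioned customer.

    Illustrative only: a real provisioning script would write this to
    the Nginx conf.d directory, test it, and reload the service.
    """
    return (
        f"server {{\n"
        f"    listen 80;\n"
        f"    server_name {domain};\n"
        f"    location / {{\n"
        f"        proxy_pass http://127.0.0.1:{upstream_port};\n"
        f"    }}\n"
        f"}}\n"
    )
```

Keeping the generation step as a pure function like this makes it easy to unit-test the provisioning scripts long before the gear arrives, which fits the pre-planning window in the timeline above.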