I want to build a cluster that functions like an ordinary cluster: it takes in requests and distributes the load among the connected computers. How do I begin such a project, and how do I code it? My basic instinct is to use a lower-level language such as C, but that seems like a lot of work, and even with C I don't know how to handle the networking part or how to connect the computers. How should I begin, and what steps must I take to complete the project? Thanks in advance.
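For a first prototype you don't need C at all; a higher-level language makes the moving parts easy to see. Below is a minimal sketch in Python (the backend addresses are made up for illustration): a front-end socket server that accepts each client connection and forwards it, round-robin, to one of the cluster nodes.

```python
import itertools
import socket
import threading

# Hypothetical backend addresses for illustration -- replace with your nodes.
BACKENDS = [("10.0.0.2", 9000), ("10.0.0.3", 9000), ("10.0.0.4", 9000)]
_cycle = itertools.cycle(BACKENDS)

def pick_backend():
    """Round-robin: each call returns the next backend in order."""
    return next(_cycle)

def pipe(src, dst):
    """Copy bytes from src to dst until src closes its side."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # the other side went away; just stop relaying

def handle(client):
    """Connect to the next backend and relay traffic in both directions."""
    backend = socket.create_connection(pick_backend())
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()
    pipe(client, backend)

def serve(port=8080):
    """Accept clients and hand each one to a worker thread."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        client, _ = srv.accept()
        threading.Thread(target=handle, args=(client,), daemon=True).start()
```

Production systems use a dedicated load balancer (nginx, HAProxy) for this instead of hand-rolled sockets, but the sketch shows the two essential pieces: a backend-selection policy and byte forwarding.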
I am using PC Helpware and VNC to connect to the remote desktops of client PCs. I have set up repeaters for this on my web server, listening on ports, say, 'X' for Helpware and 'Y' for VNC. Now the repeater ports are closing randomly.
I notice the problem when clients become unable to connect to the server; I don't know why it is happening.
If I close and restart the running repeater .exe, I can then connect to clients again without any problem.
I've set up a server that I need to manage to allow RDP over the Internet. I would like to use this server to access and manage some of the other servers on my local network (an ESXi 6.0 host). For some reason, when I RDP over the Internet to my Windows 2012 server, I'm unable to PuTTY from that server to the other local servers.
I need to move a set of databases from a Windows 2008 R2 server to a new Windows 2012 database server. I have looked at several articles but have not found one that lists all of the steps of the complete process. Can someone direct me to a good article to follow?
I understand what you're saying about it being very easy, but the main issue is that we are creating the 2012 R2 server from scratch and want to ensure everything is done right. This includes server setup, settings, etc.
Suppose you purchase a piece of server software and install it on a machine. Ideally, I would like to tie the license to the underlying machine, but it may not be a physical machine at all; it may be a VM.
Now, if you want to start up a second instance, I want to ensure that you are paying for that second instance. Because of this, I need to find a way to distinguish them even though the underlying hardware may be the same.
I thought of using the PC name, but even that may be problematic, because I can't be totally sure that *ALL* cloud vendors (not just the VM vendors, but also their customers who, in turn, become sellers to my customers) will allow it to be changed.
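One common approach is to hash several host attributes together rather than relying on any single one: the PC name alone can be duplicated or locked down, but a cloned VM is likely to differ in at least one attribute. A minimal sketch (not tamper-proof; commercial licensing typically pairs a fingerprint like this with an activation server):

```python
import hashlib
import platform
import uuid

def instance_fingerprint():
    """Combine several host attributes into one stable identifier."""
    parts = [
        platform.node(),      # hostname (may be changed by the user)
        hex(uuid.getnode()),  # a MAC address (may be cloned on VMs)
        platform.system(),    # OS name
        platform.machine(),   # CPU architecture
    ]
    # Hash the joined attributes so no single one is exposed or decisive.
    return hashlib.sha256("|".join(parts).encode()).hexdigest()
```

A determined user can still spoof every one of these values, so this only raises the effort required; for billing a second instance you would verify the fingerprint against your license server on startup.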
I'm a freelance junior sysadmin who is currently tasked with setting up the hardware for a tech startup. We're hosting a software service and need a system design that makes sense within our budget and requirements.
The ideal goal would be an n+1 design with high security and long-term high availability, without breaking the bank. The budget is ~$6k for the initial round of investment, towards gear only. Additional funds will be allocated for a year of quarter-rack space in a datacenter local to me.
I'm a bit over my head with my current knowledge base, and I intend to bridge the gap with a lot of pre-planning over the next 60 days. I figure 60 days before ordering any gear, 30 days for it all to come in and get initially configured at the datacenter, then 90 days to build out our hosting interface and properly test the system before going to production. Six months would be nice, but eight months is being allocated for the project.
Okay, so onto the gear and setup:
(2) Cisco SG-300-10 switches: one wired and active, the second rack-mounted and ready for wiring in case of failure.
pfSense: primary <> secondary, with heartbeat, in load-balancing + failover mode.
Nginx load balancers: primary <> secondary, configured in pfSense for failover.
App/data servers: a 4:1 ratio of active to failover app/data servers. I think failover can be configured with the Nginx load balancer, or else in pfSense.
Initial deployment is the two firewalls, two LBs, and 4+1 app/data servers, with the expectation of adding more app/data servers as demand increases.
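The 4-active-plus-1-failover pool maps directly onto an Nginx `upstream` block; a sketch with made-up addresses (substitute your app/data servers):

```nginx
# Hypothetical addresses for illustration.
upstream app_pool {
    server 10.0.0.11:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.13:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.14:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.15:80 backup;  # failover node: used only when the active servers are unavailable
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pool;
        proxy_set_header Host $host;
    }
}
```

Note that `backup` kicks in only when all non-backup servers in the pool are down, so a 4:1 pool with one `backup` entry covers total-pool failure, not single-node failure; single-node failures are absorbed by the remaining active servers via `max_fails`/`fail_timeout`.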
The service runs on a LEMP stack. A master-to-master MySQL link exists between each A/D and the failover node, on separate partitions and separate instances of MySQL. It continuously syncs the database with each active A/D, ready for activation in the event an LB declares an active node dead. Higher resources, in the form of more memory, a higher-thread-count CPU, and n(A/D) hard disk space, are allotted to the failover node.
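"An LB declares an active dead" usually boils down to a repeated liveness probe failing. pfSense and Nginx have such probes built in, but the idea is simple enough to sketch, assuming a plain TCP check against each node's service port:

```python
import socket

def is_alive(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In practice you would probe an application-level endpoint (e.g. an HTTP health URL that also checks MySQL replication state), since a node can accept TCP connections while its database link is broken.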
Additionally, I plan to use an Anycast DDoS prevention service provided by my colocation provider. I am wondering what drawbacks there are to my model.
The first 3-4 months are for designing and deploying the system; the next 3-4 are for linking the system to the software with the developers, so that we can auto-provision services. I plan to utilize scripts heavily for this.
Should be easily doable via tunneling. A VPN on AC1 alone won't give your BS1 server a good route back to AS1. If it's a simple web service that doesn't require a lot of security, you could always give AS1 a real Internet-addressable address (from your ISP), and BS1 can access AS1's web service over the web. The thing about web services is that they typically aren't blocked by firewalls, but you still need an Internet-facing IP address for AS1.
There is a good discussion on how to configure reverse VPN tunnelling. It looks like connecting back into the system is a complicated issue. The essence of your problem is that even if you can do the DNS registration (which makes it possible for the servers to find each other), the actual IP connection between your machines can still be impossible.
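The trick a reverse tunnel relies on is direction reversal: the firewalled machine dials *out* to the reachable one, and that single outbound connection is then reused to carry "inbound" requests. A toy sketch of the idea in Python (the role names AS1/BS1 and the request bytes are illustrative only; in practice you would use `ssh -R` or a VPN rather than raw sockets):

```python
import socket

def bs1_listen(port):
    """BS1 (publicly reachable): wait for AS1 to dial out,
    then send a request back down that same connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, _ = srv.accept()          # AS1 connected out to us
    conn.sendall(b"GET /status")    # "inbound" request over the outbound link
    return conn.recv(4096)          # AS1's response

def as1_dial(host, port):
    """AS1 (behind the firewall): open the outbound connection
    and answer whatever arrives over it."""
    conn = socket.create_connection((host, port))
    request = conn.recv(4096)
    if request.startswith(b"GET"):
        conn.sendall(b"200 OK")
    conn.close()
```

Firewalls generally permit this because, from AS1's side, it is an ordinary outbound connection; that is exactly why `ssh -R` works where direct inbound connections fail.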
It depends what you need or want from your server. And, by the way, a third option "between" these two is a Virtual Private Server (VPS), which, IMO, is as good as a dedicated server (but much cheaper) unless your site is particularly large, attracts lots of traffic, or is resource-heavy.
But the main advantage of a VPS or dedicated server is that you have full (virtual) control over the server, so you can configure it as you want (obviously only an advantage if there is something particular you need to configure), and you can also install and run your own programs (.exe's) in the background to perform all sorts of related tasks.