As with many aspects of technology, understanding how something works behind the scenes can be a real boon when it comes to troubleshooting. In this post in my blog series on implementing continuous delivery with TFS, we take a look at the Release Management for Visual Studio Deployment Agent, and specifically how it does its thing. Bear in mind that I don’t have any inside knowledge about the Deployment Agent and this post is just based on my own experience and observations.
The first step in eliminating easy errors with the Deployment Agent is to ensure that it is installed correctly and can communicate with the RM Server. The key question is whether your servers are part of a domain. If they are, the easiest way to configure RM is to create a domain account (RMDEPLOYER, for example) and add it to the Manage Users section of the RM Client as a Service User. On each target node, add this domain account to the local Administrators group and then install the Deployment Agent, specifying the domain RMDEPLOYER as the service account. See this post for a bit more detail. If your servers are not part of a domain, you will need to use shadow accounts, which are simply local accounts that all have the same name and password. The only difference is that you add each node's shadow account to the Manage Users section of the RM Client as a Service User – make sure you use the machine name as well as the account name, i.e., the Windows Account field should be $MyNodeName$\RMDEPLOYER.
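The shadow-account setup lends itself to a small script. Here's a minimal sketch – the account name follows the RMDEPLOYER convention above, but the password is a placeholder and you'd run this elevated on each workgroup node with an identical password everywhere:

```powershell
# Sketch: create the RMDEPLOYER shadow account on a workgroup node and make it
# a local administrator. The password below is a placeholder - substitute your
# own, and use the same one on every node.
$userName = 'RMDEPLOYER'
$password = 'P@ssw0rd!'   # placeholder - change this

# Create the account, stop the password expiring, and grant local admin rights
net user $userName $password /add /passwordchg:no
wmic useraccount where "name='$userName'" set PasswordExpires=false
net localgroup Administrators $userName /add
```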
The test that all is working is to see the servers listed as Ready in the RM Client under Configure Paths > Servers. Something I have observed in my demo environment is that when deployment nodes boot up before my all-in-one TFS machine, they don't seem to communicate. When that happens, I use a PowerShell script to restart the deployment nodes (e.g., Start-AzureVM -ServiceName $cloudservicename -Name $SERVERNAME).
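If the VMs are already running and it's just the agent that needs a kick, restarting the service remotely is an alternative. A sketch, assuming PowerShell remoting is enabled on the nodes and using illustrative node names:

```powershell
# Sketch: restart the Deployment Agent service on a list of nodes so they
# re-register with the RM server. Assumes PowerShell remoting is enabled and
# that you have admin rights on the targets. Node names are placeholders.
$nodes = 'WEBSERVER1', 'DBSERVER1'   # your node names here

Invoke-Command -ComputerName $nodes -ScriptBlock {
    # The service's display name as it appears in services.msc
    Restart-Service -DisplayName 'Microsoft Deployment Agent'
}
```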
In a production environment, your circumstances could be wildly different from a clean demo setup. I can’t cover all the scenarios here, but if you are experiencing problems and you have a complicated environment, then this post could be a good troubleshooting starting point.
Package Deployment Mechanism
When the agent is installed and running, it polls the RM server for packages to deploy on a schedule that can be set in the RM client under Administration > Settings > Deployer Settings > Look for packages to deploy every. On my installation, it was set at 28 seconds but if time is critical, you may want to shorten that.
When the agent detects that it has a package to deploy or actions to perform, it copies the necessary components over to C:\Users\RMDEPLOYER\AppData\Local\Temp\RM\T\RM (where RMDEPLOYER is the name of the account the Deployment Agent is running under which might be different in your setup). There are at least two types of folders that get created:
- Deployer Tools. This contains any tools, and their dependencies, that are needed to perform tasks. These could be executables, PowerShell scripts and so on. They are organized in a folder structure that reflects their Id and version number in the RM server database. For example, in my database, XCopy Deployer (irxcopy.cmd) has Id = 12 and Version = 2 in dbo.DeployerTool and is thus deployed to C:\Users\RMDEPLOYER\AppData\Local\Temp\RM\T\RM\DeployerTools\12\2.
- Action or Component. These folders correspond to the actions that will take place on that node. The names are the same as those in the Release Management client Deployment Log (from Releases > Releases). A sub-folder (whose name includes the date and time plus other more mysterious numbers) contains the tool, the files it will deploy or otherwise work with, and a file called IR_ProcessAutoOutput.log, which is the one displayed when clicking the View Log link in the Deployment Log.
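To see this layout for yourself on a node, something like the following works. It's a sketch that assumes the agent runs as RMDEPLOYER – adjust the path for your own service account:

```powershell
# Sketch: inspect what the Deployment Agent has copied down. Path assumes the
# agent runs as RMDEPLOYER.
$rmTemp = 'C:\Users\RMDEPLOYER\AppData\Local\Temp\RM\T\RM'

# Tool folders, laid out as <Id>\<Version> per dbo.DeployerTool
Get-ChildItem "$rmTemp\DeployerTools" -Directory -Recurse |
    Select-Object FullName

# Action/component folders, newest first, with their per-deployment logs
Get-ChildItem $rmTemp -Directory -Exclude 'DeployerTools' |
    Sort-Object LastWriteTime -Descending |
    ForEach-Object {
        Get-ChildItem $_.FullName -Recurse -Filter 'IR_ProcessAutoOutput.log'
    }
```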
Component folders warrant a little more analysis. What exactly gets deployed to the timestamped sub-folder depends on how the component is configured under Configure Apps > Components, specifically the Build Drop Location. If this is configured simply with a backslash (\), the whole drop folder is deployed. This can be refined by specifying a specific folder, in which case the contents of that folder are deployed. For example, the Contoso University\Deploy Web Site component specifies \_PublishedWebsites\ContosoUniversity.Web as the Build Drop Location folder, which means that just the website files are deployed.
It’s perhaps worth noting here that there are two mechanisms for the Deployment Agent to pull in files: UNC or HTTP(S). This is configured on a per-server basis in Configure Paths > Servers > Deployment Agent. UNC is much quicker than HTTP(S) but the latter method is the only choice if your node doesn’t have access to the UNC path.
A final aspect to touch on is that over time, the node would get choked with old deployments if no action were taken, and to guard against this the Deployment Agent runs a cleanup process on a schedule specified in Administration > Settings > Deployer Settings. This is something to look at if disk space is tight.
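If you suspect the cleanup schedule isn't keeping up, a quick way to measure the space old deployments are consuming is something like this – again assuming the RMDEPLOYER path:

```powershell
# Sketch: measure disk space used under the agent's temp folder. Path assumes
# the agent runs as RMDEPLOYER.
$rmTemp = 'C:\Users\RMDEPLOYER\AppData\Local\Temp\RM\T\RM'

$bytes = (Get-ChildItem $rmTemp -Recurse -File |
    Measure-Object -Property Length -Sum).Sum

'{0:N1} MB in {1}' -f ($bytes / 1MB), $rmTemp
```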
Debugging Package Deployment
Having described how package management works – at least at a high level – what are the troubleshooting options when a component is failing to deploy or run correctly? These are the logs that are available on a target node:
- IR_ProcessAutoOutput.log – saved to the action or component folder as above
- DeploymentAgent.log – cumulative log saved to C:\Users\RMDEPLOYER\AppData\Local\Temp\Microsoft\ReleaseManagement\12.0\Logs
- $GUID$DeploymentAgent.log – instance log saved to C:\Users\RMDEPLOYER\AppData\Local\Temp\Microsoft\ReleaseManagement\12.0\Logs. I'm not sure of the value of these, since I've never seen them contain anything
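A quick way to sift the cumulative log for problems without wading through the whole file – a sketch, again assuming the agent runs as RMDEPLOYER:

```powershell
# Sketch: pull recent errors out of the cumulative agent log. Path assumes the
# agent runs as RMDEPLOYER; adjust for your service account.
$log = 'C:\Users\RMDEPLOYER\AppData\Local\Temp\Microsoft\ReleaseManagement\12.0\Logs\DeploymentAgent.log'

# Last 200 lines, filtered for anything that looks like an error
Get-Content $log -Tail 200 | Select-String -Pattern 'error|exception'
```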
If between them these logs don’t provide sufficient information for troubleshooting your problem, you can increase the level of detail – this post has the details but don’t forget to restart the Microsoft Deployment Agent service. Additionally, if you have SMTP set up and working, you will also receive a Deployment Failed! notification email. This can be particularly useful because it invariably contains the command line that failed. This leads on to a useful debugging technique where you rerun the failing command yourself. For example, if the command was running a PowerShell script, simply open the PowerShell console, switch to the correct folder and paste in the command from the email. Chances are that you will get a much more informative error message this way.
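As a sketch of that rerun technique – the folder and script names below are entirely hypothetical placeholders; substitute the real working folder and command line from the email:

```powershell
# Sketch: rerun a failed component's command by hand to get a fuller error
# message. Everything below is illustrative - take the actual folder and
# command line from the Deployment Failed! email.
Set-Location 'C:\Users\RMDEPLOYER\AppData\Local\Temp\RM\T\RM\Contoso University\Deploy Web Site\2015-01-01 12.00.00'

# Hypothetical script and parameter names - paste your own command here
.\Deploy.ps1 -WebsiteName 'ContosoUniversity' -Verbose
```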
I know from personal experience that debugging RM components can be a frustrating experience. Typically, it’s a daft mistake with a parameter or something similar but sorting this type of problem out can really eat time. Do you have any tips for debugging components? Are there other error logs that I haven’t mentioned? Please do share your experiences and findings using the comments.
Cheers – Graham
The post Continuous Delivery with TFS: Behind the Scenes of the RM Deployment Agent appeared first on Please Release Me.
Dr Graham Smith is a former research scientist who got bitten by the programming and database bug so badly that in 2000 he changed careers to become a full-time software developer. Life moves on and Graham currently manages a team of software engineers and specialises in continuous delivery and application lifecycle management with the Team Foundation Server ecosystem.