When TFS 2010 is the sole source control and project management repository for a company with more than 10 developers, it is very important to handle backup/restore plans as accurately as possible. Since I needed to back up such a TFS server correctly, I compiled a checklist for myself that I'd like to share with all.
1. Installation media must be kept in a separate, reachable place, including Windows Server 2008 R2, SQL Server 2008, TFS 2010, SharePoint, and any original/modified process template, such as the x64 process template of Scrum for Team System version 3.
2. Know the good online/offline documentation for backup/restore: Backing up and Restoring Your Deployment. Some key points of this document are:
- Databases to back up: TFS_Configuration, TFS_Warehouse, TFS_CollectionName (all collections), TFS_Analysis (if any), ReportServer, ReportServerTempDB, WSS_Config (if any), WSS_Content (if any), WSS_AdminContent (if any)
- TFS_Configuration must be the last database to back up and the first database to restore.
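The ordering rule above can be sketched as a T-SQL backup script. This is a minimal sketch, not the official TFS backup procedure; the backup path and the collection database name (TFS_DefaultCollection) are assumptions for illustration:

```sql
-- Back up the TFS databases; TFS_Configuration must go last.
-- Paths and the collection name below are placeholders.
BACKUP DATABASE TFS_Warehouse         TO DISK = 'E:\TfsBackups\TFS_Warehouse.bak';
BACKUP DATABASE TFS_DefaultCollection TO DISK = 'E:\TfsBackups\TFS_DefaultCollection.bak';
BACKUP DATABASE ReportServer          TO DISK = 'E:\TfsBackups\ReportServer.bak';
BACKUP DATABASE ReportServerTempDB    TO DISK = 'E:\TfsBackups\ReportServerTempDB.bak';
-- ...plus TFS_Analysis and the WSS_* databases, if present...
BACKUP DATABASE TFS_Configuration     TO DISK = 'E:\TfsBackups\TFS_Configuration.bak';
```

On restore, the order is reversed: TFS_Configuration first, then the rest.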
3. Back up the Reporting Services encryption key using Reporting Services Configuration Manager. It's a tiny file, and it must be protected with a password. Don't forget to write down the password for future use, and don't pick a trivially weak one.
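This key extraction can also be scripted with the rskeymgmt.exe utility that ships with Reporting Services; a sketch, where the output path and the password are placeholders:

```cmd
rem Extract the Reporting Services symmetric key to a password-protected file.
rskeymgmt -e -f E:\TfsBackups\rskey.snk -p <StrongPasswordHere>
```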
4. Use this guideline for Restore Data to the Same Location, and this one for Restore a Single-Server Deployment to New Hardware.
5. You can always use a virtual machine to test your backups.
For the past few months I have been working deeply with continuous integration, automated builds, TFS, and Team Foundation Build. This period taught me some points that I'm glad to share with all:
1. Don't assume anything about the build agent. Specifically, don't assume any library is installed on the agent, any assembly is registered in the build agent's GAC, or any specific file/folder structure exists on the build agent. Otherwise you may encounter build breaks on other agents or at another time. The best practice here is to assume the build agent has just the .NET Framework installed and nothing more.
2. Always add assembly references to DLLs maintained in source control within the same project, and always as relative paths, never absolute paths. Otherwise you may encounter build breaks.
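For example, a reference to a source-controlled DLL can use a relative HintPath in the .csproj file. The folder name Libs and the assembly name here are assumptions for illustration:

```xml
<!-- A relative path keeps the build working on any agent or workspace. -->
<Reference Include="SomeVendor.Utilities">
  <HintPath>..\Libs\SomeVendor.Utilities.dll</HintPath>
</Reference>
```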
3. Never ever let anyone directly access the live machine that receives continuous integration results, especially for web application projects. Someone may manually add or modify a file/folder in the web application path, and you may end up with software that needs manual modifications that are never documented nor automated.
4. Automate database creation, either by maintaining database creation scripts in source control or with the help of your ORM's schema auto-generation feature.
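For instance, a versioned creation script kept in source control and run by the build might look like this; the table and its columns are hypothetical:

```sql
-- Scripts\001_CreateCompany.sql, kept under source control.
CREATE TABLE Company (
    Id      INT IDENTITY(1,1) PRIMARY KEY,
    Name    NVARCHAR(100) NOT NULL,
    Address NVARCHAR(200) NULL
);
```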
In many applications, data changes are tracked. For example, in TFS whenever you change any section of a work item, you can see that the change has been logged in the work item history: it says, for instance, that the owner or the time estimation field was changed from one value to another. Another example is article history in Wikipedia. Both TFS and Wikipedia are large applications, so what should you do if you want to add a history feature to a small or medium application? Suppose your application has 20 domain classes (tables), all of which need change tracking. Is it a good idea to add an extra class/table for each of them to save its history? Is there any other way?
As a solution I used a simple approach: a single class/table as the main history repository, XML serialization for persisting changes, and reflection to discover field changes. You need two events of your ORM, AfterLoad and OnSave (or something similar): serialize the object in AfterLoad as the old (current) value, serialize it in OnSave as the new value, save both in the database, and later show the change by deserializing them and using reflection to compare fields. This approach has a large overhead, as a very long record is saved in the database with each data change, but if a history feature is required, it is a concise and centralized solution, especially compared to approaches where the developer must save history manually by adding code to each UI section that modifies data.
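The idea can be sketched in plain .NET, independent of any ORM. The Company class and its property names here are hypothetical; in a real application the Serialize calls would sit in the ORM's AfterLoad/OnSave handlers and the XML snapshots would be stored in the history table:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Xml.Serialization;

// Hypothetical domain class, for illustration only.
public class Company
{
    public string Name { get; set; }
    public string Address { get; set; }
}

public static class HistoryHelper
{
    // Serialize an entity snapshot to XML (the "old"/"new" value kept in the history table).
    public static string Serialize<T>(T entity)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, entity);
            return writer.ToString();
        }
    }

    // Deserialize two snapshots and compare them property-by-property via reflection.
    public static List<string> Diff<T>(string oldXml, string newXml)
    {
        var serializer = new XmlSerializer(typeof(T));
        T oldObj = (T)serializer.Deserialize(new StringReader(oldXml));
        T newObj = (T)serializer.Deserialize(new StringReader(newXml));

        var changes = new List<string>();
        foreach (PropertyInfo p in typeof(T).GetProperties())
        {
            object oldValue = p.GetValue(oldObj, null);
            object newValue = p.GetValue(newObj, null);
            if (!object.Equals(oldValue, newValue))
                changes.Add(string.Format("{0}: '{1}' -> '{2}'", p.Name, oldValue, newValue));
        }
        return changes;
    }
}
```

Changing Address between two snapshots is then reported as `Address: 'Old St.' -> 'New St.'`, and the same helper works for every domain class, which is what makes the single history table possible.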
Castle ActiveRecord is a thin layer over NHibernate that makes NHibernate easy and fast to use. For data save/retrieve, Castle ActiveRecord exposes useful events like OnSave and OnUpdate that can be utilized to automate features such as data tracing or auditing. Unfortunately, there are situations where we need events that Castle ActiveRecord does not support directly. For example, we may need an event like AfterLoad or PostLoad to perform some specific operations in our application, but Castle ActiveRecord doesn't provide one. Googling showed me that I can leverage NHibernate's events to achieve this goal, but how can I catch NHibernate's events from Castle ActiveRecord? This was not an easy question to google. Asking on StackOverflow, plus some extra googling here and here, showed me the way.
Listening to NHibernate events in Castle ActiveRecord was much easier than I thought. All you must do is create a class that implements the appropriate NHibernate listener interface and decorate it with a special attribute. That's it. You are done! There is no need to modify web.config or add code when initializing Castle ActiveRecord. So here is my listener class:
[EventListener]
public class MyPostLoadEventListener : IPostLoadEventListener
{
    public void OnPostLoad(PostLoadEvent @event)
    {
        // Do whatever you want with @event (e.g. @event.Entity).
    }
}
Paging and sorting are common needs in ASP.NET applications. GridView itself has a default paging and sorting mechanism. Default paging has performance issues when manipulating large amounts of data, so people use a custom paging mechanism; they must then make sure that only the needed data is extracted from the database. For example, when page 3 just shows the 10 records from 21 to 30, there is no reason to load all data from the database. With sorting there are two problems. First, the default sorting mechanism only works with a few specific data sources, like typed DataSets, and not with other data sources such as those coming from NHibernate or Castle ActiveRecord. Second, default sorting sorts only the current page, not all the data.
To have efficient paging and a proper sorting mechanism, we should use custom paging and custom sorting. Scott Mitchell has a great tutorial series on paging and sorting with GridView, but the series is based on an ObjectDataSource over typed DataSets. As I'm working with Castle ActiveRecord as my data access layer, I was unable to use Scott's solution directly, so I decided to create my own solution with Castle ActiveRecord, based on Scott's original one.
Doing paging and sorting with Castle ActiveRecord is very easy, because Castle ActiveRecord has an API dedicated to paging and sorting: SlicedFindAll. In my solution, I first added two methods to a typical domain class named Company; second, notice that the GridView's markup needs no code-behind at all. Note that all my domain classes inherit from ActiveRecordBase:
// Company inherits from ActiveRecordBase<Company>.
public static Company[] FindAll(int maximumRows, int startRowIndex, string sortExpression)
{
    const string DESC = " DESC";
    Order[] orders;
    if (string.IsNullOrEmpty(sortExpression))
        orders = new Order[0];
    else if (sortExpression.EndsWith(DESC))
        orders = new Order[] { Order.Desc(sortExpression.Replace(DESC, string.Empty)) };
    else
        orders = new Order[] { Order.Asc(sortExpression) };
    return SlicedFindAll(startRowIndex, maximumRows, orders);
}

public static int TotalCount()
{
    return Count();
}
<asp:GridView runat="server" ID="gvCompany" AllowPaging="true" AllowSorting="true"
    AutoGenerateColumns="false" DataSourceID="odsCompany">
    <Columns>
        <asp:BoundField DataField="Name" HeaderText="Name" SortExpression="Name" />
        <asp:BoundField DataField="Address" HeaderText="Address" SortExpression="Address" />
        <asp:BoundField DataField="Tel" HeaderText="Tel" SortExpression="Tel" />
        <asp:BoundField DataField="Field" HeaderText="Field" SortExpression="Field" />
    </Columns>
</asp:GridView>
<asp:ObjectDataSource runat="server" ID="odsCompany" TypeName="MyDomainNamespace.Company"
    SelectMethod="FindAll" EnablePaging="True" SelectCountMethod="TotalCount" SortParameterName="sortExpression" />
When binding data to ASP.NET data-aware controls like GridView and FormView, there are always two choices. First, you can do the updating, inserting, and deleting operations entirely in code-behind: reference controls, get/set values, and perform the actual operation there. Second, you can do almost everything declaratively in the markup of your aspx/ascx: you just tell ASP.NET which method it should use, and ASP.NET takes care of it for you. Which of these ways is better? I've thought about it, and here are my findings, starting with the advantages of the declarative style:
1. Less logic is needed in code.
2. All logic is centralized in a single place, where the data source or data web control is defined.
3. You are not forced to work with the sophisticated API of data web control events and properties.
4. Development with this style is fast.
5. It is suitable for small/medium projects and average ASP.NET developers.
Now the disadvantages:
1. Working with non-primitive data types is a bit hard.
2. Bugs are not detected until runtime instead of compile time, because field and method names are defined as embedded strings and are not checked during compilation.
3. Debugging is hard, as you cannot set a breakpoint in declarative code.
4. Testing is a bit hard.
5. Declarative data binding makes your program a bit inflexible. Web controls are placed in item and edit templates, so accessing them is not so easy; one example is when you want to add an ASP.NET AJAX ConfirmButton to a GridView CommandField, for which you must use various events.
Reference: Scott Mitchell's tutorial series on data access in ASP.NET controls.