Wednesday, July 11, 2007

.NET Tracing Tutorial

The Trace and Debug classes are included in the .NET Framework for adding tracing support to an application. Proper tracing in the code helps with application debugging, bug fixing, and profiling. Three notable features of .NET tracing support are trace filtering using trace switches, pluggable trace listeners, and trace configuration through application configuration files without recompiling the application.

 

As a convention, the Debug class is used to instrument code for pure debugging purposes, and this instrumentation is not included in release builds. This is the default behavior of Visual Studio .NET: it removes Debug-based instrumentation from release builds, so only instrumentation based on the Trace class remains enabled in them. However, it is possible to change this default if required.

Support for trace filtering is very important in any application. .NET supports two levels of filtering for tracing based on the Debug and Trace classes. The first level is to compile the application with or without Trace and Debug support; using this feature, it is possible to completely remove the instrumentation based on the Trace/Debug classes from the application. The second level of filtering is supported using trace switches and application configuration files, which allows the filtering criteria to be changed without recompiling the application.

Note that all members of the Trace and Debug classes are static, so no instances of these classes need to be declared in order to use them. Also, both classes expose exactly the same members with the same signatures. For a complete member list, check the .NET SDK documentation.

Compiling to Include Debug/Trace Support

To enable tracing based on the Trace Class in an application, the compile time symbol, TRACE, must be defined. If this symbol is not defined, all tracing code based on the Trace Class is removed from the application during the compilation process. Similarly, tracing based on the Debug Class is controlled by the DEBUG symbol. This is the first level of filtering support as discussed above.

There are two methods of defining the TRACE (or DEBUG) symbol to include tracing support. The first is to use the #define directive in the application code, as in the following snippet; the #define directive must appear before any other code in the file, including using directives.

#define DEBUG    // #define TRACE to enable Tracing based on
// the Trace Class
using System;
...

The second method is to define the flag while compiling the application, as shown in the following.

csc /define:DEBUG myfile.cs ...         // C# users
vbc /define:TRACE=True myfile.vb ... // VB .NET users
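As a quick illustration (a hypothetical Program.cs, not part of any article download), the following C# program shows the effect of the symbols: compiled with only TRACE defined, the Debug.WriteLine call is stripped out; with both symbols defined, both calls remain.

using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Kept only when the TRACE symbol is defined at compile time
        Trace.WriteLine("Trace-based instrumentation");

        // Kept only when the DEBUG symbol is defined at compile time
        Debug.WriteLine("Debug-based instrumentation");
    }
}

// csc /define:TRACE Program.cs         -> only the Trace call survives
// csc /define:TRACE;DEBUG Program.cs   -> both calls survive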

Enabling or Disabling Tracing Using Trace Switches

It is possible to enable or disable Tracing based on the Trace/Debug classes using Trace Switches. By using trace switches, the level of tracing can be controlled by using the application configuration file. There are two types of tracing switches: BooleanSwitch and TraceSwitch. The BooleanSwitch is used as a flag to enable and disable the tracing. On the other hand, the TraceSwitch supports tracing levels so that trace messages at a particular level can be enabled or disabled. This is a very powerful filtering mechanism. It can be changed using configuration files without re-compilation of the application.

The following gives an example of the use of the trace switches.

BooleanSwitch boolSwitch = new BooleanSwitch("ABooleanSwitch",
                                             "Demo bool Switch");
TraceSwitch traceSwitch = new TraceSwitch("ATraceSwitch",
                                          "Demo trace switch");

// Set the switch values programmatically
boolSwitch.Enabled = true;
traceSwitch.Level = TraceLevel.Info;

Trace.WriteLineIf(boolSwitch.Enabled, "bool switch is enabled");
Trace.WriteLineIf(traceSwitch.TraceInfo,
                  "traceSwitch.TraceInfo is enabled");
Trace.WriteLineIf(traceSwitch.TraceError,
                  "traceSwitch.TraceError is enabled");

When using a TraceSwitch, a message at a given level is traced only if that level is at or below the switch's current level. For instance, in the above example both messages are written: with the level set to TraceLevel.Info, the TraceInfo and TraceError properties are both true, and only TraceVerbose output would be suppressed. The following table shows the TraceLevel enumeration values. To set the tracing level of a trace switch, use these values rather than hard-coded numbers.

Enumerated Value   Integer Value   Remarks
Off                0               No tracing
Error              1               Error messages only
Warning            2               Error and warning messages
Info               3               Error, warning, and informational messages
Verbose            4               All messages, including verbose output

The above example showed the use of the WriteLineIf method of the Trace class. A trace switch can also be used in a conditional statement, as shown in the following code.

if (boolSwitch.Enabled)
    Trace.WriteLine("Boolean switch is enabled");

The above examples used trace switches whose levels were set programmatically. It is also possible, and in practice more common and effective, to set the trace level through the application configuration file; with this approach there is no need to change, and therefore recompile, the application.

The following section explains the use of trace switches with application configuration files. For a Windows application, the configuration file is named applicationName.exe.config; for an ASP.NET application, it is named web.config.


  1. Add the following after the <configuration> tag but before the </configuration> tag. Note that the names here must match the names of the trace switches (the first parameter passed to the constructor) used in the application. Also note that because the example uses only two switches, there are only two entries; there should be one entry for each trace switch used in the application.

      <system.diagnostics>
        <switches>
          <!--
          0 - Disabled
          1 - Enabled
          -->
          <add name="ABooleanSwitch" value="0" />

          <!--
          0 - Disabled
          1 - Gives error messages
          2 - Gives errors and warnings
          3 - Gives errors, warnings, and informational messages
          4 - Gives verbose trace information
          -->
          <add name="ATraceSwitch" value="0" />
        </switches>
      </system.diagnostics>
    In this configuration, both switches are switched off.
  2. If a BooleanSwitch needs to be turned ON, change its value to something other than 0. For example, in the above, change the entry to <add name="ABooleanSwitch" value="1" />.
  3. If a TraceSwitch needs to be switched ON, change its value to the appropriate level from the table above (1 to 4); a code sketch that picks up these configured values follows this list.
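As a minimal sketch (assuming the configuration above lives in the application's .exe.config file), the following program creates the same switches without assigning any values in code, so their behavior is controlled entirely by the configuration file and can be changed without recompiling.

using System.Diagnostics;

class SwitchConfigDemo
{
    static void Main()
    {
        // Both switches read their values from the <switches> section
        // of the configuration file on first use.
        BooleanSwitch boolSwitch =
            new BooleanSwitch("ABooleanSwitch", "Demo bool Switch");
        TraceSwitch traceSwitch =
            new TraceSwitch("ATraceSwitch", "Demo trace switch");

        Trace.WriteLineIf(boolSwitch.Enabled, "bool switch is enabled");
        Trace.WriteLineIf(traceSwitch.TraceWarning,
                          "warnings (or higher) are enabled");
    }
}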

Trace Listeners

Tracing output from the Trace and Debug classes is sent to the trace listeners registered for the application. By default, if no other trace listeners are registered, the output goes to the DefaultTraceListener, which forwards messages to OutputDebugString; these can be captured using a debug monitor such as DebugView from Sysinternals.

The following example shows how to add a console, file, and event log as debug listeners.

Debug.Listeners.Add(new TextWriterTraceListener(Console.Out));
Debug.Listeners.Add(new TextWriterTraceListener(File.Create("Output.txt")));
Debug.Listeners.Add(new EventLogTraceListener("SwitchesDemo"));
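Listeners can also be registered declaratively in the application configuration file instead of in code, which keeps the choice of output out of the compiled application. A sketch (the listener name and file name here are illustrative):

<configuration>
  <system.diagnostics>
    <trace autoflush="true">
      <listeners>
        <add name="fileListener"
             type="System.Diagnostics.TextWriterTraceListener"
             initializeData="Output.txt" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>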

The download includes a MyTrace.cs and a MyTrace.exe.config file for experimenting with the Trace and Debug classes. Use the command-line C# compiler (csc) to compile the application with and without various combinations of the /define:TRACE and /define:DEBUG switches.

Downloads
Download source - 1 Kb

An In-Depth Coverage of ASP.NET 2.0's Master Pages: Part 2 of 3

Specifying a Master Page

You can specify the master page for a content page in two different ways:

  1. At the page level: In the content page's @Page directive, you can specify the master page. This is the method used when you create a content page through Visual Studio. A sample page directive is:
    <%@ Page MasterPageFile="~/MySite.master" %>

    Note: ASP.NET resolves the ~ operator to the root of the current application. The ~ operator can be used only in server controls; you cannot use the ~ operator for client elements (that is, HTML elements without the runat="server" specified).


  2. At the application or folder level: You also can specify the master page to use for content pages in the web.config file. For this, use the web.config file's pages element's masterPageFile attribute. By using this method, one can specify in one place the master page for all the content pages in the Web site or in a particular folder. A sample setting is:
    <configuration>
    <system.web>
    <pages masterPageFile="~/MySite.master" />
    </system.web>
    </configuration>


  Note: If you have specified the master page file to use in the web.config file and also at the page level in the @Page directive, the page setting overrides the web.config setting.


Nested Master Pages

Master pages can be nested; for example, a content page refers to a master page that in turn uses another master page. In a typical Web site, the pages are divided into sections, such as an admin section and a user section. All pages share some UI that is common to the full Web site, and each section has some UI shared only by the pages in that section. You can design for this scenario with nested master pages: the UI common to the full Web site goes into a parent master page, while each section gets its own master page that inherits from the parent. A sample application that uses nested master pages is included with this article.
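As a sketch of the markup involved (the file and ContentPlaceHolder names here are hypothetical), a section-level master page declares its parent in its @Master directive and places its own ContentPlaceHolder controls inside Content controls that map to the parent's placeholders:

<%-- Admin.master: a child master page that nests inside Site.master --%>
<%@ Master MasterPageFile="~/Site.master" %>

<asp:Content ContentPlaceHolderID="MainContent" runat="server">
    <!-- UI shared by every page in the admin section goes here -->
    <asp:ContentPlaceHolder ID="AdminContent" runat="server" />
</asp:Content>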

Accessing Controls on the Master Page

You might need to programmatically access the controls on the master page from content page code. The Page class has a property called Master which, at runtime, references the page's master page, if it has one. But, because the controls are added as protected members, you cannot access them directly using the Master property; use the FindControl method to reference them.

The following code in the content page's code-behind file shows an example of using FindControl:

Visual Basic

' Gets a reference to a Label control ("masterPageLabel")
' on the master page
Dim mpLabel As Label
mpLabel = CType(Master.FindControl("masterPageLabel"), Label)
If Not mpLabel Is Nothing Then
'Set content page title to master page control
Title.Text = mpLabel.Text
End If

C#

// Gets a reference to a Label control ("masterPageLabel")
// on the master page
Label mpLabel = (Label) Master.FindControl("masterPageLabel");
if(mpLabel != null)
{
//Set content page title to master page control
Title.Text = mpLabel.Text;
}

Note: Master pages can contain ContentPlaceHolder controls that hold default content. If you want to get a reference to a control that is inside a ContentPlaceHolder control, first get a reference to the ContentPlaceHolder and then use its FindControl method. Also note that if the ContentPlaceHolder's contents are overridden by the content page, the controls in its default content will not be accessible.
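For example (the control IDs here are hypothetical), reaching a Label that sits inside a ContentPlaceHolder's default content could look like this in the content page's code-behind:

// Get the ContentPlaceHolder first, then search within it
ContentPlaceHolder placeHolder =
    (ContentPlaceHolder)Master.FindControl("ContentPlaceHolder1");
if (placeHolder != null)
{
    // Returns null if the content page overrides the default content
    Label defaultLabel = (Label)placeHolder.FindControl("defaultLabel");
    if (defaultLabel != null)
    {
        string text = defaultLabel.Text;
    }
}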


Accessing Methods and Properties of the Master Page

Master pages not only contain controls but can also expose methods and properties. You can access these by using the Master property of the content page's Page class.

The Master property is defined in this way:

Visual Basic

Public ReadOnly Property Master As MasterPage

C#

public MasterPage Master { get; }

Here, MasterPage is the base class of your master page. To use this property, you can either type-cast it to your master page's class or use the @MasterType directive in the content page. When you use the @MasterType directive, the master page's public members can be directly accessed from the Master property as shown below:

Here, Home.aspx is the content page (notice the @MasterType directive). The master page has a public property called MpProperty:

Home.aspx

<%@ Page MasterPageFile="~/SiteMaster.master"
CodeFile="Home.aspx.cs"
Inherits="Home" %>
<%@ MasterType VirtualPath="~/SiteMaster.master" %>

Home.aspx.cs

this.Title = Master.MpProperty;    // No type-casting required.

Note: If you set the content page to use a particular master page but point the @MasterType directive at a totally different master page class, you will get an InvalidCastException at runtime.


A sample application for accessing master page controls and methods and properties is included.

Setting Master Page Programmatically

As you have already seen, you can specify a content page's master page at design time in its @Page directive or in the web.config file. What if you want to set the master page at runtime? You might want to change the look of the Web site at runtime if, say, the site is integrated into different portals or if different users should see a different look. This is a fairly common requirement.

As the master page is merged into the content page at the page initialization stage, you need to specify the master page in the content page's PreInit event. This event occurs before the page's Load event. Sample code is shown below.

Visual Basic

Sub Page_PreInit(ByVal sender As Object, _
ByVal e As EventArgs) Handles Me.PreInit
Me.MasterPageFile = "~/NewMaster.master"
End Sub

C#

void Page_PreInit(Object sender, EventArgs e)
{
this.MasterPageFile = "~/NewMaster.master";
}

Referencing External Resources

At runtime, the master page merges into the content page and runs in the context of the content page. The question that arises is: if you specify a path on the master page, say the URL of an image, is that path resolved relative to the location of the master page or of the content page? Consider an ASP.NET application with a master page and two content pages that are in different locations, as shown.


Figure 1: A Web application in which content pages are in different locations with respect to the master page.

There are two different ways in which paths are handled in master pages:


  1. Server controls: In server controls on master pages, ASP.NET modifies the URLs of properties that reference external resources. For example, in the above application, if you have an Image Web control on the master page SiteMaster.master and set its ImageUrl property to "Images/logo.jpg", then at runtime ASP.NET will modify the URL so that it resolves correctly in the context of the content page. The behavior is summarized below.

    Code in SiteMaster.master:

       <asp:Image ID="Image1"
    runat="server"
    ImageUrl="Images/logo.jpg" />

    Runtime rendering for Home.aspx:

    <img ID="Image1" src="Images/logo.jpg" />

    Runtime rendering for AdminTest.aspx:

    <img ID="Img1" src="../Images/logo.jpg" />

    Result: Image renders just fine for both Home.aspx and AdminTest.aspx.


  2. Plain HTML: If you have plain HTML elements (that is, HTML elements without runat="server" specified) on the master page, ASP.NET does not modify them and passes them through as is. This can create problems. For example, consider a plain HTML img on the master page as shown.

Code in SiteMaster.master:

<img ID="Image2" src="Images/logo.jpg" />

Runtime rendering for Home.aspx:

<img ID="Img2" src="Images/logo.jpg" />

Runtime rendering for AdminTest.aspx:

<img ID="Img3" src="Images/logo.jpg" />

Result: Image shows up in Home.aspx but not for AdminTest.aspx because the path is incorrect.

If you face the problem of paths on a master page not resolving properly when merging with content pages, you have the following options:


  1. Instead of using plain HTML, use server controls. You can do this by using corresponding Web server controls or by putting runat="server" in the plain HTML, thereby making them HTML server controls. So, in the above example you could change the plain img tags to
    <asp:Image ID="Image1" ImageUrl="Images/logo.jpg"
    runat="server" />

    OR

    <img ID="Img4" src="Images/logo.jpg" runat="server" >

    Note: Using this approach is going to lead to a small performance hit because server controls take a little more processing time.


  2. In the master page, give the full path for the plain HTML elements instead of a relative path. This method doesn't incur a performance hit, but the drawback is that hard-coding full URLs is rarely a good idea. In the example, the img would be specified as
    <img ID="Img5"
    src="http://www.mywebsite.com/Images/logo.jpg" >

  3. Keep your file layout in such a way that the content pages have the same relative position to the master page.
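A related tip for server controls: application-root-relative paths built with the ~ operator are resolved by ASP.NET no matter where the content page lives, so they sidestep the problem entirely (the control IDs here are illustrative):

<asp:Image ID="Image1" runat="server" ImageUrl="~/Images/logo.jpg" />
<img id="Img6" src="~/Images/logo.jpg" runat="server" />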

Some Useful Links


  1. This book has a great chapter on master pages (you can download this chapter for free!)
    A First Look at ASP.NET v 2.0
  2. The master pages section on the ASP.NET Web site
    Visual Web Developer 2005 Express Edition Beta Guided Tour

Find Out What's Taking Your .NET Code So Long to Run

Profiling generally is learning about your code's behavior. A big part of profiling is knowing where your code spends a lot of its time. Although I don't encourage profiling in early development, it can become critical when debugging subsystems that are too slow. It also is a useful technique near the end of a significant subsystem's development, especially if that subsystem performs outside of an acceptable range.

Visual Studio .NET 2005—especially Team Test—has some great tools for profiling, but they are designed to run in the IDE. An auto profiler that stays with your code would allow you to decide when to turn it off and on, even after deployment. This article demonstrates how to employ some useful .NET features like hashtables to build an easy-to-use auto profiler that can time a single statement or your entire application.

Implementing the Timestamp Class

The first step is building a class that tracks start and stop times. You need to know when you began profiling a block of code and the elapsed time since. You can use the DateTime class for start and stop times and incorporate the elapsed-time calculation into this class. Listing 1 shows the Stamp class.

Listing 1: The Stamp Class Contains Start and Stop Times

Friend Class Stamp
Private start As DateTime
Public Sub New()
start = DateTime.Now
End Sub

Public ReadOnly Property ElapsedTimeString() As String
Get
Return ElapsedTime.ToString()
End Get
End Property

Public ReadOnly Property StartTime() As String
Get
Return start.ToLongTimeString()
End Get
End Property

Public ReadOnly Property ElapsedTime() As TimeSpan
Get
Return DateTime.Now.Subtract(start)
End Get
End Property
End Class

Implementing the MarkTime Class

To keep the class easy to consume, place most of the work on yourself, the producer. The next class, MarkTime, uses a generic Stack of Stamp objects: it constructs the Stamp objects, pushes them onto the Stack, and returns the time stamp. You need something like a Stack here to handle recursion. For example, you could recurse 10 times in the same method, adding 10 starts to the MarkTime stack, before you pop and calculate any of the end times. Listing 2 shows the MarkTime class.

Listing 2: The MarkTime Class Contains a Stack to Handle Recursion

Friend Class MarkTime
Private stack As Stack(Of Stamp) = Nothing
Public Sub New()
stack = New Stack(Of Stamp)()
End Sub

Public Function AddStart() As String
Dim start As Stamp = New Stamp()
stack.Push(start)
Return start.StartTime
End Function

Public Function RemoveStart() As String
If (stack.Peek() Is Nothing = False) Then
Return stack.Pop().ElapsedTimeString
Else
Return ""
End If

End Function
End Class

Building the AutoProfiler with a Hashtable

Finally, the AutoProfiler contains a shared constructor, Sub New, and two shared methods, Stopp and Start. (Stopp with a double-p is used because Stop is a reserved word in VB.) Start calls a shared method, GetKey, that uses a StackTrace and Reflection to obtain the fully qualified name of the calling method. This name becomes the key into the hashtable. Hence, the consumer does not need to determine which method is being profiled. The hashtable takes care of that. Should a method be called multiple times, an entry already will exist in the hashtable and additional stops and starts will be handled by the same MarkTime object in the hashtable.

All a consumer of the AutoProfiler needs to do is call AutoProfiler.Start and AutoProfiler.Stopp. The class tracks start and stop times and the caller. Listing 3 contains the implementation of the AutoProfiler class.

Listing 3: The Implementation of the AutoProfiler Class

Public Class AutoProfiler
Private Shared hash As Hashtable = Nothing
Private Shared output As OutputType = OutputType.Console

Shared Sub New()
hash = New Hashtable
End Sub

Private Shared Function GetKey() As String
Const mask As String = "{0}.{1}"
Dim trace As StackTrace = New StackTrace
Dim method As MethodBase = trace.GetFrame(2).GetMethod()
Return String.Format(mask, _
method.ReflectedType.FullName, method.Name)
End Function

Public Shared Property OutputTo() As OutputType
Get
Return output
End Get
Set(ByVal value As OutputType)
output = value
End Set
End Property

<Conditional("DEBUG")> _
Public Shared Sub Start()
Dim marker As MarkTime = Nothing
Dim key As String = GetKey()
If (hash(key) Is Nothing) Then
marker = New MarkTime()
hash.Add(key, marker)
Else
marker = CType(hash(key), MarkTime)
End If
WriteLine("Started {0} at {1}", key, marker.AddStart())
End Sub

<Conditional("DEBUG")> _
Public Shared Sub Stopp()
Dim marker As MarkTime = Nothing
Dim key As String = GetKey()
If (hash(key) Is Nothing) Then
Throw New ArgumentOutOfRangeException(key, _
"Can't find start time entry")
End If
marker = CType(hash(key), MarkTime)
WriteLine("Stopped: {0}, elapsed time {1}", _
key, marker.RemoveStart())
End Sub

Private Shared Sub WriteLine(ByVal format As String, _
ByVal ParamArray args() As Object)
If (output = OutputType.Console) Then
System.Console.WriteLine(String.Format(format, args))
Else ' debug
System.Diagnostics.Debug.WriteLine( _
String.Format(format, args))
End If
End Sub

End Class

Listing 4 contains the complete AutoProfiler implementation, along with a simple enum to redirect profiler output and a test console application that shows how easy it is to profile with this technique.

Listing 4: The Complete AutoProfiler Implementation with a Sample Console App

Imports System
Imports System.Collections
Imports System.Collections.Generic
Imports System.Diagnostics
Imports System.IO
Imports System.Reflection
Imports System.Text


Module Module1

Sub Main()
Test()
End Sub

Sub Test()
Profiler.AutoProfiler.Start()
System.Threading.Thread.Sleep(5000)
Profiler.AutoProfiler.Stopp()
Console.ReadLine()
End Sub

End Module

Namespace Profiler

Public Enum OutputType
Console
Debug
Window
End Enum

Public Class AutoProfiler
Private Shared hash As Hashtable = Nothing
Private Shared output As OutputType = OutputType.Console

Shared Sub New()
hash = New Hashtable
End Sub

Private Shared Function GetKey() As String
Const mask As String = "{0}.{1}"
Dim trace As StackTrace = New StackTrace
Dim method As MethodBase = trace.GetFrame(2).GetMethod()
Return String.Format(mask, _
method.ReflectedType.FullName, method.Name)
End Function

Public Shared Property OutputTo() As OutputType
Get
Return output
End Get
Set(ByVal value As OutputType)
output = value
End Set
End Property

<Conditional("DEBUG")> _
Public Shared Sub Start()
Dim marker As MarkTime = Nothing
Dim key As String = GetKey()
If (hash(key) Is Nothing) Then
marker = New MarkTime()
hash.Add(key, marker)
Else
marker = CType(hash(key), MarkTime)
End If
WriteLine("Started {0} at {1}", key, marker.AddStart())
End Sub

<Conditional("DEBUG")> _
Public Shared Sub Stopp()
Dim marker As MarkTime = Nothing
Dim key As String = GetKey()
If (hash(key) Is Nothing) Then
Throw New ArgumentOutOfRangeException(key, _
"Can't find start time entry")
End If
marker = CType(hash(key), MarkTime)
WriteLine("Stopped: {0}, elapsed time {1}", _
key, marker.RemoveStart())
End Sub

Private Shared Sub WriteLine(ByVal format As String, _
ByVal ParamArray args() As Object)
If (output = OutputType.Console) Then
System.Console.WriteLine(String.Format(format, args))
Else ' debug
System.Diagnostics.Debug.WriteLine( _
String.Format(format, args))
End If
End Sub

End Class

Friend Class MarkTime
Private stack As Stack(Of Stamp) = Nothing
Public Sub New()
stack = New Stack(Of Stamp)()
End Sub

Public Function AddStart() As String
Dim start As Stamp = New Stamp()
stack.Push(start)
Return start.StartTime
End Function

Public Function RemoveStart() As String
If (stack.Peek() Is Nothing = False) Then
Return stack.Pop().ElapsedTimeString
Else
Return ""
End If

End Function
End Class

Friend Class Stamp
Private start As DateTime
Public Sub New()
start = DateTime.Now
End Sub

Public ReadOnly Property ElapsedTimeString() As String
Get
Return ElapsedTime.ToString()
End Get
End Property

Public ReadOnly Property StartTime() As String
Get
Return start.ToLongTimeString()
End Get
End Property

Public ReadOnly Property ElapsedTime() As TimeSpan
Get
Return DateTime.Now.Subtract(start)
End Get
End Property
End Class
End Namespace

Put Your Code on the Clock

You just created an AutoProfiler that enables a consumer to time any statement, group of statements, method, or larger block of code simply by calling AutoProfiler.Start and AutoProfiler.Stopp. The technique employs generics, hashtables, reflection, and a bit of knowledge about the StackTrace class. You will find it useful whenever you encounter code running slower than desired.

My father taught me that a craftsman is known by the quality of his tools. Writing high-quality .NET code depends on knowing which .NET tools exist and crafting those that are absent. I hope you find the AutoProfiler helpful and easy to use.

Whammy Tracing: Hassle-Free .NET Debugging

When I write an application with 40 or 50 thousand lines of code, its major capabilities can be tedious to debug line by line. I'd rather run these features at full speed and then go back and audit what actually occurred. The debugging tool that this article discusses, called the Whammy, provides an easy way to do that.

In a very unobtrusive way, the Whammy permits you to use the .NET Framework to add detailed tracing information to your application while writing very little code. For example, if you wrote code like the following:

System.Diagnostics.Trace.WriteLine(String.Format( _
    "Main called at {0}", DateTime.Now))

Then the Whammy is for you. It achieves effectively the same result, but with a lot less typing (as I have implemented it):

Whammy.ConsoleWriteWithTimestamp()

Author Note: Whammy comes from the idea that this code automatically answers the question 'who am I' about the calling method. Pronounced fast as one word, Who-Am-I becomes Whammy. That's my story and I am sticking to it.


By reading this article, you will learn about the Whammy and its corresponding technologies, ConditionalAttribute, the StackFrame and StackTrace objects, and reflection.

Using the ConditionalAttribute to Remove Deployed Code

You can apply the ConditionalAttribute, which takes a string argument, to a method (or to an attribute class). If the string, a conditional compilation constant, is defined, calls to the conditional methods are compiled in. Code tagged with the conditional attribute is always emitted to MSIL (intermediate language code), but if the constant is not defined, calls to it are omitted.

To define a conditional compilation constant, use the #define directive in C# or the #Const directive in Visual Basic, as in the following example:

#define MY_STRING          // C#
#Const MY_STRING = True    ' Visual Basic

Obtaining a StackFrame and StackTrace Object

Two of the many interesting types that the System.Diagnostics namespace defines are StackTrace and StackFrame. A StackTrace is an ordered collection of StackFrames, and a StackFrame represents a single frame on the call stack. From a StackFrame you can get the method being executed and, through reflection, all kinds of further information about it and its declaring type.

Using Reflection to Obtain Method Information

Reflection is a .NET technology that evolved from run-time type information (RTTI). It allows you to explore the .NET metamodel and supports a very dynamic style of programming in which you can receive an object blindly and then ask about its methods, fields, properties, and events. You can even invoke these members dynamically without knowing what they are beforehand.
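As a tiny illustration of that "blind" exploration (shown in C# for brevity; the Whammy listing below does the equivalent from Visual Basic), you can take any object and list its methods at runtime:

using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        object unknown = DateTime.Now;   // any object received "blindly"

        // Ask the object about its own methods at runtime
        foreach (MethodInfo method in unknown.GetType().GetMethods())
        {
            Console.WriteLine(method.Name);
        }
    }
}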

Reflection is a very useful technology that you can use to dynamically resolve the namespace and name of a calling method (the following Whammy class listing shows how). You also could expand the details of the reflected type and make the Whammy output more verbose:

Listing 1: The Whammy Class Automatically Provides Trace Information about the Calling Method

Imports System
Imports System.Collections.Generic
Imports System.Diagnostics
Imports System.Text

Module Module1

Sub Main()
Whammy.ConsoleWrite()
Console.ReadLine()
System.Diagnostics.Trace.WriteLine("Main called")

End Sub

End Module

' Thanks to Bill Wagner and Addison Wesley for the StackTrace tip
' in Effective C#

Public Class Whammy

<Conditional("DEBUG")> _
Public Shared Sub DebugWrite()
Debug.WriteLine(GetMyName())
End Sub

<Conditional("DEBUG")> _
Public Shared Sub DebugWriteWithTimeStamp()
Dim mask As String = String.Format("{0} called at {1}", _
GetMyName(), DateTime.Now)

Debug.WriteLine(mask)
End Sub

<Conditional("DEBUG")> _
Public Shared Sub ConsoleWrite()
Console.WriteLine(GetMyName())
End Sub

<Conditional("DEBUG")> _
Public Shared Sub ConsoleWriteWithTimestamp()
Dim mask As String = String.Format("{0} called at {1}", _
GetMyName(), DateTime.Now)
Console.WriteLine(mask)
End Sub

<Conditional("DEBUG")> _
Public Shared Function GetMyName()
Dim trace As New StackTrace()
Try
Return String.Format("{0}.{1}", _
trace.GetFrame(2).GetMethod().ReflectedType.FullName, _
trace.GetFrame(2).GetMethod().Name)
Catch ex As Exception
Return String.Format(trace.GetFrame(1).GetMethod().Name)
End Try
End Function

End Class

The code is pretty straightforward once you understand how its three key technologies (ConditionalAttribute, reflection, and the StackTrace and StackFrame objects) work. The first statement of interest (Whammy.ConsoleWrite) demonstrates how easy it is to employ the Whammy class. The next one, the <Conditional("DEBUG")> attribute, shows the proper usage of the ConditionalAttribute class.

When DEBUG is not defined, calls to the Whammy methods are simply omitted, which mitigates any cost of leaving the trace capability in place after deployment. You could also use a custom string for this attribute and turn the Whammy back on after deployment if you needed to.

The GetMyName function demonstrates how you can get a StackTrace and a StackFrame. GetMethod returns a MethodBase object, which tells you about the reflected type, the namespace and class, and the method name. The integer passed to GetFrame indicates which frame you'd like: GetFrame(0) gets the method you are currently in, GetFrame(1) would get the Whammy method that called GetMyName, and because you want the external caller, you use GetFrame(2).

The output from the sample indicates that the caller was WhammyDemo.Module1.Main, the Main method.

Another .NET Goody

The difference between advanced solutions and a lot of unnecessary work is knowing your tools' capabilities. Six years after first using the very rich, diverse set of tools in .NET, I am still amazed at how many cool technologies, such as reflection, attributes, and stack information, are available.

Acknowledgements

I'd like to offer a special thanks to Bill Wagner and Addison Wesley for letting me borrow the stack trace technique from Bill's excellent book Effective C#.

ASP.NET Tip: Adding Tracing to an Application

For some applications, such as console or Windows Forms applications, it's fairly easy to step through the code to debug an issue. However, tracking down errors in web applications and services can be difficult, especially when they happen only in production environments where debugging isn't possible. In these cases, it's helpful to be able to add trace statements to your code that appear only when tracing is enabled on the web page or the web site.

Step 1 is to enable tracing, either at a page or site level. To enable tracing for a single page, add the following to the @Page directive in your ASPX file:

Trace="true"
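For context, a complete directive might look like the following (the file and class names are hypothetical):

<%@ Page Language="C#" CodeFile="Default.aspx.cs" Inherits="_Default" Trace="true" %>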

By default, this will dump the tracing information at the bottom of the web page. In most debugging/tracing situations, this is sufficient. However, if you're working on tracing in an entire site, you can make the change in the Web.config file instead of in each web page. To do this, open up the Web.config and add the following line:

<trace enabled="true" localOnly="false" pageOutput="true" />

The enabled attribute turns tracing on and off. The localOnly attribute controls whether the trace output shows up on machines other than the web server. In my own development environment, I use VMware Workstation to host a Windows 2003 Server instance, so I have to set this attribute to false to see the output when I bring up a browser outside the virtual machine. If you are running IIS locally on your development machine, you won't need to set this attribute. The final attribute indicates that the output should be dumped at the bottom of the page. You can also opt to keep the trace output off the page and view it through the trace viewer (trace.axd) instead, but I've found that having the information right on the page is far easier to work with.

If you add this to a normal web page, you'll get a detailed report at the bottom of the page: request details, trace information, the control tree, cookies, headers, and server variables.


Having all this information can make it much easier to debug issues, even before you add your custom messages.

To add custom messages to the trace output, you can use the Page class's Trace property to send messages into the list of events that is dumped to the web page. Here's an example:

Trace.Write("Current value of variable: " + variableValue.ToString());

This will print the line of text at the appropriate point among all the other events documented in the trace output. Because tracing can be turned on and off through a configuration file, this is an easy way to leave in debugging output that you may need in the future. However, because it can be so easily enabled, be careful about how much personal information, such as identification numbers and passwords, you dump into the trace output.
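Relatedly, the page's Trace property also offers a Warn method, which renders its message in red in the trace output, and an IsEnabled property you can check before building expensive messages. A small C# sketch (the category and variable names are hypothetical):

if (Trace.IsEnabled)
{
    Trace.Write("Checkout", "Current value of variable: " + variableValue);
    Trace.Warn("Checkout", "Cart total did not match the expected value");
}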

Additional Debugging Techniques in C#

Mike Borromeo's "Debugging Techniques in C#" presented a useful Debug class that can log debug or trace messages with caller and code location information automatically.

There is one assumption made by the Debug class that was presented that is not always true: the Debug class assumes that it lives in the same assembly as the code that uses it. When a call comes in from another assembly, it raises an exception, and an exception from the debug harness itself is probably the last thing programmers want to see when debugging. I've modified Mike's code to make it friendlier to other assemblies, and I've also made some modifications that fit my own needs. These modifications may be useful to you as well. Here are the details:

1.  An exception is raised in Mike's code when a call from another assembly comes in. This is because namespaces are added to the Hashtable by getting all namespaces in the assembly that the Debug class lives in.

Assembly a = Assembly.GetAssembly( new Debug().GetType() );
foreach( Type type in a.GetTypes() )
{
if( ! namespaces.Contains( type.Namespace ) )
namespaces.Add( type.Namespace, true );
}

When a caller from another assembly wants to log a message, "namespaces[sf.GetMethod().DeclaringType.Namespace]" would get a "null". This would cause the namespace checking statement in the Log() method to throw an exception because one cannot convert "null" to a "bool".

if( (bool) namespaces[ sf.GetMethod().DeclaringType.Namespace ] )
OnLog( msg,
sf.GetFileName(),
sf.GetFileLineNumber(),
sf.GetMethod().ToString() );

To solve this, I added code in Log() to check if the namespace is in the Hashtable. If not, the new namespace would be added. Now one can compile the Debug class as a separate assembly, install it in the GAC and share it across all projects.
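A sketch of that defensive check (it reuses the sf and namespaces variables from the snippets above; the exact placement inside Log() may differ from the downloadable source):

string ns = sf.GetMethod().DeclaringType.Namespace;

// First call from an assembly we have not seen yet: register it,
// enabled by default, instead of letting the cast below throw.
if (!namespaces.Contains(ns))
    namespaces.Add(ns, true);

if ((bool)namespaces[ns])
    OnLog(msg,
          sf.GetFileName(),
          sf.GetFileLineNumber(),
          sf.GetMethod().ToString());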

2.  Mike's log message prints out the full path of the caller's file regardless of the length of the path. If a piece of source code sits in a deep subdirectory, the long path makes the real message harder to read. I improved this by printing at most two directory names from the path, namely the topmost and the bottommost ones.

private static string FormatMsg(string msg,
string file,
int lineNumber,
string methodName,
string method)
{
string str;
string part1 = "[" + msg + "] " + methodName + "()" + ":";
char[] delimiters = {'\\'};
string [] elements = file.Split(delimiters);

if (elements.Length<=5)
str = part1 + file + ":" + lineNumber + " " + method;
else
str = part1 + elements[0] + "/" +
elements[1] + "/.../" +
elements[elements.Length-2] + "/" +
elements[elements.Length-1] +
":" + lineNumber + " " + method +
" [" + file + "]";

return str;
}

So instead of printing a message like:

[ERROR c:\Document and Settings\Somebody\work\programming\
VisualStudio.NET\csharp\samples\debugging\debugconsole.cs:
157 Void .ctor()]:This is a test debug error statement

the message is cut to:

[ERROR: c:\Document and Settings\...\debugging\
debugconsole.cs:157 Void .ctor()]: This is a test debug
error statement

This has worked better for me. Of course, the choice of message format is highly personal; it doesn't hurt to have one more option.

Note that I did not use the DebugConsole; I felt the standard output window in the IDE is sufficient. For those using the DebugConsole, some code needs to be added to show the added namespaces in the DebugConsole window.

Downloads
Download source - 2 Kb

Implementing a Generic Object State Dumper

Every .NET class inherits the ToString method from the root Object class. ToString has several uses; one of its main uses is to dump the state of a class's objects to facilitate the debugging process. For example, in the following class, MyClass overrides ToString to return the state of MyClass objects.

using System;

class MyClass
{
public MyClass(){}


public override string ToString()
{
return "iMyState1 = " + iMyState1.ToString() + "\n" +
       "strMyState2 = " + strMyState2 + "\n";
}

private int iMyState1;
private string strMyState2;
}

The problem with the above approach is that it is a manual process; whenever MyClass is changed by adding or deleting attributes, ToString must be updated to dump the correct and complete state. For example, if a new state variable strMyState3 is added to MyClass, ToString must be updated to return the strMyState3 value as well. Similarly, if iMyState1 is deleted from the class, the references to iMyState1 must be removed from ToString for the code to compile.

The above process is manual and thus error prone. It is possible to implement a generic mechanism using .NET reflection to automate it, which relieves the developer of updating ToString frequently and lets them concentrate on implementing the functionality.

This article includes the source code of the Dbg class that implements the two functions DisplayObject and DisplayClass, which facilitate this process. The Dbg.DisplayObject is used to dump the state of an instance object and Dbg.DisplayClass is used to dump the static attributes of a class.
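As a rough idea of how such a reflection-based dump works (a simplified sketch only, not the downloadable Dbg implementation), the instance version boils down to enumerating the object's fields:

using System;
using System.Reflection;
using System.Text;

static class DbgSketch
{
    // Dump the names and values of an object's instance fields
    public static string DisplayObject(object obj)
    {
        StringBuilder state = new StringBuilder();
        BindingFlags flags = BindingFlags.Instance |
                             BindingFlags.Public |
                             BindingFlags.NonPublic;

        foreach (FieldInfo field in obj.GetType().GetFields(flags))
        {
            state.Append(field.Name)
                 .Append(" = ")
                 .Append(field.GetValue(obj))
                 .Append(Environment.NewLine);
        }
        return state.ToString();
    }
}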

Let's rewrite MyClass using Dbg class.

using System;
using MyDbg; // Class Dbg is defined in MyDbg namespace

class MyClass
{
public MyClass(){}


public override string ToString()
{
return Dbg.DisplayObject(this);
}

private int iMyState1;
private string strMyState2;

}

Now, if attributes are added to or deleted from MyClass, they are automatically taken care of and no changes are needed to the ToString method. Moreover, the resulting code is much more readable; the readability benefit is especially prominent when the class state contains several variables.

Mostly, you will need to use Dbg.DisplayObject, which dumps both the instance and static attributes of a class. However, sometimes you will need to see the static attributes of a class without instantiating an object of that class; in such situations, Dbg.DisplayClass is indispensable.

Note that the use of Dbg.DisplayObject and Dbg.DisplayClass is not limited to the ToString method. They can be used to dump the state of an object or class at any time and place, even for objects and classes that you did not implement and whose source you do not have access to.

The download for this article contains two files. The first is Dbg.cs, which implements the Dbg class. The second is test.cs, which shows the usage of Dbg.cs. To compile them, use the command csc Dbg.cs test.cs; it will create the test.exe application.

The easiest way to use Dbg in an application is to add the Dbg.cs file to the project and import MyDbg namespace, as done in the examples given.

Downloads :

Download source - 2 Kb

Building a Logging Object in .NET

Welcome to the next installment of the .NET Nuts & Bolts column. In this column, we talk about how you can use some of the classes in the .NET Framework to log information to different locations. The locations of choice in this article will be a file and the Windows Event Log.

Even though many programmers deny it about their code, or try using some name such as a "feature" to make it sound better, code will sometimes contain defects and errors. Whether that error is caused by a flawed design, bad specifications, incorrect formula, or the fact that you worked on it so long without sleep you were seeing things, you will eventually need a way to capture error messages to some location. In addition to error messages, it may also be advantageous to capture information about your code for performance or some other reason.

This brings me to the focus of this column. We'll explore how to build an object that can log information to a file or the event log. A database is another logical location to log information, but since I've covered some database stuff in the past, for this article I'll focus on accessing files and the event log.

Designing the Logging Object

Step one is to identify the problem. We want to build some type of logger object that can log to multiple locations. There are plenty of ways to approach this. One way would be to create different classes for each type of log we want to use and then create and use the desired class in our code. The downside to this approach is that it locks us into logging to the same place all of the time, or forces us to put a bunch of duplicated logic in our code to decide which object to create and use. A better way to handle this, at least in my mind, is to have a single logger object with which to interact. This logger can be configured with the location to which it will record information; based on how it is configured at run time, it will create and log messages to the appropriate location.

Now we've decided on a single object to interact with when logging, and we are going to allow the object to log to different locations. Wouldn't it be nice to have components that can be added to or removed from the logger, rather than having to add or remove a bunch of code in the logger itself? To accomplish this, we'll create an abstract class that defines the methods our individual log objects should contain. If all of the individual logging classes our logger uses implement the same interface, the logger stays relatively simple.

The last remaining item is to define what functionality our logger and its individual log components should have. To come up with our definition, we need to think about what methods and properties we want our logger to provide. It stands to reason that, since the primary purpose of our logger is error logging, we should have a method that accepts an exception as an input parameter. It also stands to reason that if we plan to use our logger to record informational messages as well, we'll need another method that accepts a generic message along with an indicator of whether the message is an error or some other type of informational or warning message, which will control how the logging occurs.

First we'll define a base class for our log objects to ensure that our logger object can interact with each of them. After the base class is defined, we'll create the individual classes that will handle logging to a file and the event log, and then we'll tie it all together by creating the logger component.

Sample Abstract Log Class

The following code outlines a base class for the log objects with which our logger will interact.

using System;
namespace CodeGuru.ErrorLog.Logs
{
/// <remarks>
/// Abstract class to dictate the format for the logs that our
/// logger will use.
/// </remarks>
public abstract class Log
{
/// <value>Available message severities</value>
public enum MessageType
{
/// <value>Informational message</value>
Informational = 1,
/// <value>Failure audit message</value>
Failure = 2,
/// <value>Warning message</value>
Warning = 3,
/// <value>Error message</value>
Error = 4
}

public abstract void RecordMessage(Exception Message,
MessageType Severity);

public abstract void RecordMessage(string Message,
MessageType Severity);
}
}

Creating the Logging Object to Write to a File

Reading and writing to files is accomplished through classes in the System.IO namespace. The FileStream object is used to read or write files. The StreamReader and StreamWriter are used in conjunction with the FileStream to perform the actual action. Below we'll create an object that extends our Log base class and that uses the FileStream and the StreamWriter to write a message to a file.

Sample File Logging Class

using System;
using System.IO;
using System.Text;
namespace CodeGuru.ErrorLog.Logs
{
/// <remarks>
/// Log messages to a file location.
/// </remarks>
public class FileLog : Log
{
// Internal log file name value
private string _FileName = "";
/// <value>Get or set the log file name</value>
public string FileName
{
get { return this._FileName; }
set { this._FileName = value; }
}

// Internal log file location value
private string _FileLocation = "";
/// <value>Get or set the log file directory location</value>
public string FileLocation
{
get { return this._FileLocation; }
set
{
this._FileLocation = value;
// Verify a '\' exists on the end of the location
if( this._FileLocation.LastIndexOf("\\") !=
(this._FileLocation.Length - 1) )
{
this._FileLocation += "\\";
}
}
}

/// <summary>
/// Constructor
/// </summary>
public FileLog()
{
this.FileLocation = "C:\\";
this.FileName = "mylog.txt";
}

/// <summary>
/// Log an exception.
/// </summary>
/// <param name="Message">Exception to log. </param>
/// <param name="Severity">Error severity level. </param>
public override void RecordMessage(Exception Message,
Log.MessageType Severity)
{
this.RecordMessage(Message.Message, Severity);
}

/// <summary>
/// Log a message.
/// </summary>
/// <param name="Message">Message to log. </param>
/// <param name="Severity">Error severity level. </param>
public override void RecordMessage(string Message,
Log.MessageType Severity)
{
FileStream fileStream = null;
StreamWriter writer = null;
StringBuilder message = new StringBuilder();

try
{
fileStream = new FileStream(this._FileLocation +
this._FileName, FileMode.OpenOrCreate,
FileAccess.Write);
writer = new StreamWriter(fileStream);

// Set the file pointer to the end of the file
writer.BaseStream.Seek(0, SeekOrigin.End);

// Create the message
message.Append(System.DateTime.Now.ToString())
.Append(",").Append(Message);

// Force the write to the underlying file
writer.WriteLine(message.ToString());
writer.Flush();
}
finally
{
if( writer != null ) writer.Close();
}
}
}
}

Creating the Logging Object to Write to the Event Log

The .NET Framework includes classes for interfacing with the Windows Event Log; they are located in the System.Diagnostics namespace. These classes allow you to write to any of the existing log locations or to create your own event log. There are different message types that can be logged to the event log, which is where I got the definition of the types that I used (informational, failure, warning, and error).

It is important to note that the Event Log was not designed to hold information about routine application errors; it is intended for items of a more catastrophic, system level. Normal application errors should be logged to another location, such as a file or a database.

Below we'll create an object that extends our Log base class and that uses the Event log classes in the System.Diagnostics namespace to write a message to the event log.

Sample Event Logging Class

using System;
using System.Diagnostics;
using System.Text;
namespace CodeGuru.ErrorLog.Logs
{
/// <remarks>
/// Log messages to the Windows Event Log.
/// </remarks>
public class EventLog : Log
{
// Internal EventLogName destination value
private string _EventLogName = "";
/// <value>Get or set the name of the destination log</value>
public string EventLogName
{
get { return this._EventLogName; }
set { this._EventLogName = value; }
}

// Internal EventLogSource value
private string _EventLogSource;
/// <value>Get or set the name of the source of entry</value>
public string EventLogSource
{
get { return this._EventLogSource; }
set { this._EventLogSource = value; }
}

// Internal MachineName value
private string _MachineName = "";
/// <value>Get or set the name of the computer</value>
public string MachineName
{
get { return this._MachineName; }
set { this._MachineName = value; }
}

/// <summary>
/// Constructor
/// </summary>
public EventLog()
{
this.MachineName = ".";
this.EventLogName = "MyEventLog";
this.EventLogSource = "MyApplication";
}

/// <summary>
/// Log an exception.
/// </summary>
/// <param name="Message">Exception to log.</param>
/// <param name="Severity">Error severity level.</param>
public override void RecordMessage(Exception Message,
Log.MessageType Severity)
{
this.RecordMessage(Message.Message, Severity);
}

/// <summary>
/// Log a message.
/// </summary>
/// <param name="Message">Message to log.</param>
/// <param name="Severity">Error severity level.</param>
public override void RecordMessage(string Message,
Log.MessageType Severity)
{
StringBuilder message = new StringBuilder();
System.Diagnostics.EventLog eventLog =
new System.Diagnostics.EventLog();

// Create the source if it does not already exist
if( !System.Diagnostics.EventLog.SourceExists(
this._EventLogSource) )
{
System.Diagnostics.EventLog.CreateEventSource(
this._EventLogSource, this._EventLogName);
}
eventLog.Source = this._EventLogSource;
eventLog.MachineName = this._MachineName;

// Determine what the EventLogEntryType should be
// based on the severity passed in
EventLogEntryType type = EventLogEntryType.Information;

switch(Severity.ToString().ToUpper())
{
case "INFORMATIONAL":
type = EventLogEntryType.Information;
break;
case "FAILURE":
type = EventLogEntryType.FailureAudit;
break;
case "WARNING":
type = EventLogEntryType.Warning;
break;
case "ERROR":
type = EventLogEntryType.Error;
break;
}
message.Append(Severity.ToString()).Append(",").Append(
System.DateTime.Now).Append(",").Append(Message);
eventLog.WriteEntry(message.ToString(), type);
}
}
}

Building the Logger

Now to tie it all together we need to create our logger object to interact with the individual log classes. The sample code is given below. An item of note is how the set accessor method of the LogType property results in the appropriate log object being created. The actual RecordMessage methods do nothing more than call the appropriate method on the desired log class.

The actual object used to do the logging is declared of type Logs.Log. This will allow us to create and use any objects that extend Log as its base class.

Sample Logging Class

using System;
namespace CodeGuru.ErrorLog
{
/// <remarks>
/// Managing class to provide the interface for and control
/// application logging. It utilizes the logging objects in
/// ErrorLog.Logs to perform the actual logging as configured.
/// </remarks>
public class Logger
{
/// <value>Available log types.</value>
public enum LogTypes
{
/// <value>Log to the event log.</value>
Event = 1,
/// <value>Log to a file location.</value>
File = 2
}

// Internal logging object
private Logs.Log _Logger;

// Internal log type
private LogTypes _LogType;
/// <value></value>
public LogTypes LogType
{
get { return this._LogType; }
set
{
// Set the Logger to the appropriate log when
// the type changes.
switch( value )
{
case LogTypes.Event:
this._Logger = new Logs.EventLog();
break;
default:
this._Logger = new Logs.FileLog();
break;
}
}
}

/// <summary>
/// Constructor
/// </summary>
public Logger()
{
this.LogType = LogTypes.File;
}

/// <summary>
/// Log an exception.
/// </summary>
/// <param name="Message">Exception to log.</param>
/// <param name="Severity">Error severity level.</param>
public void RecordMessage(Exception Message,
Logs.Log.MessageType Severity)
{
this._Logger.RecordMessage(Message, Severity);
}


/// <summary>
/// Log a message.
/// </summary>
/// <param name="Message">Message to log.</param>
/// <param name="Severity">Error severity level.</param>
public void RecordMessage(string Message,
Logs.Log.MessageType Severity)
{
this._Logger.RecordMessage(Message, Severity);
}
}
}

Using the Logger

Now that we've built the logger object that we'll use to do all of the logging, along with its supporting log objects, let's give it a try. The example below should result in a file c:\mylog.txt being written and a MyEventLog log being added to the Windows Event Log.

Sample Logger Usage

Logger logger = new Logger();

// Log to a file (default settings)
logger.RecordMessage("Testing", Logs.Log.MessageType.Error);
logger.RecordMessage(new Exception("My test exception"),
Logs.Log.MessageType.Error);

// Log to the event log
logger.LogType = Logger.LogTypes.Event;
logger.RecordMessage("Testing", Logs.Log.MessageType.Error);
logger.RecordMessage(new Exception("My test exception"),
Logs.Log.MessageType.Error);

Possible Enhancements

Now we have an object that can be used for logging information and errors. There are all sorts of enhancements that could make this even more valuable. Here are some ideas that you can consider for yourself.

Happy Breakpoints for Testing!

The other day I was testing a bunch of code for my upcoming book Debugging Microsoft .NET and Windows Applications (Microsoft Press) and found myself wishing there was a way to set breakpoints on all the functions in a particular source file. In order to initially verify that my unit test was actually doing anything worthwhile, I wanted to at least ensure I was executing each method in a particular file.

Since I was working with a new set of code, I was doing the initial testing and scrolling like mad so I could click in the margin next to each function to get that breakpoint set. After I'd scrolled halfway through the file, I was cursing at myself, thinking that there had to be a better way. Once I finished the initial set of testing, I simply had to look for a way to automate setting breakpoints for a file no matter what language I was using. At the rate I was going, I was going to wear the bottom off my poor optical mouse scrolling all over the place. Fortunately, the very cool extensibility model in Visual Studio .NET made it relatively easy to achieve my goal. Even better, instead of grinding through and writing an add-in, this was something easily accomplished with a macro. The icing on the cake is that with the code at the end of this article, you can set, and more importantly remove, those function breakpoints without messing up any of your carefully set existing breakpoints!

There are two pieces of work necessary to achieve the goal. The first is figuring out how to find each function or method in a file. The second is setting and removing the breakpoints. Finding the functions in a source file can be a daunting task involving parsers and all sorts of weird technologies such as FLEX and YACC. If you've ever worked on real parsers, you know they are extremely hard to do. In fact, since one premise behind .NET is language independence, there's no telling how many languages you might have to write parsers for simply to find the function locations. While it would be nice to spend the next five years working on parsers, I just wanted a simple utility!

The good news is that Visual Studio .NET does all the heavy lifting for you and essentially hands you the parse tree for the current document through a very clean interface. If you search MSDN for "Automation Object Model Chart," you'll find a chart that shows a set of very cool objects such as CodeElement, CodeFunction, CodeClass, and CodeEnum. By enumerating and recursing these elements, you get the complete layout of what's in a particular source file. Keep in mind that this source file enumeration is available only for files that are part of the open project. The fact that this works for all languages is such a huge feature that I'm sure people will build all sorts of very cool tools that we always wished for in the past but didn't have the time to write a complete parser for.

If something as complicated as the parsing is already done for you, having complete access to the Debugger object to set or clear breakpoints is almost anticlimactic. The algorithm for setting breakpoints on all function entry is the following:

Get the code elements for the active document in the project
for each code element
{
Is the element a Namespace, Class or Struct?
{
Recurse the child elements
}
Is the element a function or property?
{
Get the line where this element starts
Set a file and line breakpoint at that location
}
}

One thing I want to point out about SetBreakpointsOnAllCurrentDocFunctions is that after you run it, you'll see the breakpoints set, but they might look like they are in the wrong place. For example, in a .NET program the breakpoint can be sitting on an attribute before the function. That's perfectly fine; once you start debugging, the debugger does the exact right thing and moves the breakpoint down to the first executing line of the function.

After I whipped up the first version of the SetBreakpointsOnAllCurrentDocFunctions macro, it ran just like I expected. However, further testing turned up some problems, not in my code, but in Visual Studio .NET, that I need to make you aware of. The worst problem is with C++ header files. For some reason, nearly everything in the file is marked as a function, and the starting and ending points for the elements don't relate to reality in the file. I played around with it quite a bit and considered not processing header files, but since many people do put inline functions in them, I decided against it. What you'll see are a bunch of breakpoints on empty lines and in comments, but the good news is that you can forget about them because the debugger will ignore breakpoints that can't be set.

The second issue I found was that some C++ source files are not properly parsed by the environment and might be missing a function or two in the code model. In those cases, there's nothing you can do to get the actual function unless you want to grind through the file yourself. The good news is that it's not something you'll run into very much. For those of you doing primarily .NET development everything lines up perfectly with Visual Basic .NET and C#.

The last issue I ran into was what got me thinking that I needed a way to easily remove any breakpoints put in by SetBreakpointsOnAllCurrentDocFunctions. If you click on the red dot breakpoint marker in the source file, you'll find that it will never toggle off. You can clear the breakpoint by either right clicking on it and selecting Remove from the context menu or clearing it from the Breakpoint window.

If I was going to be setting all these breakpoints automatically, I simply had to have a way to clear them out. While I could have grabbed the breakpoints collection from the Debugger object and wiped it clean, having a macro remove your carefully placed breakpoints isn't that useful. In reading about the Breakpoint object, I saw that Microsoft was really thinking ahead and gave us a Tag property where we could squirrel away a user-defined string! All I had to do was uniquely identify any breakpoints I set and they'd be a piece of cake to remove. The one worry I had was that the Tag field wouldn't be saved between sessions, but a little experimentation proved it was. For the tag, I use the filename as part of it, so when you run the RemoveBreakpointsOnAllCurrentDocFunctions macro it only removes the breakpoints put in by SetBreakpointsOnAllCurrentDocFunctions for the active file and leaves any others you set in that file alone.

Armed with SetBreakpointsOnAllCurrentDocFunctions and RemoveBreakpointsOnAllCurrentDocFunctions I've found that my testing is going easier because fewer than 200 lines of macro code quickly automate something I was doing manually all the time. Now you can easily find out if all the functions in a file are being called. As you hit each function, clear the breakpoint. At the end of the run, you'll see exactly which functions haven't been called. Good luck and crank that code coverage.

About the Author

John Robbins is the co-founder of Wintellect (http://www.wintellect.com), a consulting, debugging, and education firm that helps clients ship better code faster. He is also the author of Debugging Microsoft .NET and Windows Applications (Microsoft Press) as well as the Bugslayer columnist for MSDN Magazine. Before founding Wintellect, John was an architect and product manager at NuMega Technologies for products such as BoundsChecker, TrueTime, and TrueCoverage. Prior to joining the software world, John was a Paratrooper and Green Beret in the U.S. Army.

''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
' BreakPoints Module
'
' John Robbins - Wintellect - http://www.wintellect.com
'
' A module that will set and clear breakpoints at the entry point of
' all functions in the current source file. What's even cooler is
' that this code will not screw up breakpoints you already have set!
' Additionally, when removing breakpoints, it will only remove the
' breakpoints put for the current source file.
'
' There are some caveats:
' 0. The breakpoints set by the SetBreakpointsOnAllCurrentDocFunctions
' macro show up as you'd expect in the source windows as a red dot
' next to the line where they were set. However, you can click on
' that dot all day long as it will not clear it. Either run
' RemoveBreakpointsOnAllCurrentDocFunctions or clear them from the
' Breakpoints window. This seems to be a bug in the IDE.
' 1. There's a bug in the CodeModel for C++ header files. Pretty much
' anything in one gets called a function. There's no clean way to
' double check, short of parsing the file yourself, if the
' TextPoint values are real. If you run this on a header, you'll
' get breakpoints all over the place. Fortunately, the debugger
' is smart enough to ignore them.
' 2. The breakpoints are set at what the CodeElement.StartPoint
' property says is the first line. This can be at the start of an
' attribute or something. Don't worry, the debugger does the right
' thing and moves the breakpoint to the first executable line
' inside the function. (Go Microsoft!) If a .NET method is empty,
' the breakpoint is set on the end of the function.
' 3. There's an odd bug you might run into when debugging this code.
' After setting the breakpoint, I access it to set the Tag field so
' I can identify which breakpoints this macro set. When debugging,
' that access seems to cause a Null Reference exception in some
' cases. However, if you don't set breakpoints, it will run fine.
' 4. In some C++ source files, the CodeModel occasionally does not
' have a function or two that's shown in the code window. Since
' you can't get them, you can't set breakpoints on them.
' 5. The active document returned from DTE.ActiveDocument is odd.
' It's the last code document that had focus. This can mean you're
' looking at the Start Page, but setting breakpoints on something
' hidden. These macros force you to have the cursor in a real code
' window before they will run.
'
' Version 1.0 - August 28, 2002
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''




Imports EnvDTE
Imports System.Diagnostics
Imports System.Collections
Public Module BreakPoints
Const k_ConstantTagVal As String = "Wintellect Rocks "
Public Sub SetBreakpointsOnAllCurrentDocFunctions()
' Get the current source file name doing all the checking.
Dim CurrDoc As Document = GetCurrentDocument()
If (CurrDoc Is Nothing) Then
Exit Sub
End If
' Get the source file name and build up the tag value.
Dim SrcFile As String = CurrDoc.FullName
Dim TagValue As String = BuildTagValue(CurrDoc)


' While I might have a document, I still need to check this
' is one I can get a code model from.




Dim FileMod As FileCodeModel = _
CurrDoc.ProjectItem.FileCodeModel
If (FileMod Is Nothing) Then
MsgBox("Unable to get code model from document.", _
MsgBoxStyle.OKOnly, _
k_ConstantTagVal)
Exit Sub
End If
' Everything's lined up to enumerate!
ProcessCodeElements(FileMod.CodeElements, SrcFile, TagValue)
End Sub
Private Sub ProcessCodeElements(ByVal Elems As CodeElements, _
ByVal SrcFile As String, _
ByVal TagValue As String)
' Look at each item in this collection.
Dim CurrElem As CodeElement
For Each CurrElem In Elems
' If I'm looking at a class, struct or namespace, I need 
' to recurse.
If (vsCMElement.vsCMElementNamespace = CurrElem.Kind) Or _
(vsCMElement.vsCMElementClass = CurrElem.Kind) Or _
(vsCMElement.vsCMElementStruct = CurrElem.Kind) Then
' This is kinda odd. Some CodeElements use a Children
' property to get sub elements while others use 
' Members.
Dim SubCodeElems As CodeElements = Nothing
Try
SubCodeElems = CurrElem.Children
Catch
Try
SubCodeElems = CurrElem.Members
Catch
SubCodeElems = Nothing
End Try
End Try
If (Not (SubCodeElems Is Nothing)) Then
If (SubCodeElems.Count > 0) Then
ProcessCodeElements(SubCodeElems, _
SrcFile, _
TagValue)
End If
End If
ElseIf (CurrElem.Kind = _
vsCMElement.vsCMElementFunction) Or _
(CurrElem.Kind = _
vsCMElement.vsCMElementProperty) Then
' Interestingly, Attributed COM component attributes 
' show up broken out into their functions. The only
' thing is that their StartPoint property is invalid
' and throws an exception when accessed.
Dim TxtPt As TextPoint
Try
TxtPt = CurrElem.StartPoint
Catch
TxtPt = Nothing
End Try
If (Not (TxtPt Is Nothing)) Then
Dim LineNum As Long = TxtPt.Line
Dim Bps As EnvDTE.Breakpoints
' Plop in one of my breakpoints.
Bps = DTE.Debugger.Breakpoints.Add(File:=SrcFile, _
Line:=LineNum)
' Get the BP from the collection and set the tag
' property so I can find the ones I set.
Try
' There's some sort of bug here. If you debug
' through this with the VSA debugger, it fails
' (0x8004005's) on accessing the breakpoints 
' collection occasionally. However, if you 
' run it, life is good. Whateva!
Dim Bp As EnvDTE.Breakpoint
For Each Bp In Bps
Bp.Tag = TagValue
Next
Catch
End Try
End If
End If
Next
End Sub
Public Sub RemoveBreakpointsOnAllCurrentDocFunctions()
' This is a much simpler function since I set the tag value on
' the breakpoints, I can remove them simply by screaming 
' through all BPs and removing those.
Dim CurrDoc As Document = GetCurrentDocument()
If (CurrDoc Is Nothing) Then
Exit Sub
End If
Dim TagValue As String = BuildTagValue(CurrDoc)
Dim CurrBP As EnvDTE.Breakpoint
For Each CurrBP In DTE.Debugger.Breakpoints
If (CurrBP.Tag = TagValue) Then
CurrBP.Delete()
End If
Next
End Sub
Private Function GetCurrentDocument() As Document
' Check to see if a project or solution is open. If not, you
' can't get at the code model for the file.
Dim Projs As System.Array = DTE.ActiveSolutionProjects
If (Projs.Length = 0) Then
MsgBox("You must have a project open.", _
MsgBoxStyle.OKOnly, _
k_ConstantTagVal)
GetCurrentDocument = Nothing
Exit Function
End If
' Getting the active document is a little odd.
' DTE.ActiveDocument will return the active code document, but
' it might not be the real ACTIVE window. It's quite 
' disconcerting to see macros working on a document when you're
' looking at the Start Page. Anyway, I'll ensure the active 
' document is really the active window.
Dim CurrWin As Window = DTE.ActiveWindow
Dim CurrWinDoc As Document = CurrWin.Document
Dim CurrDoc As Document = DTE.ActiveDocument
' Gotta play the game to keep from null ref exceptions in the 
' real active doc check below.
Dim WinDocName As String = ""
If Not (CurrWinDoc Is Nothing) Then
WinDocName = CurrWinDoc.Name
End If
Dim DocName As String = "x"
If Not (CurrDoc Is Nothing) Then
DocName = CurrDoc.Name
End If
If ((CurrWinDoc Is Nothing) Or _
(WinDocName <> DocName)) Then
MsgBox("The active cursor is not in a code document.", _
MsgBoxStyle.OKOnly, _
k_ConstantTagVal)
GetCurrentDocument = Nothing
Exit Function
End If
' While I might have a document, I still need to check this is
' one I can get a code model from.
Dim FileMod As FileCodeModel = _
CurrDoc.ProjectItem.FileCodeModel
If (FileMod Is Nothing) Then
MsgBox("Unable to get code model from document.", _
MsgBoxStyle.OKOnly, _
k_ConstantTagVal)
GetCurrentDocument = Nothing
Exit Function
End If
GetCurrentDocument = CurrDoc
End Function
Private Function BuildTagValue(ByVal Doc As Document) As String
BuildTagValue = k_ConstantTagVal + Doc.FullName
End Function
End Module

Testing Visual Basic .NET with NUnit

NUnit is a testing framework for all .NET languages. The basic idea is that you create a class library that calls into your code, sending test values and evaluating responses automatically. The big payoff is that NUnit can run these tests automatically and be integrated as part of your build, test, and deployment lifecycle.

Let me take a moment or two to describe how you can integrate NUnit into your software development lifecycle and, in so doing, offer some good arguments for doing so.

 

I have long subscribed to the idea first promoted by others (like Grady Booch) that testers and programmers should be involved as observers during the analysis phase. At this juncture the programmer's role is to begin prototyping things that represent the riskiest aspects of software development, and the tester's role is to begin recording necessary tests to validate these business rules. By integrating NUnit all players—the customer, analysts, programmers, and testers—have a common frame of reference for recording these tests. Because NUnit tests are written in .NET the testers can write tests without knowing the inner-plumbing details of the code; testers simply need to know how to interact with the business rules. And, because the customers and analysts are involved in defining these tests from the outset, they are more likely to get tests that validate the business rules from the customer's and analyst's perspective, rather than all-positive tests defined by the programmer.

The net effect is that everyone knows in advance what the tests must be, the platform in which those tests will be captured, and NUnit can be used as part of an automated development lifecycle. Because the NUnit GUI uses a simple go, no-go means of indicating whether tests have passed or failed, the fudge factor is eliminated. A positive benefit is that even if you cannot afford full time testers, you do not have to leave test definitions up to the programmers alone. Everyone can participate in the test-definition process, even if the programmers are actually codifying the tests.

In this article I will demonstrate how to use some aspects of NUnit and hopefully convey the powerful benefit incorporating NUnit into your product's lifecycle can have.

Downloading and Installing NUnit

NUnit V2.0 is available as an open source product from www.nunit.org. It is a framework implemented entirely in .NET for .NET languages. You can download the binaries and source for free and modify them, if you need to, as long as you adhere to the rules prescribed by nunit.org.

When you download NUnit you can select all of the default configuration options, which will place NUnit in the C:\Program Files\NUnit V2.0\ folder. For our purposes we will assume that NUnit is in the default location; however, you can install it anywhere.

After you have installed NUnit on your workstation you will see an NUnit shortcut on your desktop. This shortcut is for the GUI version. NUnit also ships a console version. We will use the GUI version here, but the console version might be ideal for an automated build, test, deploy cycle.

Building a Test Fixture

The basic process for defining a test is to create a class library, add a reference from that library to the code you want to test, define a class in the test class library, and adorn that class with the TestFixtureAttribute. TestFixtureAttribute is defined in the nunit.framework.dll assembly, so you'll need to add a reference to this assembly (found in the C:\Program Files\NUnit V2.0\bin folder) and add an Imports statement to the source file containing your test class. After these steps NUnit will recognize your class library as one that will contain tests. The next step is to define those tests.
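If you prefer to build the test library from the command line instead of a VS.NET class library project, the compile step might look like the following sketch. The file names (Tests.vb, MyTests.dll) and the assembly name LibraryToTest.dll are assumptions for illustration; the NUnit path is the default install location mentioned above.

vbc /target:library /out:MyTests.dll /reference:"C:\Program Files\NUnit V2.0\bin\nunit.framework.dll" /reference:LibraryToTest.dll Tests.vb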

NUnit offers many tools for testing, but you can get started with a class adorned with the TestFixtureAttribute and just one method in that class adorned with the TestAttribute. Listing 1 offers a very simple class for the purposes of testing, and listing 2 shows you how easy it is to begin using NUnit. (Note: As a general practice you do not need to write your own sort. Any System.Array can be sorted by calling the Array.Sort shared method, which performs a quick sort. In addition, you can define an IComparer that will permit you to define custom comparisons for complex types. The sort method in listing 1 is simply there to demonstrate NUnit.)
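As a quick illustration of the Array.Sort and IComparer approach mentioned in the note above (separate from the numbered listings), here is a minimal sketch; DescendingComparer is a made-up name used only for this example.

Imports System
Imports System.Collections

Public Class DescendingComparer
    Implements IComparer

    ' Reverse the default comparison so Array.Sort produces descending order.
    Public Function Compare(ByVal x As Object, ByVal y As Object) As Integer _
        Implements IComparer.Compare
        Return Comparer.Default.Compare(y, x)
    End Function
End Class

' Usage:
' Dim Values() As Integer = {5, 4, 3, 2, 1}
' Array.Sort(Values)                           ' Ascending sort, no custom code needed.
' Array.Sort(Values, New DescendingComparer()) ' Descending sort via the IComparer.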

Listing 1: Some code we want to test.

Public Class TestMePlease

Public Shared Sub Swap(ByVal Values() As Integer, _
ByVal I As Integer, ByVal J As Integer)

Dim Temp As Integer = Values(I)
Values(I) = Values(J)
Values(J) = Temp

End Sub

Public Shared Sub Sort(ByVal Values() As Integer)
Dim I, J As Integer
For I = Values.GetLowerBound(0) To Values.GetUpperBound(0) - 1
For J = I To Values.GetUpperBound(0)
If (Values(I) > Values(J)) Then
Swap(Values, I, J)
End If
Next
Next
End Sub

Public Shared Sub Dump(ByVal Values() As Integer)
Dim E As IEnumerator = Values.GetEnumerator

While (E.MoveNext)
Console.WriteLine(CType(E.Current, Integer))
End While

Console.WriteLine("press enter")
Console.ReadLine()

End Sub

End Class

Listing 1 contains a simple exchange sort that plays the role of code we want to test. Here is listing 2, showing a simple test for the Sort method.

Listing 2: An NUnit TestFixture that will test our code for us.

Imports NUnit.Framework
Imports LibraryToTest

<TestFixture()> _
Public Class Test

<SetUp()> _
Public Sub Init()
' Initialization code here
End Sub

<TearDown()> _
Public Sub Deinit()
' De-initialization code here
End Sub

<Test()> _
Public Sub TestSortPass()

Dim I() As Integer = {5, 4, 3, 2, 1}
TestMePlease.Sort(I)
Assertion.AssertEquals("Sort passed", 2, I(1))

End Sub

End Class

Listing 2 contains a TestFixture. The class library containing the class in listing 2 is loaded into NUnit. NUnit uses Reflection to find classes adorned with the TestFixtureAttribute, create instances of those classes, and execute methods adorned with special attributes. For each test, the method marked with the SetUpAttribute runs first, followed by a single method marked with the TestAttribute, and finally the method (if any) marked with the TearDownAttribute.

In our example in listing 2 Init, TestSortPass, and Deinit are run in that order. I don't have any initialization code; if you need to create an object for a specific test then the method marked with the SetUpAttribute is a good place to do it. Let's take a look at how we define tests now.

Defining Tests

Test methods—that is, methods that will be called directly by NUnit—are marked with the TestAttribute. Test methods are subroutines that have no parameters. Reflection can invoke methods with parameters and return types, but we are not testing the test method, we are testing the code inside the method. Thus, everything you need to perform a single test should occur inside the test method and the initialization method.

My test method is named TestSortPass. Adding a Pass or Fail suffix is a convention I follow. In TestSortPass I created an array of integers to pass to my sort method. I invoked the Sort method on the code that I am testing and then checked to see if an arbitrary value was in the correct position. The statement containing Assertion.AssertEquals is the code that comes from the NUnit framework.

There are eight overloaded versions of AssertEquals alone. The arguments in the overloaded version I used are a message, the result, and the test value. The statement is understood to mean that I(1) = 2, or that the second position in the array should have the second lowest value.
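A few other common forms look like the following; the values here are arbitrary and only illustrate the argument order (message first, then the expected value, then the actual value). If your NUnit version exposes slightly different overloads, check the nunit.framework documentation.

' A few common Assertion calls; the values are arbitrary.
Assertion.AssertEquals(2, I(1))                         ' Expected and actual, no message.
Assertion.AssertEquals("Sorted", 2, I(1))               ' Message, expected, actual.
Assertion.Assert("Array should not be empty", I.Length > 0)
Assertion.AssertNotNull("Array should exist", I)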

When I load the class library containing the TestFixture into NUnit I will see a list of tests. You can click on a single test or the fixture and click Run to run those tests. Tests with a green circle succeeded and tests with a red circle failed (see figure 1).




Figure 1: The NUnit graphical user interface showing the test results for the TestSortPass test.

Advanced Techniques

Even if all NUnit offered was a simple pass or fail test it would be worth using; however, the NUnit framework is quite extensive. You can use an IgnoreAttribute to mark tests that aren't quite ready; these tests won't run and will be noted in NUnit with a yellow circle adjacent to the test. You can add an ExpectedExceptionAttribute with the Type of an exception to indicate that a test method is expected to throw that exception, and much more.
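To make that concrete, here is a minimal sketch of both attributes. The method names, the reason string, and the ArgumentException are assumptions used only for illustration.

<Test(), Ignore("Not ready yet")> _
Public Sub TestSortLargeArraysPass()
    ' This test is skipped and shows up with a yellow circle in the NUnit GUI.
End Sub

<Test(), ExpectedException(GetType(ArgumentException))> _
Public Sub TestSortRejectsBadInputPass()
    ' The test passes only if an ArgumentException is thrown; the Throw below
    ' simply stands in for a call into the code under test.
    Throw New ArgumentException("Stand-in for the code under test")
End Sub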

With a bit of cleverness you can devise useful tests for very complex applications. On a project I am working on we have even defined tests for ASP.NET. Exploring the source code, the online documentation, and experimentation will help you invent great ways to test your applications.

Understanding the Test Cycle with NUnit

NUnit makes a copy of your class library containing the tests. It then loads the copy into its own application domain. The benefit here is that you can change your code and recompile and NUnit will automatically detect that the test assembly or dependent assemblies have changed, reloading them without necessitating an NUnit restart. This is a nice feature that permits you to seamlessly edit, compile, test, and modify code without stopping, starting, and reloading test assemblies.

In addition to loading the test assembly into its own AppDomain, NUnit looks for a file named testassembly.dll.config. This allows you to associate a configuration file with your test assemblies. For example, if you have special configuration information—a TraceSwitch, for example—you can copy that information into a .config file that will be read and used by NUnit too. Normally only Web.config and application .config files are read by .NET assemblies, but because NUnit supports a .config file you will not have to modify code that is dependent on a .config file for NUnit testing.
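For example, if the library under test reads a TraceSwitch (as described earlier in this document), a configuration file sitting next to the copied test assembly might look like the following; the file name MyTests.dll.config and the switch name are assumptions for illustration.

<!-- MyTests.dll.config, placed in the same folder as MyTests.dll -->
<configuration>
  <system.diagnostics>
    <switches>
      <!-- For a TraceSwitch: 0=Off, 1=Error, 2=Warning, 3=Info, 4=Verbose -->
      <add name="AppTraceSwitch" value="3" />
    </switches>
  </system.diagnostics>
</configuration>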

Summary

NUnit is an open source framework for testing applications written in .NET. Because the cost is zero, there should be little or no objection to incorporating NUnit into your software development cycle. If used effectively, you'll discover that NUnit greatly facilitates unit testing and automated regression testing. Once a test is created as a class library and test fixtures and tests are defined, you'll never have to write that test again. Simply run the test and you will instantly know what works, what was broken, and what just isn't working yet.

Debugging Hosted Assemblies

We are well into winter in the Midwest and Northeast, and that's got me thinking about sunshine and warmth. Sunshine and warmth make me think of Las Vegas. I love Las Vegas and playing Blackjack. Generally my gambling is modest and I am a less than average player, but I am committed to getting better. As an entertainment expense, if I win $500 then I can go to a show, fly an Extra 300L at the Aerobatic Experience in Boulder City, have a great meal, visit the spa and get a massage, or go to the Richard Petty Experience on the casino's dime. So, winning a few hundred bucks means extra fun.

My other motivation for writing this particular article is that I really liked Chris Sells' Wahoo! Game on www.sellsbrothers.com/. Wahoo! is basically Tetris. You can download and play this Windows game from the Internet, and Chris uses it to talk about code access security. I liked the idea and BlackJack, so I thought I'd borrow the sentiment and create a Windows BlackJack game and use it to demonstrate how to debug a hosted assembly.

 

A hosted assembly is a non-executable assembly that runs in another process. COM+ server applications run in dllhost.exe, ASP.NET assemblies run in the aspnet_wp.exe process, and any assembly can be loaded and tested by an NUnit test assembly. For our purposes I elected to use NUnit to demonstrate debugging a hosted assembly. The code we will be debugging is the library for my blackjack game. (As soon as I am happy with the results I will post the game, which you can download for free with source, from http://www.softconcepts.com/BlackJack.)

It is important to note that attaching to a host process and debugging your .NET assemblies is the same regardless of the host. All you need to know is the underlying name of the executable host process. (Examples are: COM+ is dllhost.exe, ASP.NET is aspnet_wp.exe, and NUnit is nunit-gui.exe.)

Building the BlackJack Game

The BlackJack game took several hours to assemble. I understand the game well enough to identify some basic classes without a lot of analysis or fancy modeling; in short, I hacked the game together and refactored until I was reasonably happy with the result (see figure 1).




Figure 1: With a little practice my game might improve.

If you are familiar with card games in general and Blackjack in particular, then you won't be surprised by some of the classes implemented to support the game. Some of the classes we will need to test include BlackJack, Cards, Dealer, DealerHand, Deck, Decks, Hand, Hints, Player, PlayerCollection, PlayerHand, PlayerHandCollection, Shuffler, and Suits. (The source listing is too big to provide here, but it will be available online at http://www.softconcepts.com/BlackJack shortly.)

To permit the game to be played as a console game, Windows game, perhaps a CE game, and eventually a Web game, I implemented most of the classes in a separate class library. It is this class library (as well as the clients, but not for our purposes) that needs to be tested in a host and that we will use to demonstrate host debugging in Visual Studio .NET.

Whereas some people like to build software starting with the presentation layer and finishing with the object layer, I generally build software from the object layer to the presentation layer. I start with some core classes at the lowest level of complexity and then layer complexity on top, testing each layer as it is added. To this end it seemed natural to start with a Card class and specific classes of cards. For example, in BlackJack an Ace can have the value of 1 or 11, so it seemed suitable to subclass Card to define an Ace class as well as a class for each face value. Very quickly there were 15 classes to test: Card, Ace through King, and Deck. Because these are intrinsic classes, it is useful to ensure they function correctly before layering on complexity. Listing 1 shows the supporting enumerations, and Listing 2 contains the core Card base class and the Ace class.

Listing 1: Enumerations for describing playing cards.

Imports System

Public Enum Face
One
Two
Three
Four
Five
Six
Seven
Eight
Nine
Ten
Jack
Queen
King
End Enum

Public Enum Suit
Diamond
Club
Heart
Spade
End Enum

Listing 2: The Card base class and an Ace subclass.

Imports System
Imports System.Drawing

Public MustInherit Class Card
Private FCardFace As Face
Private FHighFaceValue As Integer
Private FLowFaceValue As Integer
Private FCardSuit As Suit

#Region "External methods and related fields"
Private width As Integer = 0
Private height As Integer = 0

Declare Function cdtInit Lib "cards.dll" (ByRef width As Integer, _
ByRef height As Integer) As Boolean
Declare Function cdtDrawExt Lib "cards.dll" (ByVal hdc As IntPtr, _
ByVal x As Integer, ByVal y As Integer, ByVal dx As Integer, _
ByVal dy As Integer, ByVal card As Integer, _
ByVal suit As Integer, ByVal color As Long) As Boolean
Declare Sub cdtTerm Lib "cards.dll" ()
#End Region

Public Sub New(ByVal lowValue As Integer, ByVal highValue As Integer, _
ByVal cardSuit As Suit, ByVal cardFace As face)

cdtInit(width, height)
FHighFaceValue = highValue
FLowFaceValue = lowValue
FCardSuit = cardSuit
FCardFace = cardFace

End Sub

Public Function GetLowFacevalue() As Integer
Return FLowFaceValue
End Function

Public Function GetHighFaceValue() As Integer
Return FHighFaceValue
End Function

Public Function GetFaceValue() As Integer
Return FLowFaceValue
End Function

Public Property CardSuit() As Suit
Get
Return FCardSuit
End Get
Set(ByVal Value As Suit)
FCardSuit = Value
End Set
End Property

' TODO: Convert various paint styles to interface
Public Sub PaintTextFace()
Console.WriteLine(GetCardValue())
End Sub

Public Sub PaintGraphicFace(ByVal g As Graphics, ByVal x As Integer, _
ByVal y As Integer, ByVal dx As Integer, ByVal dy As Integer)

Dim hdc As IntPtr = g.GetHdc()
Try
Dim Card As Integer = CType(Me.FCardFace, Integer)
cdtDrawExt(hdc, x, y, dx, dy, Card, 0, 0)
Finally
' If IntelliSense doesn't show this method, unhide advanced
' members in Tools|Options.
g.ReleaseHdc(hdc)
End Try
End Sub

Public Sub PaintGraphicBack(ByVal g As Graphics, ByVal x As Integer, _
ByVal y As Integer, ByVal dx As Integer, ByVal dy As Integer)

Dim hdc As IntPtr = g.GetHdc()
Try
' TODO: Make card style (hardcoded 61) a configurable property
cdtDrawExt(hdc, x, y, dx, dy, 61, 0, 0)
Finally
g.ReleaseHdc(hdc)
End Try

End Sub

Protected Overridable Function GetTextValue() As String
Return GetLowFacevalue().ToString()
End Function

Protected Function GetTextSuit() As String
Return FCardSuit.ToString().Chars(0).ToString
End Function

Public Overridable Function GetCardValue() As String
Return String.Format("{0}{1}", GetTextValue(), GetTextSuit())
End Function
End Class

Public Class Ace
Inherits Card

Public Sub New(ByVal cardSuit As Suit)
MyBase.New(1, 11, cardSuit, Face.One)
End Sub

Protected Overrides Function GetTextValue() As String
Return "A"
End Function
End Class




Figure 2: Show advanced methods that are concealed in Intellisense by default; an example of an advanced method is the Graphics.ReleaseHdc method.


The enumerations Face and Suit are strongly typed enumerations that describe the value and suit of a card. The Card class stores properties such as the face and suit values as well as the underlying value of the card. In addition, I have declared some API methods from cards.dll, which already contains capabilities for drawing graphic playing cards. (Cards.dll ships with Windows and supports games such as Solitaire, sol.exe.)

Some changes I'd like to see before this code goes live are to permit the dynamic configuration of the back of the playing card—it is hard coded to 61, a value described in the documentation for cards.dll available with a Google search—and convert the Paint methods into overloaded methods or specific implementations of a graphic and text card interface.
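As a sketch of the first of those changes, the hard-coded back could be lifted into a property on the Card class from listing 2. CardBackStyle is a name I made up for this sketch, and it relies on the cdtDrawExt declaration already shown in the class.

' A possible addition to the Card class (a sketch, not the final design).
Private FCardBackStyle As Integer = 61   ' Default back style, per the cards.dll documentation.

Public Property CardBackStyle() As Integer
    Get
        Return FCardBackStyle
    End Get
    Set(ByVal Value As Integer)
        FCardBackStyle = Value
    End Set
End Property

' PaintGraphicBack would then pass the property value instead of the literal 61:
'     cdtDrawExt(hdc, x, y, dx, dy, FCardBackStyle, 0, 0)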

Generally, when I have written as much code as shown in listings 1 and 2, I begin testing.

Defining the NUnit Tests

To test our code we can download the superlative NUnit Version 2.1 testing software from www.nunit.org. (Refer to last month's article on www.codeguru.com for more information on debugging with NUnit.)

NUnit will play the role of our testing host. Listing 3 contains some NUnit tests that we can very quickly assemble to begin scaffolding a suite of tests in conjunction with our application development.

Listing 3: NUnit tests for our Card and Ace classes.

Imports NUnit.Framework
Imports BlackJackLibVB
Imports System.Windows.Forms

<TestFixture()> _
Public Class BlackJackTests

<Test()> _
Public Sub CardSuitTest()
Dim aceOfClubs As Ace = New Ace(Suit.Club)
Console.WriteLine(aceOfClubs.GetCardValue())
Assertion.AssertEquals("Expected 'AC'", aceOfClubs.GetCardValue(), "AC")
End Sub

<Test()> _
Public Sub CardAceLowValueTest()
Dim a As Ace = New Ace(Suit.Heart)
Console.WriteLine(a.GetCardValue())
Assertion.AssertEquals("Expected 1", 1, a.GetLowFaceValue())
End Sub

<Test()> _
Public Sub CardAceHighValueTest()
Dim a As Ace = New Ace(Suit.Heart)
Console.WriteLine(a.GetCardValue())
Assertion.AssertEquals("Expected 11", 11, a.GetHighFaceValue())
End Sub

Private spade As Ace
<Test()> _
Public Sub GraphicPaintAceTest()
spade = New Ace(Suit.Spade)
Dim F As Form = New Form
AddHandler F.Paint, AddressOf OnPaint
F.ShowDialog()
Assertion.Assert(MsgBox("Did you see the ace of spades?", _
MsgBoxStyle.Question Or MsgBoxStyle.YesNo, "Ace of Spades") _
= MsgBoxResult.Yes)

End Sub

Private Sub OnPaint(ByVal sender As Object, ByVal e As PaintEventArgs)
If (spade Is Nothing) Then Return
spade.PaintGraphicFace(e.Graphics, 0, 0, 75, 100)
End Sub

End Class

NUnit tests can be as advanced or as simple as you like. The real benefit of using NUnit is that it was designed to work with .NET specifically, uses a simple green for pass and red for fail visual metaphor, and offers a consistent predetermined means of defining, running, and evaluating tests.

To demonstrate, I implemented some simple tests that evaluate the text face value of a card and one test that displays the graphic representation of the Ace of Spades. Figure 3 shows NUnit running in the background with the dynamically created form and the Ace card shown in the foreground.




Figure 3: The paint test for the ace of spades.

NUnit can be used quite simply as a pass or fail testing tool. Generally, you will need a testing scaffold that permits you to interact with your code while it's running; NUnit can be used for this too.

Attaching Visual Studio .NET to the Host Process

As the king of your demesne, you can elect to step through your code for any reason. I wanted to work on the precise positioning of the playing cards. Pretending that the Ace of Spades didn't paint where I anticipated, we can debug the BlackJackLibVB library, while it is running in its host, from within Visual Studio .NET. To do this we need to attach to the host process, NUnit. To debug a library running in its host, follow these steps:


  1. Open the library project you would like to test in Visual Studio .NET
  2. Run the host process that loads the library you will be testing. (In the example we need to run nunit-gui.exe and load the BlackJackLibVB.dll as shown in figure 3.)
  3. Back in Visual Studio .NET select Debug|Processes
  4. In the list of Available Processes find the nunit-gui.exe process hosting the BlackJackLibVB.dll as shown in figure 4
  5. Click the hosting process and click Attach
  6. In the Attach to Process dialog (see figure 5) check the Common Language Runtime program type and click OK
  7. Click Close to close the Processes dialog




Figure 4: Attached to the host process hosting your library.


Figure 5: We are debugging .NET code so select the Common Language Runtime program type.

After you click Close you will see the dependent assemblies loaded into the integrated debugger in the Debug view of the Output window. Your library code is now running under the integrated debugger.

Debugging the BlackJackLibVB

Debugging a library this way is identical to debugging an executable once the library is loaded via the host process. To debug the BlackJackLibVB, set some breakpoints in the code at areas in which you are interested and run the tests. When the debugger hits one of your breakpoints, it will suspend code execution and you can take over. All of the great features you are used to in Visual Studio .NET are now at your fingertips when debugging library assemblies.

When You are Finished Debugging

When you have finished debugging your hosted assembly you can kill the host process or detach the debugger from the host process by selecting Debug|Processes, clicking the attached process and clicking Detach (see figure 4).

If you are debugging and NUnit is your host you have the luxury of detaching the debugger from NUnit, modifying your code, rebuilding, and re-attaching the debugger to NUnit all without shutting down VS.NET or NUnit. Collectively, these tools will yield some powerful results.

Summary

Some programmers aren't building rich client applications; some of us, sometimes, are building frameworks of our own. Instead of spending a lot of time building test applications, use the host your assembly will really run in and attach VS.NET to that process.

Using the VS.NET Debug|Processes dialog to attach to a running host process permits you to use the powerful integrated debugger in VS.NET without a lot of extra effort.