Monday, March 16, 2015

Introducing .NET C# Inversion Of Control and Microsoft Unity Hands-On Lab

Introduction

During the past few months we introduced and heavily extended the usage of the Microsoft Unity IoC container in our code base as part of an effort to make the code more loosely coupled.

As a result of those changes we now rely, even more than before, on Inversion of Control, or more specifically Dependency Injection.

Thus both Microsoft Unity and IoC/DI are now a crucial part of our toolbox. In order to bring everybody on our team up to speed, and to have training material for newcomers, we decided to create a simple set of training materials.

After a brief discussion within the team we agreed that the best way to handle it would be to:

  • Collect some solid resources describing IoC/DI.
    • Martin Fowler is obviously the first choice - though the differences between IoC and DI are better explained in other resources :-).
  • Provide a hands-on lab project which will:
    • Cover the specifics of Inversion of Control/Dependency Injection.
    • Cover the Microsoft Unity container.
    • Serve as self-training material.
  • Publish it on GitHub under the MIT license.
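
To give a flavor of what the lab covers, below is a minimal sketch of constructor injection with Microsoft Unity. The interfaces and classes are made up for this illustration only and are not taken from the lab projects:

using System;
using Microsoft.Practices.Unity;

// Hypothetical abstraction and implementation used only for this illustration.
public interface IGreetingService
{
    string GetGreeting(string name);
}

public class EnglishGreetingService : IGreetingService
{
    public string GetGreeting(string name)
    {
        return "Hello, " + name;
    }
}

// The consumer declares its dependency via the constructor (Dependency Injection).
public class GreetingPrinter
{
    private readonly IGreetingService _greetingService;

    public GreetingPrinter(IGreetingService greetingService)
    {
        _greetingService = greetingService;
    }

    public void Print(string name)
    {
        Console.WriteLine(_greetingService.GetGreeting(name));
    }
}

public static class Program
{
    public static void Main()
    {
        // The container, not the consumer, wires the object graph (Inversion of Control).
        using (var container = new UnityContainer())
        {
            container.RegisterType<IGreetingService, EnglishGreetingService>();

            var printer = container.Resolve<GreetingPrinter>();
            printer.Print("world");
        }
    }
}

Once the interface-to-implementation mapping is registered, the container resolves the whole constructor-injected object graph for you - which is the coupling problem the lab materials explore in more depth.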

Target audience

.NET Software developers/engineers and architects who:

  • Would like to and are willing to learn about IoC/DI.
  • Are familiar with IoC/DI but would like to learn about the Microsoft Unity IoC container.
  • Would like to learn about the possible challenges which usage of MS Unity can bring.

Training materials

Together with my colleagues I prepared a set of projects which allows everybody to play with all of this on reasonably sized projects.

A brief introduction can be found here.

Github projects

If you are familiar with GitHub, or if you would like to use this as an opportunity to learn more about it, you can just fork/clone the repositories below.

Direct access

In case you do not like Git/GitHub, you can use the direct links below to get the latest version of the training projects as well as the sample solutions in the form of ZIP packages:

Contributions

If you find something which needs to be fixed, or if you have an interesting sample task, just send it as a GitHub pull request - we accept contributions under the MIT license.

Thanks to everybody who has already contributed their time, either in the form of code or advice :-).

Tuesday, March 10, 2015

Debugging T-SQL Stored Procedure Invoked From NUnit Tests In Visual Studio 2013 Debugger

Recently I had to write quite a few interesting stored procedures for MSSQL Server which are covered by unit tests invoked as part of a continuous integration build in TeamCity.

Setting up the data and parameters for a stored procedure takes some time and there are many scenarios, so I started looking for ways:

  • How to debug stored procedures using the existing infrastructure, without having to extract everything out and use a separate debugger in SQL Server Management Studio.
  • How to stub some of the data so that the complex parts of the queries can be easily verified.

In the end I got debugging working with the following setup:

  • Stored procedures written in T-SQL for MSSQL.
  • Each stored procedure is covered by unit tests written in NUnit.
    • Thanks to a tip from my colleague and MSSQL guru Lubos, I was able to quickly set up SQL Server snapshots so that the database can be quickly reverted to its initial state (a minimal sketch of this follows after this list).
    • Lubos also proposed a very simple way to stub some data in the procedures.
  • In order to quickly check what is going on inside a stored procedure, I use the Visual Studio 2013 debugger, including its ability to step into the stored procedure.
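
For reference, the snapshot-based revert mentioned above can be wrapped in a small helper like the sketch below. This is only a minimal sketch: the database name (MyDb), logical data file name (MyDb_Data) and snapshot file path are placeholders which must match your server, and it assumes the snapshot shown is the only one for the database:

using System.Data.SqlClient;

// Minimal sketch of a snapshot-based revert with placeholder names.
public static class DatabaseSnapshot
{
    public static void Create(string masterConnectionString)
    {
        Execute(masterConnectionString,
            @"CREATE DATABASE MyDb_Snapshot
              ON (NAME = MyDb_Data, FILENAME = 'C:\Temp\MyDb_Snapshot.ss')
              AS SNAPSHOT OF MyDb");
    }

    public static void Revert(string masterConnectionString)
    {
        // Kick out other connections, revert to the snapshot and re-open the database.
        Execute(masterConnectionString, "ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE");
        Execute(masterConnectionString, "RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snapshot'");
        Execute(masterConnectionString, "ALTER DATABASE MyDb SET MULTI_USER");
    }

    private static void Execute(string connectionString, string sql)
    {
        // The commands must run against the master database, not against MyDb itself.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}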

Using stubs for data used inside stored procedures

  • The motivation here is that it is not always simple, or even practical, to set up all the required data directly in the database.
    • The obvious downside is that since you are about to alter the stored procedure, you have to be very careful.
  • My colleague proposed a very simple way for this purpose which seems to work:
    • Before running the tests, take a database snapshot so you can easily revert back.
    • Inside the procedures, use markers (for example as comments) which can be quickly identified so that the content between them can be replaced - for example:
-- [Stub1]
SELECT * FROM [MyInvoices]
-- [#Stub1]
  • Next, before you execute the stored procedure, fetch its source and replace the code between the markers with a select from a data stub (for example a temporary table) - a rough C# sketch of this replacement follows after this list:
-- [Stub1]
SELECT * FROM #MyInvoices
-- [#Stub1]
  • Before you exercise the stored procedure, simply populate the #MyInvoices temporary table and run it.
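
A rough C# sketch of the marker replacement could look as follows. It assumes the comment-style markers shown above and uses OBJECT_DEFINITION to fetch the procedure body; the procedure name passed in is a placeholder:

using System;
using System.Data.SqlClient;
using System.Text.RegularExpressions;

// Minimal sketch only - the marker format matches the comment markers shown above.
public static class StoredProcedureStubber
{
    // Replaces everything between -- [Stub1] and -- [#Stub1] in the given procedure
    // with the supplied stub statement, then re-creates the procedure via ALTER.
    // The connection is expected to be open and pointing to the tested database.
    public static void ApplyStub(SqlConnection connection, string procedureName, string stubSql)
    {
        string source;
        using (var command = new SqlCommand("SELECT OBJECT_DEFINITION(OBJECT_ID(@name))", connection))
        {
            command.Parameters.AddWithValue("@name", procedureName);
            source = (string)command.ExecuteScalar();
        }

        // Replace the block between the markers while keeping the markers in place.
        string patched = Regex.Replace(
            source,
            @"-- \[Stub1\].*?-- \[#Stub1\]",
            "-- [Stub1]" + Environment.NewLine + stubSql + Environment.NewLine + "-- [#Stub1]",
            RegexOptions.Singleline);

        // Turn CREATE PROCEDURE (or CREATE PROC) into ALTER PROCEDURE and execute it.
        patched = Regex.Replace(patched, @"\bCREATE\s+PROC(EDURE)?\b", "ALTER PROCEDURE", RegexOptions.IgnoreCase);

        using (var alter = new SqlCommand(patched, connection))
        {
            alter.ExecuteNonQuery();
        }
    }
}

Before exercising the procedure you would call, for example, StoredProcedureStubber.ApplyStub(connection, "dbo.MyProcedure", "SELECT * FROM #MyInvoices") on the same connection that creates and fills the #MyInvoices temporary table; reverting to the snapshot afterwards restores the original procedure body.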

How to enable T-SQL debugging in Visual Studio 2013

This was the trickiest part of the whole procedure and it may be specific to my setup (MSSQL 2008 R2, VS2013).

  • As a prerequisite, Application debugging and SQL/CLR debugging must be enabled for the SQL Server in the SQL Server Object Explorer.
  • There are two ways to get to the SQL Server Object Explorer:
    • Directly open SQL Server Object Explorer via Visual Studio menu VIEW:
      Open SQL Server Object Explorer in VS 2013 menu
    • Alternatively use Server Explorer:
      • First add a connection to your database.
      • Then right-click the registered database and select Browse in SQL Server Object Explorer:
        Open SQL Server Object Explorer from Server Explorer in VS2013
  • Once you get into the SQL Server Object Explorer, enable both debugging options as shown in the picture below:
    Enable debugging in SQL Server Object Explorer from VS2013

Running tests

  • Running tests just for verification purposes is very simple and basically any NUnit runner can be used.
  • In our case, the JetBrains ReSharper test runner serves this standard purpose very well.
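
For illustration, a test exercising a stored procedure can be as simple as the hedged sketch below (the connection string, procedure name, parameter and expected result are placeholders and not taken from our code base). The ExecuteScalar() call is also the natural place for the .NET breakpoint used in the next section:

using System;
using System.Data;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class MyStoredProcedureTests
{
    // Placeholder connection string - adjust to your test database.
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

    [Test]
    public void ComputeInvoiceTotal_ReturnsNonNegativeValue()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("dbo.ComputeInvoiceTotal", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", 42);

            connection.Open();

            // This is also where the .NET breakpoint from the next section belongs.
            var total = Convert.ToDecimal(command.ExecuteScalar());

            Assert.That(total, Is.GreaterThanOrEqualTo(0m));
        }
    }
}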

Debugging tests

  • Unfortunately, debugging didn’t work with the built-in R# test runner.
  • Instead I use the NUnit-x86.exe runner (I simply needed to force the process bitness to 32 bits, but I suppose that NUnit.exe will work as well):
    • Load the test assembly into the NUnit runner.
    • Attach the Visual Studio 2013 debugger to the running process.
    • The important part here is to have both Managed code and T-SQL code debugging enabled prior to attaching to the NUnit-x86.exe process:
      Attaching to NUnit with enabled T-SQL debugging
    • Now set a breakpoint in the .NET code just around the code which is responsible for invoking the stored procedure you are interested in, for example around the SqlCommand.ExecuteReader() or ExecuteScalar() call.
    • Run the unit test from the NUnit runner and let it hit the breakpoint in Visual Studio.
    • Now, from the SQL Server Object Explorer, open the body of the stored procedure (just double-click it).
    • Set a breakpoint inside the procedure.
    • Then step through the .NET code which is about to invoke the procedure.
    • If everything works well, you are now inside the stored procedure and you can debug it.

Watching data inside the stored procedure

  • You can easily watch the content of any variable inside the stored procedure.
  • I found a very simple trick which can also be used to watch the content of temporary tables and table variables.

    • At the place where you would check the content, add the following statement (adjusted for the correct table/variable name):
    DECLARE @v XML = (SELECT * FROM #Parameters FOR XML AUTO, ROOT('MyRoot'))
    • Once you hit the statement in the debugger, you can easily watch the content of @v and visualize it, for example via the XML Visualizer.

Wednesday, January 7, 2015

How to find (and fix) files with missing CR (0xd) in Visual Studio

Motivation

On our project the primary source control is TFS. For larger features we have recently started using a Git repository with the master branch automatically synchronized from TFS.
When resolving merge conflicts it happens from time to time that line endings get corrupted somehow (there are plenty of settings in the Git client and the related diff/merge tools), which results in .cs files with mixed CR+LF and LF line endings.

How to find files with missing CR (0xd)

It is pretty simple to identify all the corrupted files using a regular expression in the Visual Studio ‘Find in Files’ dialog - the pattern below matches any line that ends with a bare LF which is not preceded by a CR:

^(.*)[^\r]\n$

Dialog example:

visual_studio_search_for_files_containing_lf.png

How to fix files

Files can be easily fixed by reformatting:

  • Just press CTRL+A to select all the text inside the file.
  • Then press CTRL+K, CTRL+F to reformat the selection.
  • After fixing the issues you can easily verify the files using the above search pattern.
    • It happens from time to time that the last empty line in the file is not corrected. The easiest way to fix it is to remove it and perhaps add it again.
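
If there are too many files to fix one by one, the same check can be automated. Below is a small hedged C# sketch (the root directory and the *.cs mask are assumptions); it rewrites files in place, so run it only on a clean working copy:

using System.IO;
using System.Text.RegularExpressions;

// Minimal sketch - the *.cs mask and the root directory are assumptions.
public static class LineEndingFixer
{
    public static void Fix(string rootDirectory)
    {
        foreach (var path in Directory.EnumerateFiles(rootDirectory, "*.cs", SearchOption.AllDirectories))
        {
            var text = File.ReadAllText(path);

            // Replace any bare LF which is not preceded by CR with CR+LF.
            var fixedText = Regex.Replace(text, @"(?<!\r)\n", "\r\n");

            if (fixedText != text)
            {
                File.WriteAllText(path, fixedText);
            }
        }
    }
}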

Thursday, December 18, 2014

Visual Studio 2012 debugger does not break after attaching to C#/.NET process

From time to time I had issues debugging C#/.NET applications in Visual Studio 2012 after attaching the Visual Studio 2012 debugger to a process.

The symptoms were that the debugger attached to the process, but neither ‘Break All’ nor any preset breakpoint worked.

For some time I thought that the Visual Studio installation had somehow become corrupted on my system, but since I was always able to work around it via Debug.Assert() or Debugger.Break() calls put directly into the code, I never had the motivation to really look for a solution or reinstall Visual Studio.

Today I really wanted to attach to a process to see what was going on inside, and the issue happened again.

After a bit of playing I realized that when the debugger works correctly after attaching, the Visual Studio ‘Attach to Process’ dialog looks like this (see the ‘Attach to’ field):
visual_studio_debugger_does_recognize_process_type.png

For my process it didn’t work this time, and the ‘Attach to Process’ dialog looked like this (again see the ‘Attach to’ field):
visual_studio_debugger_does_not_recognize_process_type.png

Apparently Visual Studio in some cases does not properly detect the type of the process and does not use the correct debugger settings.

In order to solve my issue I finally found the ‘Select…’ button next to the ‘Attach to’ field, where you can disable automatic detection of the process type and manually select a different one.


After selecting ‘Managed (v4.5, v4.0)’ and attaching the debugger to the process again, everything worked well.

Wednesday, December 10, 2014

DbKeeperNet - Automated database schema maintenance in .NET/C#

An article describing a simple .NET library which keeps your database schema up-to-date.

Introduction

Each project using database access has to solve how to distribute the database schema and how to keep it up-to-date after upgrades. I have solved this problem multiple times, so I decided to write a common, easy to use, and freely available library. The result is the DbKeeperNet library, which is pure ADO.NET (no dependency on Entity Framework).

This article will briefly show how to use the DbKeeperNet library to fulfill this task. The library is designed to be extensible, with planned support for any database engine.

Supported Features

  • Very simple usage.
  • Database commands are kept in a simple, structured XML file.
  • Each upgrade step is executed in a separate transaction (if supported by the database service). In the case of failure, all further steps are prohibited.
  • Rich set of built-in preconditions used to evaluate whether an update should or shouldn’t be executed.
  • Support for an unlimited and customizable list of database engines.
  • Within a single update step, alternative SQL statements may be provided for each database engine type if needed.
  • Support for custom preconditions.
  • Support for custom in-code upgrade steps (allows complex data transformations to be done in code instead of SQL).
  • DbKeeperNet provides deep logging of what is currently happening. Diagnostic output may be redirected through the standard .NET System.Diagnostics.Trace class or the System.Diagnostics.TraceSource class, or to a custom plug-in, allowing integration to an already existing application diagnostics framework.
  • XML update script structure is strictly defined by the XSD schema which can be used in any XML editor with auto-completion (intellisense).
  • Support for the Log4Net logging framework.
  • Support for MySQL Connect .NET.
  • Support for PostgreSQL.
  • Support for SQLite.
  • Support for Oracle XE.
  • Support for Firebird.
  • Localizable log messages.
  • Support for customizable script sources (built-in: a disk file and an embedded assembly resource).

Background

There are two basic principles on how to get your application’s database schema up-to-date:

  • Before each change, check directly in the database whether the change was already made (such as asking the database whether a table already exists).
  • Have a kind of database schema versioning table and record the current schema version.

DbKeeperNet supports both of these principles; however, I suggest using the second one.

DbKeeperNet’s design for this second principle is based on a unique identifier for each update step. The database service implementation simply keeps track of the already executed steps (the concrete implementation strongly depends on the database service used). This allows you to very simply search the database and check which steps were already executed.
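
For comparison, the first principle usually boils down to ad-hoc existence checks like the hedged sketch below (MSSQL flavor, the table name is a placeholder); the second principle replaces such checks with the unique step identifiers described above:

using System.Data.SqlClient;

public static class SchemaCheck
{
    // Principle 1: ask the database directly whether the table already exists
    // before applying the change (MSSQL flavor, the table name is a placeholder).
    public static bool TableExists(SqlConnection connection, string tableName)
    {
        const string sql =
            "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @name";

        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@name", tableName);
            return (int)command.ExecuteScalar() > 0;
        }
    }
}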

Using DbKeeperNet

The code snippets below are taken from the DbKeeperNet.SimpleDemo project which is part of the source code. If you want to directly execute the demo project, you need SQL Server 2005 Express Edition installed, or you must change the connection string in App.Config.

For more complex scenarios, you can check the DbKeeperNet.ComplexDemo project (there is an example of a custom step implementation, split XML scripts, etc.).

My favorite way to implement an upgrade script is by using an XML file stored as an embedded resource in an assembly. So, let’s prepare a simple upgrade script with alternative statements per database engine (you can find it in the DbKeeperNet.SimpleDemo project as the file DatabaseSetup.xml):

<?xml version="1.0" encoding="utf-8" ?>
<upd:Updates xmlns:upd="http://code.google.com/p/dbkeepernet/Updates-1.0.xsd"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://code.google.com/p/dbkeepernet/Updates-1.0.xsd Updates-1.0.xsd"
             AssemblyName="DbKeeperNet.SimpleDemo">
  <!-- The default way to check whether to execute an update step or not -->
  <DefaultPreconditions>
    <!-- We will use the step information saving strategy -->
    <Precondition FriendlyName="Update step executed" 
                Precondition="StepNotExecuted"/>
  </DefaultPreconditions>

  <Update Version="1.00">
    <UpdateStep xsi:type="upd:UpdateDbStepType" 
    FriendlyName="Create table DbKeeperNet_SimpleDemo" Id="1">
      <!-- The DbType attribute may be omitted - it then defaults to the value all,
           which means all database types -->
      <AlternativeStatement DbType="MsSql">
        <![CDATA[
          CREATE TABLE DbKeeperNet_SimpleDemo
          (
          id int identity(1, 1) not null,
          name nvarchar(32),
          constraint PK_DbKeeperNet_SimpleDemo primary key clustered (id)
          )
        ]]>
      </AlternativeStatement>
    </UpdateStep>
    <UpdateStep xsi:type="upd:UpdateDbStepType" 
    FriendlyName="Fill table DbKeeperNet_SimpleDemo" Id="2">
      <AlternativeStatement DbType="MsSql">
        <![CDATA[
          insert into DbKeeperNet_SimpleDemo(name) values('First value');
          insert into DbKeeperNet_SimpleDemo(name) values('Second value');
        ]]>
      </AlternativeStatement>
    </UpdateStep>
  </Update>
</upd:Updates>

Now, we will implement the necessary steps for the code execution:

// Perform all configured database updates
using (UpdateContext context = new UpdateContext())
{
    context.LoadExtensions();
    context.InitializeDatabaseService("default");

    Updater updater = new Updater(context);
    updater.ExecuteXmlFromConfig();
}
// The code above is all that is required for the installation.
// Now just print all inserted rows to the console
// (for demonstration purposes only)
ConnectionStringSettings connectString = 
    ConfigurationManager.ConnectionStrings["default"];

using (SqlConnection connection = new SqlConnection(connectString.ConnectionString))
{
    connection.Open();

    SqlCommand cmd = connection.CreateCommand();
    cmd.CommandText = "select * from DbKeeperNet_SimpleDemo";
    SqlDataReader reader = cmd.ExecuteReader();
    while (reader.Read())
        Console.WriteLine("{0}: {1}", reader[0], reader[1]);
}

And finally, the setup configuration in the App.config or Web.Config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="dbkeeper.net" 
    type="DbKeeperNet.Engine.DbKeeperNetConfigurationSection,DbKeeperNet.Engine"/>
  </configSections>
  <dbkeeper.net loggingService="fx">
    <updateScripts>
      <add provider="asm" location="DbKeeperNet.SimpleDemo.DatabaseSetup.xml,DbKeeperNet.SimpleDemo" />
      <add provider="disk" location="c:\diskpath\DatabaseSetup.xml" />
    </updateScripts>
    <databaseServiceMappings>
      <add connectString="default" databaseService="MsSql" />
    </databaseServiceMappings>
  </dbkeeper.net>
  <connectionStrings>
    <add name="default" 
            connectionString="Data Source=.\SQLEXPRESS;
        AttachDbFilename='|DataDirectory|\DbKeeperNetSimpleDemo.mdf';
        Integrated Security=True;Connect Timeout=30;User Instance=True" 
            providerName="System.Data.SqlClient"/>
  </connectionStrings>
  <system.diagnostics>
    <!-- uncomment this for TraceSource class logger (fxts)-->
    <!--
    <sources>
      <source name="DbKeeperNet" switchName="DbKeeperNet">
        <listeners>
          <add name="file" />
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="DbKeeperNet" value="Verbose"/>
    </switches>
    <sharedListeners>
      <add name="file" initializeData="dbkeepernetts.log" 
        type="System.Diagnostics.TextWriterTraceListener" />
    </sharedListeners>
    -->
    <trace autoflush="true">
      <!-- uncomment this for .NET Trace class logging (fx logger)-->
      <listeners>
        <add name="file" initializeData="dbkeepernet.log" 
        type="System.Diagnostics.TextWriterTraceListener" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>

And that is all - all database changes are executed automatically, and only if they were not already executed.

Writing Database Update Scripts

  • If you are using App.Config to specify the executed XML scripts, all configured scripts are executed in the same order as they are defined in the configuration file. Also, the content of each XML file is processed in exactly the same order as it is written.
  • The AssemblyName attribute of the Updates element is in fact a namespace in which each Version and step Id must be unique. If you want to logically divide a single script into multiple files, you can use the same value in all of the scripts.
  • The Version attribute of the Update element is intended to be used as a marker of the database schema version. I suggest using a unique value for each distributed build that changes the database schema (this value can be the same as the assembly version).
  • The Id attribute of the UpdateStep element must be unique inside each update version.
  • Never change the AssemblyName, Version, and step Id values after you deploy the application, unless you are absolutely sure about what you are doing.

Project location

If you have any questions, support requests, patches, or your own extensions, or if you are looking for a binary package, documentation, or the latest source files, the project is hosted at http://github.com/DbKeeperNet/DbKeeperNet.

Alternatively, you can reference DbKeeperNet as NuGet packages.

Conclusion

This article shows only the basics from the set of supported functions. More information and examples of upgrade scripts can be found in the DbKeeperNet source files or in the unit tests.

History

  • 26th August, 2014: Update GitHub project reference
  • 17th July, 2014: Project moved to GitHub
  • 23rd September, 2012: Feature List updated, fixed App.Config example, update source package 
  • 4th June, 2010: Feature list updated, new source package, updated examples according to new version.
  • 15th November, 2009: Feature list updated, new source package.
  • 4th September, 2009: Original article submitted.