Monday, July 19, 2010

Bing! Are you kidding me!

image

Seriously?? If things like this keep happening, Bing! will never become the search engine I wish it would be.

Friday, July 16, 2010

WPF Datagrid – Load and Performance

This post is not about raw performance numbers for the WPF DataGrid, but simply about what you should be aware of to make it perform well. I was not motivated enough to use a profiler to show realistic numbers, so I used the Stopwatch class wherever applicable. This post does not go into techniques for handling large amounts of data, such as paging or how to implement it, but focuses on how to make the DataGrid work well with large data sets.
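The timing pattern used throughout this post is roughly the following (a minimal sketch; the measured operation is just a placeholder):

```csharp
using System;
using System.Diagnostics;

class TimingSketch
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        // ... the operation being measured, e.g. loading items into the grid ...
        sw.Stop();
        Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
    }
}
```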

Here is the C# class that generates the data I want to load the Datagrid with.

public class DataItem
{
    public long Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public long Age { get; set; }
    public string City { get; set; }
    public string Designation { get; set; }
    public string Department { get; set; }
}

public static class DataGenerator
{
    private static int _next = 1;
    private static readonly Random rand = new Random();

    public static IEnumerable<DataItem> GetData(int count)
    {
        for (var i = 0; i < count; i++)
        {
            string nextRandomString = NextRandomString(30);
            yield return new DataItem
            {
                Age = rand.Next(100),
                City = nextRandomString,
                Department = nextRandomString,
                Designation = nextRandomString,
                FirstName = nextRandomString,
                LastName = nextRandomString,
                Id = _next++
            };
        }
    }

    private static string NextRandomString(int size)
    {
        var bytes = new byte[size];
        rand.NextBytes(bytes);
        return Encoding.UTF8.GetString(bytes);
    }
}

My ViewModel has been defined as shown below.

public class MainWindowViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private void Notify(string propName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propName));
    }

    private Dispatcher _current;

    public MainWindowViewModel()
    {
        _current = Dispatcher.CurrentDispatcher;
        DataSize = 50;
        EnableGrid = true;
        _data = new ObservableCollection<DataItem>();
    }

    private int _dataSize;
    public int DataSize
    {
        get { return _dataSize; }
        set
        {
            LoadData(value - _dataSize);
            _dataSize = value;
            Notify("DataSize");
        }
    }

    private ObservableCollection<DataItem> _data;
    public ObservableCollection<DataItem> Data
    {
        get { return _data; }
        set
        {
            _data = value;
            Notify("Data");
        }
    }

    private bool _enableGrid;
    public bool EnableGrid
    {
        get { return _enableGrid; }
        set { _enableGrid = value; Notify("EnableGrid"); }
    }

    private void LoadData(int more)
    {
        Action act = () =>
        {
            EnableGrid = false;
            if (more > 0)
            {
                foreach (var item in DataGenerator.GetData(more))
                    _data.Add(item);
            }
            else
            {
                int itemsToRemove = -1 * more;
                for (var i = 0; i < itemsToRemove; i++)
                    _data.RemoveAt(_data.Count - 1); // always remove the current last item
            }
            EnableGrid = true;
        };
        //act.BeginInvoke(null, null);
        _current.BeginInvoke(act, DispatcherPriority.ApplicationIdle);
    }
}

As you can see, data is loaded whenever DataSize changes. Currently I use a slider to change the load size. This is all pretty easy; the fun stuff starts in the XAML.


In order to apply this "Data" to my WPF DataGrid, I assign this viewmodel instance to the DataContext of my window. See below for the code-behind of the window.

public partial class MainWindow : Window
{
    private MainWindowViewModel vm;

    public MainWindow()
    {
        InitializeComponent();
        vm = new MainWindowViewModel();
        this.Loaded += (s, e) => DataContext = vm;
    }
}

Let's start with the following XAML.


<StackPanel>
    <Slider Minimum="50" Maximum="100" Value="{Binding DataSize}" />
    <Label Grid.Row="1" Content="{Binding DataSize}" />
    <DataGrid Grid.Row="2" IsEnabled="{Binding EnableGrid}" ItemsSource="{Binding Data}">
    </DataGrid>
</StackPanel>

Now build the application and run. The result appears as shown below.


image


As you can see above, I loaded 100 items yet I do not see a scrollbar. Let's change the slider's Maximum property from 100 to 1000, rerun the application, and drag the slider to 1000 at once. Even with just 1000 items, the grid does not respond that well.


image


Let us look at the memory usage.


image


This is pretty heavy for an application with just 1000 items of data loaded. So what is using all this memory? You could hook up a memory profiler or use WinDbg to look at the memory content, but since I already know what is causing this issue, I am not going through that here.


The issue is that the DataGrid has been placed inside a StackPanel. When stacking vertically, a StackPanel gives its children all the vertical space they ask for. This makes the DataGrid create all 1000 rows (all the UI elements needed for each column of each row!) and render them. The DataGrid's UI virtualization never came into play here.


So let us make a simple change and put the DataGrid inside a Grid. The XAML is shown below.

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="30"/>
        <RowDefinition Height="30"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <Slider Value="{Binding DataSize}" Minimum="50" Maximum="1000"/>
    <Label Content="{Binding DataSize}" Grid.Row="1"/>
    <DataGrid ItemsSource="{Binding Data}" Grid.Row="2" IsEnabled="{Binding EnableGrid}">
    </DataGrid>
</Grid>

When I run the application, loading 1000 items performs a lot better than before, with no code changes except the XAML one I just talked about. Moreover, I now see nice scrollbars.


image

Let us look at the memory usage.


image


Wow! A ten-fold difference. So far this reads like a recap of my previous post on WPF virtualization; the same rules apply to the DataGrid as well. Read that post if you are interested.


So what else am I covering here?



  • If you look at the ViewModel code, you will see that I disable the grid while loading data and enable it back once I am done. I have not really tested whether this technique helps, but I did use it in HTML pages where loads of items in a listbox all had to be selected, and there it was very useful.
  • In all the screenshots I showed, the grid is sorted. So as the data changes, the grid has to keep re-sorting the data and displaying it based on the chosen sort. This, I believe, is a big overhead. Consider removing the DataGrid's sort before you change the data, if that is viable and does not impact the end user. I have not tested this, but the same should apply to groupings as well (which most of the time cannot simply be removed).
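I have not tested it, but a sketch of dropping the grid's sort before a bulk change could look like the following. Here `dg` is a hypothetical reference to the DataGrid (e.g. from x:Name="dg"), and re-applying the sort afterwards is left to the caller:

```csharp
// Untested sketch: clear the current sort before a bulk data change.
// 'dg' is assumed to be the DataGrid instance.
dg.Items.SortDescriptions.Clear();
foreach (var column in dg.Columns)
    column.SortDirection = null;   // also clears the sort arrows in the headers
// ... now mutate the bound collection ...
```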

With the simple change of hosting the DataGrid in some other panel, like a Grid, instead of a StackPanel, you see a lot of difference. The WPF DataGrid performs just fine, as long as you keep the viewable region of the grid small.


Shown below is my grid with almost 1 million data items loaded. The footprint is pretty small compared to the amount of data loaded, which means either WPF controls are memory intensive, or WPF UI virtualization is a boon.


Impact of sorting on the DataGrid



  • With no sorting applied on the DataGrid, it took almost 20 seconds to load 1 million items into my collection.
  • With sorting enabled, loading half that many items took over 2 minutes; the complete set took over 5 minutes, and I killed the application because it was a pain. This matters because the application keeps the CPU busy with all the sorting that has to keep happening as your data changes. For every item added, the sort might be triggered, since I am adding directly into an ObservableCollection.
  • Instead, consider sorting on the backend rather than on the DataGrid.
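For example, instead of leaving a live sort applied on the grid, the ViewModel could hand the grid data that is already sorted and rebind it (a sketch using the DataItem class and Data property from earlier in this post; the sort key is arbitrary):

```csharp
// Sketch: sort once on the backend, then rebind, instead of keeping
// a live sort on the DataGrid that re-fires on every insert.
Data = new ObservableCollection<DataItem>(_data.OrderBy(d => d.Age));
```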

image


Even with the grid bound to 1 million items, I can still scroll the application, because virtualization is being properly utilized.


Using BeginInit() and EndInit() on the DataGrid


I changed the ViewModel's LoadData() so that it calls BeginInit() as it starts loading the data and EndInit() when it is done loading. This has helped quite a lot: loading 1 million items (without any sort applied on the grid) took only around 8 seconds, compared to the 18 seconds it took earlier. Unfortunately I did not spend enough time with a profiler to show real numbers.


The changed code-behind for the Window is as shown.

public partial class MainWindow : Window
{
    private MainWindowViewModel vm;

    public MainWindow()
    {
        InitializeComponent();
        vm = new MainWindowViewModel();
        this.Loaded += (s, e) => DataContext = vm;
        vm.DataChangeStarted += () => dg.BeginInit();
        vm.DataChangeCompleted += () => dg.EndInit();
    }
}

I also had to add the DataChangeStarted and DataChangeCompleted actions to the ViewModel class. The changed portion of the ViewModel is shown below.

public event Action DataChangeStarted;
public event Action DataChangeCompleted;

private void LoadData(int more)
{
    Action act = () =>
    {
        // Before the data starts changing, raise the event.
        if (DataChangeStarted != null) DataChangeStarted();
        var sw = Stopwatch.StartNew();
        EnableGrid = false;
        if (more > 0)
        {
            foreach (var item in DataGenerator.GetData(more))
                _data.Add(item);
        }
        else
        {
            int itemsToRemove = -1 * more;
            for (var i = 0; i < itemsToRemove; i++)
                _data.RemoveAt(_data.Count - 1); // always remove the current last item
        }
        EnableGrid = true;
        sw.Stop();
        Debug.WriteLine(sw.ElapsedMilliseconds);
        if (DataChangeCompleted != null) DataChangeCompleted();
    };
    //act.BeginInvoke(null, null);
    _current.BeginInvoke(act, DispatcherPriority.ApplicationIdle);
}

You can try this out and notice the performance difference yourself.


If sorting is applied on the DataGrid, performance still hurts in spite of the above-mentioned trick. The overhead of sorting outweighs the gain we get from calling BeginInit and EndInit. Maybe having 1 million records is not realistic enough.

Thursday, July 15, 2010

Using LINQ Aggregate to solve the previous problem

In the previous post I talked about the problem, which I briefly restate here. The data looks like:


Name, Value
Sridhar, 1
Ashish, 2
Prasanth, 3
Ashish, 5
Sridhar, 6
Prasanth, 34
.....

I want to aggregate the values for the names. Look at the previous post for some information on other approaches to solve this simple problem.

The LINQ way to do this would be :


[Test]
public void BTest()
{
    var nvcs = tl.GroupBy(s => s.Name)
                 .Select(s => new NameValueCollection
                 {
                     {"Name", s.Key},
                     {"DrawerId", s.Aggregate(new StringBuilder(),
                         (seed, g) => seed.AppendFormat("{0};", g.DrawerId)).ToString()}
                 });
    //foreach (var nvc in nvcs)
    //    Console.WriteLine(nvc["Name"] + " : " + nvc["DrawerId"]);
    Assert.AreEqual(4, nvcs.Count());
}


Note that I am generating a list of NameValueCollection, which is not of significance here. If you compare this with the previous implementation that used dictionaries or lists, this solution appears more concise, and those who already know LINQ should find it really simple.


  • All I would like you to take away from this post is that the IEnumerable.Aggregate() method is a great method that is not often mentioned. We often accumulate some value over a collection of items, and Aggregate lets you do just that without all the extra loops and seed variables you would otherwise have to track.
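As a tiny self-contained illustration of the method (not tied to the test above):

```csharp
// Aggregate threads a seed (here a StringBuilder) through the sequence,
// replacing the usual foreach + accumulator variable.
var joined = new[] { 1, 4, 8 }
    .Aggregate(new StringBuilder(),
               (seed, v) => seed.AppendFormat("{0};", v))
    .ToString();
// joined == "1;4;8;"
```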

Algorithms, performance and getting burnt

After a long time, I am writing something on my blog. So here it is ..

This post is about me starting to solve a small but interesting problem with different approaches, and ending up breaking my head against why an algorithm with supposedly O(n) complexity was 4 times slower than an O(n^2) one.

So here's the issue. I have the following data :

Name,Value
Sridhar,1
Ashish,2
Prasanth,3
Sridhar,4
Ashish,5
Sridhar,8

and so on .. I hope you get the idea.

Now what I would like to do is to print the following output.


Sridhar : 1;4;8;....
Ashish : 2;5;.....
Prasanth : 3;......



Note that it does not matter what the values are; I am giving this data just as an example. Shown below is the setup used by my implementations (I am demoing it as a test).



private Stopwatch sw;
private List<Ud> tl;

[SetUp]
public void SetUp()
{
    GC.GetTotalMemory(true); // I dont know why i did this!
    tl = new List<Ud>(10000);
    var names = new[] { "Krishna", "Ashish", "Sridhar", "Prasanth" };
    foreach (var name in names)
        for (var i = 0; i < 2500; i++)
            tl.Add(new Ud { Name = name, DrawerId = i.ToString() });
    // OrderBy returns a new sequence, so the result has to be captured.
    tl = tl.OrderBy(s => s.DrawerId).ToList();
    sw = Stopwatch.StartNew();
}

[TearDown]
public void TearDown()
{
    sw.Stop();
    Console.WriteLine(sw.ElapsedMilliseconds);
    sw = null;
}

public class Ud
{
    public string Name { get; set; }
    public string DrawerId { get; set; }
}

The above code is self-explanatory: I basically create a lot of Ud objects matching the data I presented earlier. Shown below is the most straightforward implementation. It has two nested loops, which makes the complexity O(n^2).



[Test]
public void BasicImplementation()
{
    var nvcs = new List<NameValueCollection>();
    var list = new List<string>();
    foreach (var item in tl)
    {
        if (list.Contains(item.Name)) continue;

        string val = string.Empty;

        foreach (var item2 in tl)
        {
            if (item2.Name == item.Name)
                val += item2.DrawerId + ";";
        }

        nvcs.Add(new NameValueCollection { { "Name", item.Name }, { "DrawerId", val } });
        list.Add(item.Name);
    }
    //foreach (var nvc in nvcs)
    //    Console.WriteLine(nvc["Name"] + " : " + nvc["DrawerId"]);
    Assert.AreEqual(4, nvcs.Count);
}

Then I wrote another implementation which gives the same result, but uses a dictionary to track the string built for each name in the list of objects. Instinctively, it seems the dictionary method should be way faster than the one above. Let's look at that code.



[Test]
public void ADictionary()
{
    var vals = new Dictionary<string, string>();
    foreach (var item in tl)
    {
        if (!vals.ContainsKey(item.Name))
            vals[item.Name] = item.DrawerId + ";";
        else
            vals[item.Name] = vals[item.Name] + item.DrawerId + ";";
    }
    Assert.AreEqual(4, vals.Values.Count);
}

When I ran the two tests, I did not see any performance gain from the O(n) implementation; in fact it was three times slower. So why was it slower? Look at the setup: it calls GC.GetTotalMemory(true), which forces a full garbage collection, and that time was charged against the dictionary test as well, since by the second run there were a lot of strings to clean up. So why did I put it there in the first place? The answer is "I was not thinking straight". Never force garbage collections in your code. It is a bad, bad, bad practice.


So I removed the GC call and reran the tests. Still no performance gain. WHY?? I spent a lot of time trying to diagnose this and eventually gave up on manual inspection. I downloaded the trial version of dotTrace Performance 4.0 (which is a freaking awesome tool) and profiled both tests. The culprit was the strings. If you look at the code, we generate a lot of strings, and their "Concat" operation was so time-consuming that it dominated the gain from the O(n) algorithm.


So the lesson here is: be watchful of the strings generated as your code executes, or you will get burned. It does not matter how small a string concatenation may seem; in cases like the above it piles up and screws up your clever algorithm. All I did was change the tests to use StringBuilder instead of strings.
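A sketch of that change for the dictionary test (not my exact code; the shape of the test is the same, only the accumulation differs):

```csharp
[Test]
public void ADictionaryWithStringBuilder()
{
    // Accumulate into StringBuilders: each Append is cheap, whereas
    // string concatenation copies the whole accumulated string every time.
    var vals = new Dictionary<string, StringBuilder>();
    foreach (var item in tl)
    {
        StringBuilder sb;
        if (!vals.TryGetValue(item.Name, out sb))
            vals[item.Name] = sb = new StringBuilder();
        sb.Append(item.DrawerId).Append(';');
    }
    Assert.AreEqual(4, vals.Count);
}
```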



  • Do not use GC calls in your code, especially those which force GC.
  • Use a profiler to accurately capture performance information for specific methods or for your program. Stopwatch, timers, etc. are not good enough and are a waste of time.
  • Be aware of the impact of string operations. Use StringBuilder wherever possible. Use String.Format() in other simpler cases.

I will continue in the next post with some code that shows you how to approach the problem I initially started with using LINQ and how simple things would appear.

Sunday, July 11, 2010

Issues with SyntaxHighlighter on my blog

I just messed up my blog template and could not get the syntaxhighlighter plugin to work properly. I will be fixing this shortly but in the meantime if the code seems really ugly to you all, I apologize for the inconvenience.

Wednesday, May 05, 2010

WCF Security – the way I wanted to learn

For intranet applications where users can be authenticated against Active Directory using Windows credentials, setting up security for a WCF service might not be all that difficult. It is not difficult even to set up a WCF service hosted in IIS that uses the ASP.NET roles/providers. What I wanted was a series of steps that lets me secure a WCF service for internet-like applications: the client application presents a username/password login control, and from then on authenticated users can work with the service. There must be thousands of implementations of this already, but I wanted to walk through it myself.

To go back a little, this is what I want

- I have a client application with a login control where the user enters a username and password. Without a proper username/password combination, no further service communication should be allowed. Remember Forms Authentication in ASP.NET? Something similar to that.

- I do not want to tie my service strongly to the host, meaning I do not want to use the ASP.NET membership provider model, though it is relatively easy to do so. So I have a console program that hosts the WCF service.

- For every call to an OperationContract, I do not want to read the message headers or add extra parameters to check the username and password. I don't want specific logic within each operation that handles this check.

- I want operations to be limited to users with certain "roles". Basically, I have a set of operations that only users in role "X" should be able to perform, whereas other operations are for users with other roles.

- I don't want my communication channel to be open, and I want to prevent users from sniffing the traffic to see what is going on.

To summarize these requirements,

- I want a secure communication between client and server.
- I want to restrict access to the service unless the client sends in valid username/password.
- I want to restrict access to operations based on the roles of the calling user.
- I don’t want to deal with Windows Authentication at this moment, since I have plans to host my service on the internet in which case WindowsIdentity is not really preferred.

In this post, I would like to show the way I achieved these goals. Note that I am not qualified enough to make strong statements or give a deep explanation of how the security works. The intention of this post is to help developers like me who have little knowledge of WCF security but do understand how security works in general. I recommend you read the MSDN documentation for the classes and terms I throw around.

While the source code is available for download here: http://drop.io/yskic3h, in this post I simply mention the steps I used to achieve each of the goals mentioned above.

1. Secure Communication Channel

I used wsHttpBinding as the binding of my choice. wsHttpBinding employs Windows security by default. We have to change that to use Message security with "UserName" as the clientCredentialType. All of this is configured in a binding configuration.


<wsHttpBinding>
    <binding name="secureBinding">
        <!-- the security would be applied at Message level -->
        <security mode="Message">
            <message clientCredentialType="UserName"/>
        </security>
    </binding>
</wsHttpBinding>

Now this bindingConfiguration has to be set on the endpoint as shown



<services>
    <service name="WcfService.SecureService" behaviorConfiguration="secureBehavior">
        <!--
        notice the bindingConfiguration; we are applying the secureBinding that
        was defined in the bindings section.
        -->
        <endpoint address="secureService"
                  binding="wsHttpBinding"
                  bindingConfiguration="secureBinding"
                  contract="WcfService.ISecureService"/>
        <host>
            <baseAddresses>
                <add baseAddress="http://truebuddi:8080/" />
            </baseAddresses>
        </host>
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
    </service>
</services>


Now that the server expects a username and password, we want a custom validator that checks this combination against our own repository of users. To do that, we configure the service behavior. So the binding ensures credentials are passed, and the service behavior validates them! ;)




<serviceBehaviors>
    <behavior name="secureBehavior">
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="true" />

        <serviceCredentials>
            <serviceCertificate
                findValue="wcfSecureService"
                storeLocation="LocalMachine"
                storeName="My"
                x509FindType="FindBySubjectName" />

            <!--
            In secureBinding (see the bindings section),
            we set the Message security to use "UserName"
            as the clientCredentialType. So we would like to
            use a custom username/password validator.
            Here we specify that our custom validator should be used.
            -->
            <userNameAuthentication
                userNamePasswordValidationMode="Custom"
                customUserNamePasswordValidatorType="WcfService.CustomUserNamePasswordValidator, WcfService" />
        </serviceCredentials>

        <!--
        The custom authorization policy is what is used to verify the roles.
        For a role specified in the PrincipalPermission attribute,
        the IsInRole() method of the principal that was set from
        CustomAuthorizationPolicy.Evaluate would be invoked.
        -->
        <serviceAuthorization principalPermissionMode="Custom">
            <authorizationPolicies>
                <add policyType="WcfService.CustomAuthorizationPolicy, WcfService" />
            </authorizationPolicies>
        </serviceAuthorization>
    </behavior>
</serviceBehaviors>



In the above configuration, the serviceCredentials\userNameAuthentication element specifies that the username/password are to be validated by a custom validator type. While the username and password authenticate the client to the service, I think this alone does not really do anything to secure the communication channel. To make the channel secure, we use certificates. The following steps are required on the development machine to get the sample working:



  1. Using the makecert tool (run it from a Visual Studio command prompt), create and register a certificate suitable for key exchange. Note that if you follow the MSDN article on creating certificates with makecert, it does not tell you about enabling the certificate for key exchange. The command that worked for me is

    makecert.exe -sr LocalMachine -ss MY -a sha1 -n CN="wcfSecureService" -sky exchange -pe -r wcfSecureService.cer




  2. We specify the same certificate to be used in the service configuration file using the serviceCredentials\serviceCertificate element. See the configuration snippet shown previously. It basically says "find the certificate by subject name where subject name is 'wcfSecureService' in the certificate store on the local machine and the store would be Personal". For all this to work, note that HTTPS base address should be used.


  3. While the first two steps take care of the certificate on the server, the client needs some knowledge of the certificate (basically, the client should know the public key with which messages will be encrypted). We specify that in the endpoint\identity section of the client configuration [see below]. The encodedValue can be obtained by adding a service reference from Visual Studio, which generates a shipload of configuration on the client; just save the encodedValue and revamp your configuration file.





<client>
    <endpoint address="http://truebuddi:8080/secureService"
              binding="wsHttpBinding"
              bindingConfiguration="secureWsHttpBinding"
              behaviorConfiguration="ignoreCert"
              contract="SecurityDemo.ISecureService">
        <identity>
            <!-- Don't panic, this key is wrong ;) for the sake of this post -->
            <certificate encodedValue="AwAAAAEAAAAUAAAAee8O3PpkfSCfjaa3mDmkK+HLb4QgAAAAAQAAAAcCAAAwggIDMIIBcKADAgECAhDgA4A6S0Z/j0d3IFg04e9gMAkGBSsOAwIdBQAwGzEZMBcGA1UEAxMQd2NmU2VjdXJlU2VydmljZTAeFw0xMDA1MDUyMzQ4NTZaFw0zOTEyMzEyMzU5NTlaMBsxGTAXBgNVBAMTEHdjZlNlY3VyZVNlcnZpY2UwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBALF7OJsZ6AV5yqSSQyne9j+xwdRLDRoVMleYg0vGvB7W7Bk5zBNbSDCbb+spJR3ykayDoZYpykyY8Q7qzvPuUPdHu7SkMVZ9Ng8B8yAq0zrD8sJwnaqTEY4a8mj8Dt86Yr0wK31aF4VSDRZaK+XDyFd5hWU8Eya+bohhixndMYwNAgMBAAGjUDBOMEwGA1UdAQRFMEOAEJRtYMFDVIgPHFrIf0LU5e+hHTAbMRkwFwYDVQQDExB3Y2ZTZWN1cmVTZXJ2aWNlghDgA4A6S0Z/j0d3IFg04e9gMAkGBSsOAwIdBQADgYEApQ+Hy6e4hV5rKRn93IMcEL3tW2tUYcj/oifGbEPRX329s3cc8QH6jYaNN8cgS5RN+6QffrkvupMSUauGsWia20WHTRI8lyb+1gvvX4NpTxZE6+sZkvIu6R/qIsC6V9pbRCHm3HRFnAoMNZmPTr5mJvzwAQZzOdXMFq0OwakJKEw=" />
        </identity>
    </endpoint>
</client>



You can also look at this link to get the public key in any case.



For testing purposes, you should also add a behaviorConfiguration on the client's endpoint so that certificates are not validated; once you deploy, you can remove this behavior.




<behaviors>
    <endpointBehaviors>
        <!-- ignore certificate validation for testing purposes. -->
        <behavior name="ignoreCert">
            <clientCredentials>
                <serviceCertificate>
                    <authentication certificateValidationMode="None" />
                </serviceCertificate>
            </clientCredentials>
        </behavior>
    </endpointBehaviors>
</behaviors>


In the client config, you should also define the same wsHttpBinding with similar settings but a few extra options. See the snippet below and compare it with the binding snippet shown earlier for the server.




<bindings>
    <wsHttpBinding>
        <binding name="secureWsHttpBinding">
            <security mode="Message">
                <message clientCredentialType="UserName"
                         negotiateServiceCredential="true"
                         establishSecurityContext="true"/>
            </security>
        </binding>
    </wsHttpBinding>
</bindings>


With this, the communication channel is secure. You might have some issues with certificates, but you should be able to use the exception messages to bing for answers in the online forums. The only part left on the client end is to make sure the client proxy is set up with a username and password. The full client code is shown below.




SecureServiceClient client = new SecureServiceClient();
client.ClientCredentials.UserName.UserName = "Krishna";
client.ClientCredentials.UserName.Password = "test";
User test = client.Login();
client.SafeOperationByAdmin();


2. UserName and Password Custom Validation


Implement a type that derives from the UserNamePasswordValidator class. You will have to reference System.IdentityModel.dll, and if you remember, we registered the custom validator in the service behavior in the service configuration file. While the code shown below does not talk to a DB, it should still serve as a good example of custom username and password validation. Note that this Validate() method gets called for every call to the service.



public class CustomUserNamePasswordValidator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        Console.WriteLine("Username validation started");
        if (userName == "Krishna" && password == "test")
            return;
        throw new InvalidCredentialException("Invalid credentials passed to the service");
    }
}


3. Restriction of Operations using Roles


Operations can be restricted to users of certain roles by applying a PrincipalPermission attribute on the operation [see below]. The current principal is checked to see whether it is in the specified role; otherwise the operation is not allowed to execute. Now how do we set this principal to something? For that we need a custom principal, which derives from IPrincipal. This principal implementation holds an IIdentity, which can be a WindowsIdentity for Windows authentication or a GenericIdentity for other scenarios. Now this custom principal has to be created and applied somewhere, right? This is where IAuthorizationPolicy comes into play: we provide a custom authorization policy whose Evaluate method fetches the identity, passes it to a newly created custom principal, and sets that principal as the current principal. All three code snippets (the PrincipalPermission attribute on operations, CustomPrincipal, and CustomAuthorizationPolicy) are shown below.




/// <summary>
/// This authorization policy is set on the service behavior using the serviceAuthorization element.
/// </summary>
public class CustomAuthorizationPolicy : IAuthorizationPolicy
{
    public bool Evaluate(EvaluationContext evaluationContext, ref object state)
    {
        IIdentity client = ((IList<IIdentity>)evaluationContext.Properties["Identities"])[0];
        // set the custom principal
        evaluationContext.Properties["Principal"] = new CustomPrincipal(client);
        return true;
    }

    public System.IdentityModel.Claims.ClaimSet Issuer
    {
        get { throw new NotImplementedException(); }
    }

    public string Id
    {
        get { throw new NotImplementedException(); }
    }
}

public class CustomPrincipal : IPrincipal
{
    private readonly IIdentity identity;

    public CustomPrincipal(IIdentity identity)
    {
        this.identity = identity;
    }

    public IIdentity Identity
    {
        get { return identity; }
    }

    public bool IsInRole(string role)
    {
        // a real implementation would look the user's roles up in a store
        return true;
    }
}

// in the WCF service implementation
[PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
public void SafeOperationByAdmin()
{
    // more code
}


The newly created authorization policy should be configured in the service configuration file under serviceBehavior\serviceAuthorization, as shown below.





<!--
The custom authorization policy is what is used to verify the roles.
For a role specified in the PrincipalPermission attribute,
the IsInRole() method of the principal that was set from
CustomAuthorizationPolicy.Evaluate would be invoked.
-->
<serviceAuthorization principalPermissionMode="Custom">
    <authorizationPolicies>
        <add policyType="WcfService.CustomAuthorizationPolicy, WcfService" />
    </authorizationPolicies>
</serviceAuthorization>




To summarize, the following steps should be performed to get message-level security working in WCF.


  1. Create and register a certificate. Configure the service configuration specifying the certificate to use. This is done in the bindingConfiguration and the binding configuration is then applied on the endpoint.
  2. Configure the service to make use of Message level security. Again done on the server configuration.
  3. Configure the client with the encodedValue (the public key) for its communication. This is done in the identity element on the client's endpoint. For testing purposes you can make the client skip certificate validation; this is done in the endpoint behaviors.
  4. Configure the client's binding to make use of Message level security with UserName.
  5. The client's code should specify the username and password. To validate this information, register a custom UserNamePasswordValidator in the serviceBehavior on the server configuration.
  6. For roles, create a custom principal and set it using a custom authorization policy. This authorization policy should be registered in the serviceAuthorization of the serviceBehavior in the server configuration file.



Again, the code is available for download at http://drop.io/yskic3h. Sometime in the future, I will try to upload the code to CodePlex or put it on Windows Live SkyDrive.

Thursday, April 15, 2010

Running Moles using NUnit Console from Visual Studio

Create an external tool in Visual Studio as shown below

image

Command : C:\Program Files (x86)\Microsoft Moles\bin\moles.runner.exe

Arguments: $(BinDir)/$(TargetName).dll /runner:"c:\Development Tools\NUnit 2.5.2\bin\net-2.0\nunit-console.exe" /x86

Initial Directory : $(ProjectDir)

Now you can use Moles from within Visual Studio. :)

Thursday, March 18, 2010

Introducing Comfy.Couch, a CouchDB API for .NET

WARNING: This work is still in progress.

For the past few days, I have been working on a nice little .NET library that one could use with CouchDB. My goal was to stick to the CouchDB API documentation as much as I can, so that it would be easier to work with the driver. The library has dependencies on the Log4Net and Json.NET libraries.

I tried to be over-smart when picking the name Comfy.Couch (I meant a comfortable couch to use :)). At the moment, the database API is functionally complete, but you cannot really use it until the Document API arrives, which is coming soon. The reference is at http://wiki.apache.org/couchdb/HTTP_database_API

I made sure I wrote unit tests covering most of my code, but I did miss some pieces here and there.

image

I ended up with 94% code coverage so far, and I will be adding more tests soon.

My idea is to first get the complete API in place and then worry about where it needs to be tuned. So far I have not focused on any kind of tuning, though I think some could be done. I have thrown in a few asynchronous requests here and there, but they are yet to be tested.

1. Creating a database.


CouchDatabaseOperations.Create("databasename"); //validation on database names is pending.


2. Getting database information from the server



ICouchDatabase db = CouchDatabaseOperations.Get("databasename");
CouchDatabaseInfo info = db.Metadata;


3. Deleting the database



//Delete uses MaxRetries of 2, since most of the time the first delete request fails on Windows.
CouchDatabaseOperations.Delete("databasename");


While I have completed the rest of the API described in the Database API reference on the CouchDB website, I have only shown a few examples of how the library can be used here. The unit tests might be more helpful for getting started, if you are interested.
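For orientation, the three operations above map onto the CouchDB HTTP database API (from the reference linked earlier) roughly as follows:

```
PUT    /databasename    create the database       (CouchDatabaseOperations.Create)
GET    /databasename    fetch database metadata   (db.Metadata)
DELETE /databasename    delete the database       (CouchDatabaseOperations.Delete)
```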



Next steps:



  • Upload the sourcecode to comfy.codeplex.com
  • Improve the logging information
  • Work on the Document API
  • Work on the View API
  • Work on the Bulk Document API
  • Run some performance tests and identify the bottlenecks in the driver.
  • Sample Scrum Management tool in Silverlight 4 that uses CouchDB for data backend.

Wednesday, February 10, 2010

Using .NET 3.5 (CLR 2.0) DLL inside Visual Studio 2010 for a .NET 4.0 Project/Application

When you first create a .NET 4.0 project inside VS 2010 and add a reference to a .NET 3.5 DLL (say log4net or Rhino.Mocks), the project may fail to build, with errors as shown below.

image

It might appear to be a CLR version issue: being unable to run a 2.0 DLL inside a 4.0 AppDomain. And I thought that is what it was, until now.

You can get over this :)

By default, VS 2010 creates projects with the Target Framework property set to ".NET Framework 4 Client Profile". You have to change that to ".NET Framework 4" by going to Project Properties –> Application –> Target Framework. And then everything begins to compile.
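Under the hood, that dialog just edits a property in the .csproj file. A sketch of the relevant fragment (the real PropertyGroup will contain other properties as well):

```xml
<!-- The Client Profile is selected by the TargetFrameworkProfile element;
     removing it (or leaving it empty) targets the full .NET Framework 4. -->
<PropertyGroup>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <TargetFrameworkProfile>Client</TargetFrameworkProfile>
</PropertyGroup>
```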

So I guess one has to be aware of this when migrating old solutions from Visual Studio 2008 to Visual Studio 2010.

Proof that it works :) Notice the .NET Framework 4.0 features, as well as Log4Net and Rhino.Mocks, all used in the same example. (It is a contrived example, but the intention is to show that it works.)

image

If for some reason, it does not work for you, try to add

<?xml version="1.0"?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v2.0.50727" />
    <supportedRuntime version="v4.0.21006" />
  </startup>
</configuration>

to your csc.exe.config/msbuild.exe.config/VCSExpress.exe.config/devenv.exe.config …

I initially thought it had something to do with not enabling side-by-side execution of the compiler, but it turns out that is not the case. For your information, I added .NET 2.0 as a supported runtime and then commented it out, to confirm that it is ONLY the Target Framework that has to be changed.

Thursday, February 04, 2010

Breaking my head with message passing and Scala Actors

I recently started working on a personal project on which a friend of mine is helping. After some discussion, we thought Scala might be a good bet as the development platform of our choice (and we will integrate Spring into Scala). Anyway, for that I got into Scala actors. Actors appear easy to code: all you have to do is create an actor (if you use the Actor.actor construct, it starts automatically; otherwise you have to invoke start) and, from somewhere, keep sending messages to it.

So let us first define the message I want to send, using case classes. Refer to the documentation on case classes.


case class Message(someData : String)


Now let us create our component, an actor that, for each string passed in the constructor, prints that string with the message appended (somewhat silly behavior, but it serves the example here).




class MyActor(toInform: Array[String]) extends scala.actors.Actor {
  private val noticeTo = toInform
  def act() {
    loop {
      react {
        case Message(r) => {
          for (item <- noticeTo)
            println(item + "_" + r)
        }
      }
    }
  }
}


Now test the above actor with the following code.




val actor = new MyActor(Array("Krishna"))
actor ! "Welcome"


With all the excitement in the world, you run the test and it just hangs in there; nothing happens :). Then you suddenly realize: "YOU FORGOT TO START THE ACTOR". Damn! It was the missing actor.start that was causing the issue. So you cleverly add the actor.start statement. ("you" meaning "me")




val actor = new MyActor(Array("Krishna"))
actor.start //do not forget this :)
actor ! "Welcome"


You run the test again, with no success. Then you realize that your actor expects Message(r) whereas you are sending a String. So you change that.




val actor = new MyActor(Array("Krishna"))
actor.start //do not forget this :)
actor ! Message("Welcome")
//actor.exit


Also notice the commented-out explicit exit call outside the actor. Doing that is not advised. Remember that "!" is a send-and-continue (asynchronous) call, so the actor might be killed before it is even scheduled to work on the message. Instead, it is advised that you handle an "Exit"-like message within a case inside the actor. Now you run the test and it works :) Hurray!
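To make the asynchrony concrete, here is a small self-contained sketch of my own (not part of the original sample): an actor that handles its own "Exit"-like message, shown next to "!?", the blocking send, for contrast with the asynchronous "!".

```scala
import scala.actors.Actor
import scala.actors.Actor._

// An actor that handles its own exit message ("E") instead of being
// killed from outside. The "E" case must come before the generic
// String case, or it would never match.
class Echo extends Actor {
  def act() {
    loop {
      react {
        case "E"       => exit()               // self-handled exit message
        case s: String => reply(s.toUpperCase) // answer the sender
      }
    }
  }
}

object EchoDemo {
  def main(args: Array[String]) {
    val e = new Echo
    e.start()
    val answer = e !? "hello" // "!?" blocks until the actor replies
    println(answer)           // HELLO
    e ! "E"                   // "!" is asynchronous: returns immediately
  }
}
```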



Later, a smart-ass like you (this time, it really is you :)) decides to write some very bad code (OK, "you" is not really you; for now, let's assume the actor code I wrote is simply perfect) as shown below.




object Launcher extends Application {
  val actor = new MyActor(null)
  actor.start() //do not forget this
  actor ! Message("Welcome")
  actor ! "E" //this is the exit message.
}


You repeat the test :) and this time it does nothing; it appears to be blocked :). Again... damn... so how would we know what the issue is? You spend your day off trying to figure out what is wrong with this simple code. After spending 10 hours trying all the magic tricks (apparently, I know too many magic tricks that never work, hence the time), you realize the actor must be dead (the Netbeans threads view shows which threads are running, and you never see any ForkJoin threads). So the actor died! What can kill an actor? A call to exit(), or an exception! There it is. So let us change the actor code so that it catches the exception.




class MyActor(toInform: Array[String]) extends scala.actors.Actor {
  private val noticeTo = toInform
  def act() {
    loop {
      react {
        case Message(r) => {
          try {
            for (item <- noticeTo)
              println(item + "_" + r)
          } catch {
            case e => e.printStackTrace
          }
        }
        case "E" => exit
      }
    }
  }
}


Run the test again and notice the stack trace!




java.lang.NullPointerException
at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:68)
at scala.collection.IndexedSeqLike$class.foreach(IndexedSeqLike.scala:86)
at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:20)
at tryscala.MyActor$$anonfun$act$1$$anonfun$apply$1.apply(Launcher.scala:13)
at tryscala.MyActor$$anonfun$act$1$$anonfun$apply$1.apply(Launcher.scala:10)
at scala.actors.Reaction$$anonfun$$init$$1.apply(Reaction.scala:33)
at scala.actors.Reaction$$anonfun$$init$$1.apply(Reaction.scala:29)
at scala.actors.ReactorTask.run(ReactorTask.scala:33)
at scala.actors.scheduler.ForkJoinScheduler$$anon$1.compute(ForkJoinScheduler.scala:111)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:147)
at scala.concurrent.forkjoin.ForkJoinTask.quietlyExec(ForkJoinTask.java:422)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.mainLoop(ForkJoinWorkerThread.java:340)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:325)
BUILD SUCCESSFUL (total time: 7 seconds)


So the fix is :) to pass in at least an empty array instead of sending null (use Array.empty).
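Putting it all together, here is a corrected, self-contained version of the sample (the Message and MyActor definitions are repeated from above, with the null replaced by Array.empty):

```scala
import scala.actors.Actor
import scala.actors.Actor._

case class Message(someData: String)

// Same actor as above, with the exception guard and the "E" exit case.
class MyActor(toInform: Array[String]) extends Actor {
  def act() {
    loop {
      react {
        case Message(r) =>
          try {
            for (item <- toInform) println(item + "_" + r)
          } catch {
            case e => e.printStackTrace()
          }
        case "E" => exit()
      }
    }
  }
}

object FixedLauncher {
  def main(args: Array[String]) {
    // Array.empty instead of null: the foreach iterates zero items
    // instead of throwing a NullPointerException.
    val actor = new MyActor(Array.empty[String])
    actor.start()
    actor ! Message("Welcome")
    actor ! "E" // the actor exits itself on this message
  }
}
```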



Clearly, this is not the exact sample I was running; that one was a bit more complicated, and being totally new to Scala, I had a hard time figuring out that an exception can kill Scala actors! I know I am dumb, but with this post I want to save you from considering yourself dumb after trying very hard for a few hours.

Tuesday, January 26, 2010

Dependency Injection in Scala and some other stuff

My new-found love is the Java Spring framework. I initially looked at the Spring Framework for .NET and thought it was really good and well designed. But then, for a pet project of mine, my friend promised to help given the project would be on the JVM. So we decided to consider Scala as the language of choice on the JVM. Now, I know how to do DI using the Spring container in Java, so I was wondering if I could do the same in Scala. It turns out it is not so difficult after all (so far).

For my Scala development environment, I played with Eclipse, IntelliJ Community Edition and then Netbeans. So far, Netbeans has the best IDE support for Scala. There are some issues with it, but it is not all too bad. One initial issue was that even though the Spring JAR files were in my classpath, the Netbeans editor complained that it could not locate the Spring JARs. The error message was something like "the value springframework is not a member of the package org". It turns out that if you clean and build the project, it builds successfully, but the editor keeps complaining. The Netbeans wiki talked about a "Reset Scala Parser" item on the context menu, but I am so dumb that I still could not find it. So if there is no reset trigger, how about closing and restarting the IDE? And it works! For now, since I am using nightly builds (follow the wiki link here http://wiki.netbeans.org/Scala68v1), I do not mind these minor issues. But let me tell you, I spent a whole evening trying to figure out which IDE has the best support for the latest Scala, and the winner is Netbeans, without any question.

At the time I created my sample project, I made sure to check the setting in the Netbeans project-creation wizard that copies all libraries into a common folder, which makes things easier when working as a team. Then I added the following files from the Spring distribution.

image

Apart from these, I added commons-logging jar too.

The following code shows how you can use Spring within Scala. Let's say I have an interface Sample (interfaces map almost directly to traits in Scala), and a concrete implementation SampleImpl. Follow the code carefully, and with some fundamental understanding of the Spring framework, you should be all set.


/** Main.scala **/
package tryscala

import org.springframework.context.support.FileSystemXmlApplicationContext

object Main {

  /**
   * @param args the command line arguments
   */
  def main(args: Array[String]): Unit = {
    val fs = new FileSystemXmlApplicationContext("Configs\\Spring.xml")
    val sample = fs.getBean(classOf[Sample]).asInstanceOf[Sample]
    println(sample.sayHello)
  }
}

/** Sample.scala **/
package tryscala

trait Sample {
  def sayHello(): String
}

/** SampleImpl.scala **/
package tryscala

class SampleImpl(var firstName: String) extends Sample {
  def sayHello() = "Welcome to Scala : " + firstName
}





The spring configuration file (Configs\\Spring.xml) is shown below.




<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
  <bean id="sample" class="tryscala.SampleImpl">
    <constructor-arg name="firstName" value="Krishna Vangapandu"/>
  </bean>
</beans>


I hope to share more about what we are doing and how Scala affects us as we move on.

Thursday, January 14, 2010

Rhino Mocks : How to mock read-only properties.

As part of my never-ending quest to do something big, I started working on an application for which I am learning to use Rhino Mocks, and I thought it would make a small and easy example for understanding mocks. I have not particularly followed a TDD approach so far, but anyway, here is my attempt to show how you would use Rhino Mocks to stub/mock (I am yet to understand when to use which; I mean, I know the difference, I just have not developed enough maturity in that aspect).

Let us say I have some interface which is as simple as shown below.



public interface IColumn {
    string Name { get; }
    Type DataType { get; }
}


Now, when I tried to mock/stub this interface, I had problems with Rhino Mocks throwing exceptions saying the property should have a getter/setter while I was trying to configure the mock so that whenever the Name property is accessed it returns "Krishna Bhargava", and for DataType it returns the System.String type. I tried different mechanisms, like Expect.On(col).Call(it.Name).Return("Krishna Bhargava"), and tried to define property behavior (PropertyBehavior()). Finally, after struggling for an hour and looking at various examples online from good people like you and me, I was able to come up with code that generates a stub for the column defined by this interface. The snippet below shows how you can use Rhino Mocks (which I think is the most convenient framework; I tried Moq, which somehow does not click with me, and NUnit.Mocks, which needs too much groundwork) to generate stubs for read-only properties! Unfortunately, at the moment I cannot comment much on the Rhino Mocks classes; I hope the code snippet is easy to understand.




private IColumn MockColumn(string name, string type)
{
    IColumn col = MockRepository.GenerateStub<IColumn>();
    col.Stub(it => it.DataType).Return(Type.GetType(type));
    col.Stub(it => it.Name).Return(name);
    return col;
}


You can later use this method as shown below.




[Test]
public void ColumnStubNameCanBeSet()
{
    IColumn col = MockColumn("name", "System.String");
    Assert.AreEqual("name", col.Name);
    Assert.AreEqual(typeof(string), col.DataType);
}

/* A little more real-world usage of a column stub is shown below. */
[Test]
public void ValuesCanBeSetOnARecord()
{
    IRecord record = new Record(); //NOT DESCRIBED in this blog, but this is my personal class....
    record.SetValue(MockColumn("name", "System.String"), "Krishna Vangapandu");
    record.SetValue(MockColumn("age", "System.Int32"), 25);
    Assert.AreEqual("Krishna Vangapandu", record.Value("name"));
    Assert.AreEqual(25, record.Value("age"));
}



And by the way, Castle Windsor has the best and easiest configuration schema of all the IoC containers I have played with. I personally like the Unity container, but its configuration is a mess and over-engineered. I will try to post an implementation of the Observer pattern where observers and observables are linked with pure configuration: the best use I have made of an IoC container so far!