Seriously?? If these things happen, Bing! would never be the search engine that I wish it would become.
Monday, July 19, 2010
Friday, July 16, 2010
WPF Datagrid – Load and Performance
This post is not about performance numbers for the WPF DataGrid, but simply about what you should be aware of in order to make it perform well. I was not motivated enough to use a profiler to show realistic numbers, so instead I used the Stopwatch class wherever applicable. This post does not go into techniques for handling large amounts of data, such as paging, but focuses on how to make the DataGrid work with large data.
Here is the C# class that generates the data I want to load the Datagrid with.
public class DataItem
{
public long Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public long Age { get; set; }
public string City { get; set; }
public string Designation { get; set; }
public string Department { get; set; }
}
public static class DataGenerator
{
private static int _next = 1;
public static IEnumerable<DataItem> GetData(int count)
{
for (var i = 0; i < count; i++)
{
string nextRandomString = NextRandomString(30);
yield return new DataItem
{
Age = rand.Next(100),
City = nextRandomString,
Department = nextRandomString,
Designation = nextRandomString,
FirstName = nextRandomString,
LastName = nextRandomString,
Id = _next++
};
}
}
private static readonly Random rand = new Random();
private static string NextRandomString(int size)
{
var bytes = new byte[size];
rand.NextBytes(bytes);
return Encoding.UTF8.GetString(bytes);
}
}
My ViewModel has been defined as shown below.
public class MainWindowViewModel : INotifyPropertyChanged
{
private void Notify(string propName)
{
if (PropertyChanged != null)
PropertyChanged(this, new PropertyChangedEventArgs(propName));
}
public event PropertyChangedEventHandler PropertyChanged;
private Dispatcher _current;
public MainWindowViewModel()
{
_current = Dispatcher.CurrentDispatcher;
DataSize = 50;
EnableGrid = true;
_data = new ObservableCollection<DataItem>();
}
private int _dataSize;
public int DataSize
{
get { return _dataSize; }
set
{
LoadData(value - _dataSize);
_dataSize = value;
Notify("DataSize");
}
}
private ObservableCollection<DataItem> _data;
public ObservableCollection<DataItem> Data
{
get { return _data; }
set
{
_data = value;
Notify("Data");
}
}
private bool _enableGrid;
public bool EnableGrid
{
get { return _enableGrid; }
set { _enableGrid = value; Notify("EnableGrid"); }
}
private void LoadData(int more)
{
Action act = () =>
{
EnableGrid = false;
if (more > 0)
{
foreach (var item in DataGenerator.GetData(more))
_data.Add(item);
}
else
{
int itemsToRemove = -1 * more;
for (var i = 0; i < itemsToRemove; i++)
_data.RemoveAt(_data.Count - 1); // remove the current last item; Count shrinks each iteration
}
EnableGrid = true;
};
//act.BeginInvoke(null, null);
_current.BeginInvoke(act, DispatcherPriority.ApplicationIdle);
}
}
As you can see, the data is loaded whenever the DataSize changes. Currently I use a slider to change the load size. This is all pretty easy, and the fun stuff starts in the XAML.
In order to apply this "Data" to my WPF DataGrid, I assign this viewmodel instance to the DataContext of my window. See below for the code-behind that I have for my window.
public partial class MainWindow : Window
{
private MainWindowViewModel vm;
public MainWindow()
{
InitializeComponent();
vm = new MainWindowViewModel();
this.Loaded += (s, e) => DataContext = vm;
}
}
Let's start with the following XAML.
<StackPanel>
<Slider Maximum="100" Minimum="50" Value="{Binding DataSize}" />
<Label Grid.Row="1" Content="{Binding DataSize}" />
<DataGrid Grid.Row="2" IsEnabled="{Binding EnableGrid}" ItemsSource="{Binding Data}">
</DataGrid>
</StackPanel>
Now build the application and run. The results appear as shown below.
As you can see above, I loaded 100 items yet I do not see the scrollbar. Let's change the slider's Maximum property from 100 to 1000, rerun the application, and drag the slider to 1000 at once. Even with 1000 items, the grid does not respond that well.
Let us look at the memory usage.
This is pretty heavy for an application with just 1000 items of data loaded. So what is using all this memory? You can hook up a Memory Profiler or use Windbg to look at the memory content but since I already know what is causing this issue, I am not going through that.
The issue is that the DataGrid has been placed inside a StackPanel. When vertically stacked, the StackPanel basically gives its children all the space they ask for. This makes the DataGrid create all 1000 rows (all the UI elements needed for each column of each row !!) and render them. The virtualization of the DataGrid does not come into play here.
So let us make a simple change and put the DataGrid inside a Grid. The XAML for it is shown below.
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="30"/>
<RowDefinition Height="30"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<Slider Value="{Binding DataSize}" Minimum="50" Maximum="1000"/>
<Label Content="{Binding DataSize}" Grid.Row="1"/>
<DataGrid ItemsSource="{Binding Data}" Grid.Row="2" IsEnabled="{Binding EnableGrid}">
</DataGrid>
</Grid>
When you run the application and load 1000 items, you will notice that the performance of the same application (no code changes, except the XAML one I just talked about) is a lot better than it was. Moreover, I see nice scrollbars.
Let us look at the memory usage.
Wow! A 10-fold difference. Up to this point, this appears to be a rehash of my previous post on WPF Virtualization. The same rules apply to the DataGrid as well. Read that post if you are interested.
So what else am I talking about here?
- If you notice the ViewModel code, you will see that I disable the grid as I load data and enable it back once I am done. I have not really tested whether this technique helps here, but I did use it in HTML pages where loads of items in a listbox all had to be selected, and there it was very useful.
- In all the screenshots I showed, the grid is sorted. So as the data changes, the grid has to keep sorting the data and displaying it based on what you chose to sort by. This, I believe, is a big overhead. Consider removing the sort on the DataGrid before you change the data, if that is a viable option and does not impact the end user (a sketch of one way to do this follows this list). I have not tested this, but the same should apply to groupings as well (which most of the time cannot simply be removed).
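Here is a minimal sketch of that idea (my addition, not from the original sample): suspend the sort descriptions on the default collection view of the bound collection while bulk-loading, then restore them. The loadItems delegate is a hypothetical stand-in for whatever code adds or removes the items.
// Requires System.Linq, System.ComponentModel and System.Windows.Data.
static void LoadWithoutLiveSorting(System.Collections.IEnumerable itemsSource, Action loadItems)
{
    ICollectionView view = CollectionViewSource.GetDefaultView(itemsSource);
    List<SortDescription> saved = view.SortDescriptions.ToList(); // remember the user's sort
    view.SortDescriptions.Clear();                                // no re-sort on every Add
    loadItems();                                                  // bulk add/remove happens here
    foreach (SortDescription sd in saved)                         // restore the sort in one shot
        view.SortDescriptions.Add(sd);
}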
Simply by placing the DataGrid in another panel, such as a Grid, instead of a StackPanel, you get to see a lot of difference. The WPF DataGrid performs just fine, as long as you keep the viewable region of the grid small.
Shown below is my grid with almost 1 Million data items loaded. The footprint is pretty small compared to the amount of data loaded. This means – either WPF Controls are memory intensive or WPF UI Virtualization is a boon.
Impact of sorting on the DataGrid
- With no sorting applied on the DataGrid, it took almost 20 seconds to load 1 million items into my collection.
- With sorting enabled, loading half of those items itself took over 2 minutes, the full set took over 5 minutes, and I killed the application because it was a pain. This matters because the application keeps the CPU busy with all the sorting that has to keep happening as your data changes. So for every item added, the sort might be triggered, since I am placing items directly into an ObservableCollection.
- Instead, consider sorting on the backend and not in the DataGrid (a quick sketch follows this list).
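A minimal sketch of that last point (my addition): order the generated batch once with LINQ before it ever reaches the ObservableCollection, rather than letting the grid re-sort on every insert. Sorting by LastName is just an arbitrary example key.
// Inside LoadData(), assuming System.Linq is imported:
foreach (var item in DataGenerator.GetData(more).OrderBy(d => d.LastName))
    _data.Add(item);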
If virtualization is properly utilized, I can still scroll the application in spite of the grid being bound to 1 million items.
Using BeginInit() and EndInit() on the DataGrid
I changed the ViewModel's LoadData() so that it calls BeginInit() as it starts loading the data and EndInit() when it is done loading. This has helped quite a lot. Loading 1 million items (without any sort applied on the grid) took only around 8 seconds (compared to the 18 seconds it took earlier). Unfortunately I did not spend enough time with a profiler to show real numbers.
The changed code-behind for the Window is as shown.
public partial class MainWindow : Window
{
private MainWindowViewModel vm;
public MainWindow()
{
InitializeComponent();
vm = new MainWindowViewModel();
this.Loaded += (s, e) => DataContext = vm;
vm.DataChangeStarted += () => dg.BeginInit();
vm.DataChangeCompleted += () => dg.EndInit();
}
}
I also had to add the DataChangeStarted and DataChangeCompleted actions to the ViewModel class. The changed portion of the ViewModel class is shown below.
public event Action DataChangeStarted;
public event Action DataChangeCompleted;
private void LoadData(int more)
{
Action act = () =>
{
//Before the data starts to change, raise the DataChangeStarted event.
if (DataChangeStarted != null) DataChangeStarted();
var sw = Stopwatch.StartNew();
EnableGrid = false;
if (more > 0)
{
foreach (var item in DataGenerator.GetData(more))
_data.Add(item);
}
else
{
int itemsToRemove = -1 * more;
for (var i = 0; i < itemsToRemove; i++)
_data.RemoveAt(_data.Count - 1); // remove the current last item; Count shrinks each iteration
}
EnableGrid = true;
sw.Stop();
Debug.WriteLine(sw.ElapsedMilliseconds);
if (DataChangeCompleted != null) DataChangeCompleted();
};
//act.BeginInvoke(null, null);
_current.BeginInvoke(act, DispatcherPriority.ApplicationIdle);
}
You can try this out and notice the performance difference yourself.
If sorting is applied on the DataGrid, performance still hurts in spite of the above-mentioned trick. The overhead of sorting outweighs the gain we get from calling BeginInit and EndInit. Maybe having 1 million records is not realistic enough.
Thursday, July 15, 2010
Using LINQ Aggregate to solve the previous problem
Name, Value
Sridhar, 1
Ashish, 2
Prasanth, 3
Ashish, 5
Sridhar, 6
Prasanth, 34
.....
I want to aggregate the values for each name. Look at the previous post for some information on other approaches to solving this simple problem.
The LINQ way to do this would be :
[Test]
public void BTest()
{
var nvcs = tl.GroupBy(s => s.Name)
.Select(s => new NameValueCollection
{
{"Name", s.Key},
{"DrawerId", s.Aggregate(new StringBuilder(),
(seed, g) => seed.AppendFormat("{0};",g.DrawerId)).ToString()}
});
//foreach (var nvc in nvcs)
// Console.WriteLine(nvc["Name"] + " : " + nvc["DrawerId"]);
Assert.AreEqual(4, nvcs.Count());
}
Note that I am generating a list of NameValueCollection objects here; that choice is not of significance. If you compare it with the previous implementation that uses a dictionary or lists, this solution appears more concise, and those who already know LINQ should find it really simple.
- All I would like you to take away from this post is that the IEnumerable.Aggregate() method is a great method that is not often mentioned. We often accumulate some value over a collection of items, and the Aggregate method lets you do just that without all the extra loops and seed variables that you would otherwise have to track. A tiny standalone example follows.
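For a standalone feel of Aggregate with a seed, here is a small example of my own (not from the post): concatenating the DrawerIds of a single name from the same tl list without writing a loop or tracking a seed variable by hand.
// Requires System.Linq and System.Text.
string sridharDrawers = tl.Where(u => u.Name == "Sridhar")
    .Aggregate(new StringBuilder(), (sb, u) => sb.AppendFormat("{0};", u.DrawerId))
    .ToString();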
Algorithms, performance and getting burnt
This post is about me starting to solve a small but interesting problem with different approaches and ending up breaking my head over why an algorithm with supposedly O(n) complexity was 4 times slower than the O(n^2) one.
So here's the issue. I have the following data :
Name,Value
Sridhar,1
Ashish,2
Prasanth,3
Sridhar,4
Ashish,5
Sridhar,8
and so on .. I hope you get the idea.
Now what I would like to do is to print the following output.
Sridhar : 1;4;8;....
Ashish : 2;4;.....
Prasanth: 3;......
Note that it does not matter what the values are; I am giving this data just for the example. Shown below is my setup, which is used by my implementations (I am demoing it as a test).
private Stopwatch sw;
[SetUp]
public void SetUp()
{
GC.GetTotalMemory(true); // I don't know why I did this!
tl = new List<Ud>(10000);
var names = new[] { "Krishna", "Ashish", "Sridhar", "Prasanth" };
foreach (var name in names)
for (var i = 0; i < 2500; i++)
tl.Add(new Ud { Name = name, DrawerId = i.ToString() });
tl.OrderBy(s => s.DrawerId);
sw = Stopwatch.StartNew();
}
[TearDown]
public void TearDown()
{
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
sw = null;
}
public class Ud
{
public string Name { get; set; }
public string DrawerId { get; set; }
}
private List<Ud> tl;
The above code is self-explanatory. I basically create a lot of Ud objects to generate the data that I presented earlier. Shown below is the most straightforward way to do it. It has two nested for-loops, which makes the complexity O(n^2).
[Test]
public void BasicImplementation()
{
var nvcs = new List<NameValueCollection>();
var list = new List<string>();
foreach (var item in tl)
{
if (list.Contains(item.Name)) continue;
string val = string.Empty;
foreach (var item2 in tl)
{
if (item2.Name == item.Name)
val += item2.DrawerId + ";";
}
nvcs.Add(new NameValueCollection { { "Name", item.Name }, { "DrawerId", val } });
list.Add(item.Name);
}
//foreach (var nvc in nvcs)
// Console.WriteLine(nvc["Name"] + " : " + nvc["DrawerId"]);
Assert.AreEqual(4, nvcs.Count);
}
Now I went ahead and added another implementation which gives the same result but instead makes use of a dictionary to track the string that we build for each name in the list of objects. Instinctively, it appears that the dictionary method should be way faster than the one mentioned above. Let's look at that code.
[Test]
public void ADictionary()
{
var vals = new Dictionary<string, string>();
foreach (var item in tl)
{
if (!vals.ContainsKey(item.Name))
vals[item.Name] = item.DrawerId;
else
vals[item.Name] = vals[item.Name] + item.DrawerId + ";";
}
Assert.AreEqual(4, vals.Values.Count);
}
When I ran these two tests, I did not notice any performance gain with the above O(n) implementation; in fact it was three times slower. So why was it slower? Look at the setup: it has GC.GetTotalMemory(true), which forced a full garbage collection, and its time was accounted against the dictionary test as well, since on the second run (when the dictionary test was executing) there were a lot of strings to clean up. So why did I put it there in the first place? The answer is "I was not thinking straight". Never ever use the GC class in your code. It is a bad-bad-bad practice.
So I removed the GC call and reran the tests. Yet I still did not see any performance gain. WHY?? I took a lot of time trying to diagnose why this was happening and eventually gave up on manual inspection. I downloaded the trial version of dotTrace Performance 4.0 (which is a freaking awesome tool) and profiled both tests. The culprit was the strings. If you look at the code, we generate a lot of strings whose "Concat" operation was so time consuming that it dominated the gain we obtained from the O(n) algorithm.
So the lesson here is: be watchful of the strings that are generated when your code executes, otherwise you will get burned. It does not matter how small the string concatenation may seem; in cases like the above it piles up and screws up your clever algorithm. All I did was change the tests to use StringBuilder instead of strings.
- Do not use GC calls in your code, especially those which force a collection.
- Use a profiler to accurately capture performance information for specific methods or your program. Stopwatch, timers, etc. are not good enough and are a waste of time.
- Be aware of the impact of string operations. Use StringBuilder wherever possible, and String.Format() in other, simpler cases. (A StringBuilder-based variant of the dictionary test is sketched right after this list.)
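As an illustration of that last point, here is a sketch (my addition) of the dictionary approach rewritten to accumulate into one StringBuilder per name, so no intermediate strings are created on each append:
[Test]
public void ADictionaryWithStringBuilder()
{
    var vals = new Dictionary<string, StringBuilder>();
    foreach (var item in tl)
    {
        StringBuilder sb;
        if (!vals.TryGetValue(item.Name, out sb))
            vals[item.Name] = sb = new StringBuilder();
        sb.AppendFormat("{0};", item.DrawerId); // appends in place, no throw-away strings
    }
    Assert.AreEqual(4, vals.Count);
}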
I will continue in the next post with some code that shows you how to approach the problem I initially started with using LINQ and how simple things would appear.
Sunday, July 11, 2010
Issues with SyntaxHighlighter on my blog
Wednesday, May 05, 2010
WCF Security – the way I wanted to learn
For intranet applications where the users can be authenticated against Active Directory using Windows credentials, setting up security for a WCF service might not be all that difficult. It might not be difficult even to set up a WCF service hosted by IIS and make it use the ASP.NET roles/providers. But what I wanted was to come up with a series of steps that lets me secure a WCF service for internet-like applications: the scenario where the client application provides a UserName/Password login control and, from then on, authenticated users are able to work with the service. It appears there would have been thousands of implementations on this subject already.
To go back a little, this is what I want
- I have a client application which has a login control where the user enters a username and password. Without a proper username/password combination, service communication going forward should not be allowed. Remember Forms Authentication in ASP.NET? Something similar to that.
- I do not want to tie my service strongly to the host, meaning I do not want to make use of the ASP.NET membership provider model, even though it is relatively easy to do so. So I have a console host program that hosts the WCF service.
- For every call to an OperationContract, I do not want to read the message headers or add extra parameters to check the username and password. I don't want specific logic within each operation that handles this check.
- I want the operations to be limited to users with certain kinds of "roles". Basically, I have a set of operations that only users of a role "X" should be able to perform, whereas there are some other operations for users with other roles.
- I don't want my communication channel to be open, and I want to prevent users from sniffing the traffic to see what is going on.
To summarize these requirements,
- I want a secure communication between client and server.
- I want to restrict access to the service unless the client sends in valid username/password.
- I want to restrict access to operations based on the roles of the calling user.
- I don’t want to deal with Windows Authentication at this moment, since I have plans to host my service on the internet in which case WindowsIdentity is not really preferred.
In this post, I would like to show the way I achieved these goals. Note that I am not qualified enough to make strong statements or give a deep explanation of how the security works. The intention of this post is to assist developers like me who have little knowledge of WCF security but do understand how security works in general. I recommend you read the MSDN documentation for the classes and terms I throw around here and there.
While the source code is available for download here: http://drop.io/yskic3h, in this post I simply mention the steps that I used to achieve each of the goals mentioned above.
1. Secure Communication Channel
I used wsHttpBinding as the binding of my choice. The wsHttpBinding by default employs Windows security. We have to change that to make use of Message security with "UserName" as the clientCredentialType. All this is configured as a binding configuration.
<wsHttpBinding>
<binding name="secureBinding">
<!-- the security would be applied at Message level -->
<security mode="Message">
<message clientCredentialType="UserName"/>
</security>
</binding>
</wsHttpBinding>
Now this bindingConfiguration has to be set on the endpoint as shown
<services>
<service name="WcfService.SecureService" behaviorConfiguration="secureBehavior">
<!--
notice the bindingConfiguration, we are applying secureBinding that
was defined in the bindings section.
-->
<endpoint address="secureService"
binding="wsHttpBinding"
bindingConfiguration="secureBinding"
contract="WcfService.ISecureService"/>
<host>
<baseAddresses>
<add baseAddress="http://truebuddi:8080/" />
</baseAddresses>
</host>
<endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
</service>
</services>
Now that the server expects a username and password, we want a custom validator which checks this username and password combination against our custom repository of users. To do that, we have to configure the service behavior this time. So the binding ensures credentials are being passed, and the service behavior validates them! ;)
<serviceBehaviors>
<behavior name="secureBehavior">
<serviceMetadata httpGetEnabled="true" />
<serviceDebug includeExceptionDetailInFaults="true" />
<serviceCredentials>
<serviceCertificate
findValue="wcfSecureService"
storeLocation="LocalMachine"
storeName="My"
x509FindType="FindBySubjectName" />
<!--
Now in secureBinding (see in bindings section),
we set the Message security to use "UserName"
as ClientCredentialType. So we would like to
use a custom username password validator.
Here we specify that our custom validator should be used.
-->
<userNameAuthentication
userNamePasswordValidationMode="Custom"
customUserNamePasswordValidatorType="WcfService.CustomUserNamePasswordValidator, WcfService" />
</serviceCredentials>
<!--
The Custom Authorization policy is what used to verify the roles.
For a Role specified in the PrincipalPermission attribute,
IsInRole() method in the Principal that was set from the
CustomAuthorizationPolicy.Evaluate would be invoked.
-->
<serviceAuthorization principalPermissionMode='Custom'>
<authorizationPolicies>
<add policyType='WcfService.CustomAuthorizationPolicy, WcfService' />
</authorizationPolicies>
</serviceAuthorization>
</behavior>
</serviceBehaviors>
In the above configuration, the serviceCredentials\userNameAuthentication element specifies that the username/password are to be validated using a custom validator type.
That part is all configured. While the username and password authenticate the client on the service, I think they do not by themselves do anything to the communication channel.
To make the channel secure, we make use of certificates. In order to do this, the following steps are required to be done on the development machine so that the sample gets working :
- Using the makecert tool (you can run it from the Visual Studio Command Prompt), create and register the certificate so that it is suitable for key exchange. Note that if you follow the MSDN article on creating certificates using makecert, it does not tell you about enabling the certificate for key exchange. The command that worked for me is:
makecert.exe -sr LocalMachine -ss MY -a sha1 -n CN="wcfSecureService" -sky exchange -pe -r wcfSecureService.cer
- We specify the same certificate to be used in the service configuration file using the serviceCredentials\serviceCertificate element. See the configuration snippet shown previously. It basically says "find the certificate by subject name, where the subject name is 'wcfSecureService', in the Personal certificate store on the local machine". For all this to work, note that an HTTPS base address should be used.
- While the first two steps take care of the certificate on the server, the client should have some knowledge of the certificate's existence (basically, the client should know the public key with which the messages will be encrypted). We specify that in the endpoint\identity section of the client configuration [see below]. The encodedValue can be obtained by adding a service reference from Visual Studio, which generates a load of configuration on the client; just save the encodedValue and revamp your configuration file.
<client>
<endpoint address="http://truebuddi:8080/secureService" binding="wsHttpBinding"
bindingConfiguration="secureWsHttpBinding"
behaviorConfiguration="ignoreCert"
contract="SecurityDemo.ISecureService">
<identity>
<!-- Don't panic this key is wrong ;) for the sake of this post-->
<certificate encodedValue="AwAAAAEAAAAUAAAAee8O3PpkfSCfjaa3mDmkK+HLb4QgAAAAAQAAAAcCAAAwggIDMIIBcKADAgECAhDgA4A6S0Z/j0d3IFg04e9gMAkGBSsOAwIdBQAwGzEZMBcGA1UEAxMQd2NmU2VjdXJlU2VydmljZTAeFw0xMDA1MDUyMzQ4NTZaFw0zOTEyMzEyMzU5NTlaMBsxGTAXBgNVBAMTEHdjZlNlY3VyZVNlcnZpY2UwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBALF7OJsZ6AV5yqSSQyne9j+xwdRLDRoVMleYg0vGvB7W7Bk5zBNbSDCbb+spJR3ykayDoZYpykyY8Q7qzvPuUPdHu7SkMVZ9Ng8B8yAq0zrD8sJwnaqTEY4a8mj8Dt86Yr0wK31aF4VSDRZaK+XDyFd5hWU8Eya+bohhixndMYwNAgMBAAGjUDBOMEwGA1UdAQRFMEOAEJRtYMFDVIgPHFrIf0LU5e+hHTAbMRkwFwYDVQQDExB3Y2ZTZWN1cmVTZXJ2aWNlghDgA4A6S0Z/j0d3IFg04e9gMAkGBSsOAwIdBQADgYEApQ+Hy6e4hV5rKRn93IMcEL3tW2tUYcj/oifGbEPRX329s3cc8QH6jYaNN8cgS5RN+6QffrkvupMSUauGsWia20WHTRI8lyb+1gvvX4NpTxZE6+sZkvIu6R/qIsC6V9pbRCHm3HRFnAoMNZmPTr5mJvzwAQZzOdXMFq0OwakJKEw=" />
</identity>
</endpoint>
</client>
You can also look at this link to get the public key in any case.
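If you prefer to poke at the certificate from code, here is a small sketch (my addition) that does the same lookup the serviceCertificate element describes declaratively: find the certificate by subject name in the LocalMachine\Personal store and read its public key. For the identity encodedValue itself, copying it out of the Visual Studio generated client config, as described above, remains the easier route.
// Requires System and System.Security.Cryptography.X509Certificates.
var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2Collection matches = store.Certificates.Find(
    X509FindType.FindBySubjectName, "wcfSecureService", false); // false: also match self-signed test certs
store.Close();
if (matches.Count > 0)
    Console.WriteLine(Convert.ToBase64String(matches[0].GetPublicKey())); // inspect the public key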
For testing purposes, you should also add a behaviorConfiguration on the client's endpoint such that certificates are not validated; once you deploy, you can remove this behavior.
<behaviors>
<endpointBehaviors>
<!-- ignore cerificates validation for testing purposes. -->
<behavior name="ignoreCert">
<clientCredentials>
<serviceCertificate>
<authentication certificateValidationMode="None" />
</serviceCertificate>
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>
On the client config, you should also define a similar wsHttpBinding, but with a few other options added. See the snippet and compare it with the binding snippet shown earlier for the server.
<bindings>
<wsHttpBinding>
<binding name="secureWsHttpBinding">
<security mode="Message">
<message clientCredentialType="UserName"
negotiateServiceCredential="true"
establishSecurityContext="true"/>
</security>
</binding>
</wsHttpBinding>
</bindings>
With this, the communication channel is secure. You might have some issues with certificates, but you should be able to use the exception messages to bing for answers online in the forums. The only other part left on the client end is to make sure that the client proxy is set up with the username and password. Code for the full client is shown below.
SecureServiceClient client = new SecureServiceClient();
client.ClientCredentials.UserName.UserName = "Krishna";
client.ClientCredentials.UserName.Password = "test";
User test = client.Login();
client.SafeOperationByAdmin();
2. UserName and Password Custom Validation
Implement a type that derives from the UserNamePasswordValidator class. You will have to reference System.IdentityModel.dll, and if you remember, we set the custom validator in the service behavior in the service configuration file. While the code shown below does not talk to a DB, it should still serve as a good example of custom username and password validation. Note that this Validate() method gets called for every call made to the service.
public class CustomUserNamePasswordValidator : UserNamePasswordValidator
{
public override void Validate(string userName, string password)
{
Console.WriteLine("Username validation started");
if (userName == "Krishna" && password == "test")
return;
throw new InvalidCredentialException("Invalid credentials passed to the service");
}
}
3. Restriction of Operations using Roles
The operations can be restricted to users of certain roles by applying a PrincipalPermission attribute on the operation [see below]. The current principal is checked to see if it is in the specified role; otherwise the operation is not allowed to execute. Now, how do we set this principal to something? To do this, we need a CustomPrincipal, which should implement IPrincipal. This principal implementation holds an IIdentity, which can be a WindowsIdentity for Windows authentication or a GenericIdentity for other scenarios. Now, this CustomPrincipal should be created and applied somewhere, right? This is where IAuthorizationPolicy comes into play. We should have a custom authorization policy whose Evaluate method takes care of fetching the identity and passing it to a newly created custom principal. This custom principal has to be set as the current principal. All three code snippets (the PrincipalPermission attribute on the operation, CustomPrincipal and CustomAuthorizationPolicy) are shown below.
/// <summary>
/// This authorization policy is set on the service behavior using the serviceAuthorization element.
/// </summary>
public class CustomAuthorizationPolicy : IAuthorizationPolicy
{
public bool Evaluate(EvaluationContext evaluationContext, ref object state)
{
IIdentity client = (IIdentity)(evaluationContext.Properties["Identities"] as IList)[0];
// set the custom principal
evaluationContext.Properties["Principal"] = new CustomPrincipal(client);
return true;
}
private IIdentity GetClientIdentity(EvaluationContext evaluationContext)
{
return null;
}
public System.IdentityModel.Claims.ClaimSet Issuer
{
get { throw new NotImplementedException(); }
}
public string Id
{
get { throw new NotImplementedException(); }
}
}
public class CustomPrincipal : IPrincipal
{
private IIdentity identity;
public CustomPrincipal(IIdentity identity)
{
this.identity = identity;
}
public IIdentity Identity
{
get
{
return identity;
}
}
public bool IsInRole(string role)
{
return true;
}
}
///in the WCF Service implementation
[PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
public void SafeOperationByAdmin()
{
///more code
}
The newly created Authorization Policy should be configured inside the service configuration file in the serviceBehavior\serviceAuthorization as shown below.
<!--
The Custom Authorization policy is what used to verify the roles.
For a Role specified in the PrincipalPermission attribute,
IsInRole() method in the Principal that was set from the
CustomAuthorizationPolicy.Evaluate would be invoked.
-->
<serviceAuthorization principalPermissionMode='Custom'>
<authorizationPolicies>
<add policyType='WcfService.CustomAuthorizationPolicy, WcfService' />
</authorizationPolicies>
</serviceAuthorization>
To summarize, the following are the steps that should be performed to get security working in WCF (Message-level security).
- Create and register a certificate, and configure the service to use it. This is done in the serviceCredentials\serviceCertificate element of the service behavior.
- Configure the service to make use of Message-level security with UserName credentials. This is done in a binding configuration on the server, and that binding configuration is then applied on the endpoint.
- Configure the client with the encodedValue, which is the public key, for its communication. This is done in the identity section of the client's endpoint. For testing purposes you can make the client skip certificate validation; this is done in the endpoint behaviors.
- Configure the client's binding to make use of Message-level security with UserName.
- The client's code should specify the username and password. To validate this information, register a custom UserNamePasswordValidator in the serviceBehavior in the server configuration.
- For roles, create a custom principal and set it using a custom authorization policy. This authorization policy should be registered in the serviceAuthorization element of the serviceBehavior in the server configuration file. (A code-based equivalent of these server-side settings is sketched below.)
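As a footnote, the server-side pieces of this list can also be done in code instead of configuration. The sketch below is my own illustration (not part of the downloadable sample); it assumes the SecureService, CustomUserNamePasswordValidator and CustomAuthorizationPolicy types from this post, and it needs System.ServiceModel, System.ServiceModel.Description, System.ServiceModel.Security, System.IdentityModel.Policy, System.Security.Cryptography.X509Certificates and System.Collections.ObjectModel.
var host = new ServiceHost(typeof(SecureService), new Uri("http://truebuddi:8080/"));
// Certificate used for message security (the serviceCredentials\serviceCertificate element).
host.Credentials.ServiceCertificate.SetCertificate(
    StoreLocation.LocalMachine, StoreName.My,
    X509FindType.FindBySubjectName, "wcfSecureService");
// Custom username/password validation (the userNameAuthentication element).
host.Credentials.UserNameAuthentication.UserNamePasswordValidationMode =
    UserNamePasswordValidationMode.Custom;
host.Credentials.UserNameAuthentication.CustomUserNamePasswordValidator =
    new CustomUserNamePasswordValidator();
// Custom authorization policy for the role checks (the serviceAuthorization element).
host.Authorization.PrincipalPermissionMode = PrincipalPermissionMode.Custom;
host.Authorization.ExternalAuthorizationPolicies =
    new ReadOnlyCollection<IAuthorizationPolicy>(
        new IAuthorizationPolicy[] { new CustomAuthorizationPolicy() });
host.Open();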
Again, the code is available for download at http://drop.io/yskic3h. Sometime in the future, I will try to upload the code to CodePlex or put it on Windows Live SkyDrive.
Thursday, April 15, 2010
Running Moles using NUnit Console from Visual Studio
Create an external tool in Visual Studio as shown below
Command : C:\Program Files (x86)\Microsoft Moles\bin\moles.runner.exe
Arguments: $(BinDir)/$(TargetName).dll /runner:"c:\Development Tools\NUnit 2.5.2\bin\net-2.0\nunit-console.exe" /x86
Initial Directory : $(ProjectDir)
Now you can use Moles from within Visual Studio. :)
Thursday, March 18, 2010
Introducing Comfy.Couch, a CouchDB API for .NET
WARNING : The work is still in progress.
For the past few days, I have been working on coming up with a nice little .NET library that one could use with CouchDB. My goal was to stick with the CouchDB API documentation as much as I could, so that it would be easier to work with the driver. The library has dependencies on the Log4Net and Json.NET libraries.
I tried to be over-smart when picking that name – Comfy.Couch (I meant a comfortable couch to use :)). At the moment, the database API is functionally complete, but you cannot really use it unless you have the Document API which would be coming soon. The reference is at http://wiki.apache.org/couchdb/HTTP_database_API
I made sure I wrote unit tests to cover most of my code, but I did miss some pieces here and there.
So I ended up having 94% code coverage so far, and I will be adding more tests soon.
My idea is to first get the complete API in place and then worry about where it has to be tuned. So far, I have not focused on any kind of tuning, which I think could be done. I have thrown in a few asynchronous requests here and there but they are yet to be tested.
1. Creating a database.
CouchDatabaseOperations.Create("databasename"); //validation on database names is pending.
2. Getting database information from the server
ICouchDatabase db = CouchDatabaseOperations.Get("databasename");
CouchDatabaseInfo info = db.Metadata;
3. Deleting the database
//Delete operation uses MaxRetries as 2 since most of the times the first delete request fails on Windows.
CouchDatabaseOperations.Delete("databasename");
While I have successfully completed the rest of the API described in the database API reference on the CouchDB website, I have only shown how the library can be used here. The unit tests might be more helpful for you to get started, if you are interested.
Next steps:
- Upload the sourcecode to comfy.codeplex.com
- Improve logging information
- Work on the Document API
- Work on the View API
- Work on the Bulk Document API
- Run some performance tests and identify the bottlenecks in the driver.
- Sample Scrum Management tool in Silverlight 4 that uses CouchDB for data backend.
Wednesday, February 10, 2010
Using .NET 3.5 (CLR 2.0) DLL inside Visual Studio 2010 for a .NET 4.0 Project/Application
When you first create a .NET 4.0 project inside VS 2010 and add a reference to a .NET 3.5 DLL (say log4net or Rhino.Mocks), the project tends not to build. You can get errors as shown below.
It might appear to be a CLR version issue – being unable to run a 2.0 DLL inside a 4.0 AppDomain. And I thought that is what it was, until now.
You can get over this :)
By default, VS 2010 creates projects with the Target Framework property set to ".NET Framework 4 Client Profile". You have to change that to ".NET Framework 4" by going to Project Properties –> Application –> Target Framework. And then everything begins to compile.
So I guess, one has to be aware of this when migrating old solutions from Visual Studio 2008 to Visual Studio 2010.
Proof that it works :) Notice the .NET Framework 4.0 features as well as Log4Net and Rhino.Mocks all used in the same example. (It is a stupid example, but the intention was to show that it works.)
If for some reason, it does not work for you, try to add
<?xml version ="1.0"?>
<configuration>
<startup useLegacyV2RuntimeActivationPolicy="true">
<supportedRuntime version="v2.0.50727" />
<supportedRuntime version="v4.0.21006"/>
</startup>
</configuration>
to your csc.exe.config/msbuild.exe.config/VCSExpress.exe.config/devenv.exe.config …
I initially thought it had something to do with not enabling side-by-side execution of the compiler, but it turns out that is not the case. For your information, I added the supported runtime as .NET 2.0 but then commented it out, to be sure that it is ONLY the Target Framework that has to be changed.
Thursday, February 04, 2010
Breaking my head with message passing and Scala Actors
I recently started working on a personal project on which a friend of mine is helping. After some discussion, we thought Scala might be a good bet as the development platform of our choice (and we would integrate Spring into Scala). Anyway, for that I got into Scala actors. Actors appear easy to code – all you have to do is create an actor (if you use the Actor.actor construct, it starts automatically; otherwise you have to invoke start) and from somewhere keep sending messages to the actor.
So let us first define the message that I want to send, using case classes. Refer to the documentation on case classes.
case class Message(someData : String)
Now let us create our component, which is an Actor; for each string passed in the constructor, we append the message (some stupid behavior, but it serves the example here).
class MyActor(toInform: Array[String]) extends scala.actors.Actor{
private val noticeTo = toInform
def act(){
loop{
react{
case Message(r) =>{
for(item<-noticeTo)
println(item+"_"+r)
}
}
}
}
}
Now let us test the above actor with the following code.
val actor = new MyActor(Array("Krishna"))
actor ! "Welcome"
With all the excitement in the world, you run the test and it just hangs there; nothing happens :). Then you suddenly realize "YOU FORGOT TO START THE ACTOR". Damn! It was the missing actor.start that was causing the issue. So you cleverly add the actor.start statement. ("you" meaning "me")
val actor = new MyActor(Array("Krishna"))
actor.start //do not forget this :)
actor ! "Welcome"
You run the test again, with no success. Then you realize that your actor expects a Message(r) whereas you are sending a String. So you change that.
val actor = new MyActor(Array("Krishna"))
actor.start //do not forget this :)
actor ! Message("Welcome")
//actor.exit
Also notice the commented-out explicit exit call outside the actor. It is not advisable to do that. Remember that "!" is a send-and-continue kind of call (an asynchronous call), so the actor might be killed before it is even scheduled to work on the message. Instead, it is advised that you define an "exit"-like message handled within a case in the actor. Now you run the test and it works :) Hurray!!!!
Later, a smart-ass like you (this time it really is you :)) decides to write some very bad code (OK, "you" is not really you; for now let's assume the actor code I wrote is simply perfect), like the code shown below.
object Launcher extends Application{
val actor = new MyActor(null)
actor start() //do not forget this
actor ! Message("Welcome")
actor ! "E" //this is the exit message.
}
You repeat the test again :) and this time it does nothing; it appears to be blocked :). Again... damn... so how would we know what the issue is? You spend your day off trying to figure out what is wrong with this simple code... then after spending 10 hours trying all the magic tricks (apparently I know too many magic tricks that never work, hence the time), you realize the actor must be dead (the NetBeans threads view shows which threads are running, and you never see any FJ threads). So the actor died! So what can kill an actor? A call to exit(), or an exception!!!! There it is... so let us change the actor code so that it can catch the exception.
class MyActor(toInform: Array[String]) extends scala.actors.Actor{
private val noticeTo = toInform
def act(){
loop{
react{
case Message(r) =>{
try{
for(item<-noticeTo)
println(item+"_"+r)
}catch{
case e => e.printStackTrace
}
}
case "E" => exit
}
}
}
}
Run the test again and notice the stacktrace!
java.lang.NullPointerException
at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:68)
at scala.collection.IndexedSeqLike$class.foreach(IndexedSeqLike.scala:86)
at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:20)
at tryscala.MyActor$$anonfun$act$1$$anonfun$apply$1.apply(Launcher.scala:13)
at tryscala.MyActor$$anonfun$act$1$$anonfun$apply$1.apply(Launcher.scala:10)
at scala.actors.Reaction$$anonfun$$init$$1.apply(Reaction.scala:33)
at scala.actors.Reaction$$anonfun$$init$$1.apply(Reaction.scala:29)
at scala.actors.ReactorTask.run(ReactorTask.scala:33)
at scala.actors.scheduler.ForkJoinScheduler$$anon$1.compute(ForkJoinScheduler.scala:111)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:147)
at scala.concurrent.forkjoin.ForkJoinTask.quietlyExec(ForkJoinTask.java:422)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.mainLoop(ForkJoinWorkerThread.java:340)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:325)
BUILD SUCCESSFUL (total time: 7 seconds)
So the fix is :) to pass in at least an empty array instead of sending in a null (use Array.empty).
Clearly, this is not the exact sample that I was running; it was a little more complicated, and being totally new to Scala, I had a hard time figuring out that an exception can kill Scala actors!!! I know I am dumb, but with this post, I want to save you from considering yourself dumb after trying very hard for a few hours.
Tuesday, January 26, 2010
Dependency Injection in Scala and some other stuff
My new-found love is the Java Spring framework. I initially looked at the Spring Framework for .NET and thought it was really good and well designed. But then for a pet project of mine, my friend promised to help me provided the project would be on the JVM. So we decided that we would consider Scala as the language of choice on the JVM. Now, I know how to do DI using the Spring container in Java, so I was wondering if I could do the same in Scala. Turns out, it is not so difficult after all (so far).
For my Scala development environment, I played with Eclipse, IntelliJ IDEA Community Edition and then NetBeans. So far, NetBeans has the best support for Scala in the form of IDE integration. There are some issues with it, but it is not all too bad. One initial issue was that even though the Spring JAR files were on my classpath, the NetBeans editor complained that it could not locate the Spring JARs. The error message was something like "the value springframework is not a member of the package org". It turns out that if you clean and build the project, it builds successfully, but the editor keeps complaining. The NetBeans wiki talked about a "Reset Scala Parser" item on the context menu, but I am so dumb that I still could not find it. So if there is no reset trigger, how about we close and restart the IDE? And it works! For now, since I am using nightly builds (follow the wiki link here: http://wiki.netbeans.org/Scala68v1), I do not mind these minor issues. But let me tell you, I spent a whole evening trying to figure out which IDE has the best support for the latest Scala, and the winner is NetBeans, without any question.
When I created my sample project, I made sure to check the setting in the NetBeans project creation wizard that copies all my libraries into a common folder, so that it would be easier when working as a team. Then I added the following files from the Spring distribution.
Apart from these, I added commons-logging jar too.
The following code shows how you can use Spring within Scala. Let's say I have an interface Sample (interfaces roughly correspond to traits in Scala), and I have a concrete implementation SampleImpl. Follow the code carefully, and with some fundamental understanding of the Spring framework, you should be all set.
/** Main.scala **/
package tryscala
import org.springframework.context.support.FileSystemXmlApplicationContext
object Main {
/**
* @param args the command line arguments
*/
def main(args: Array[String]): Unit = {
val fs = new FileSystemXmlApplicationContext("Configs\\Spring.xml")
val sample = fs.getBean(classOf[Sample]).asInstanceOf[Sample]
println(sample sayHello)
}
}
/** Sample.scala **/
package tryscala
trait Sample {
def sayHello(): String
}
/** SampleImpl.scala **/
package tryscala
class SampleImpl(var firstName:String) extends Sample {
def sayHello() = "Welcome to Scala : " + firstName
}
The spring configuration file (Configs\\Spring.xml) is shown below.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
<bean id="sample" class="tryscala.SampleImpl">
<constructor-arg name="firstName" value="Krishna Vangapandu"/>
</bean>
</beans>
I hope to be able to share more about what we are doing and how Scala affects us, as we move on.
Thursday, January 14, 2010
Rhino Mocks : How to mock read-only properties.
As part of my never-ending quest to do something big, I started working on an application for which I am learning to use Rhino Mocks, and I thought it would make a small and easy example for understanding mocks. I have not particularly followed the TDD approach so far, but anyway, here is my attempt to show how you would use Rhino Mocks to stub/mock (I am yet to understand when to use which... I mean, I know the difference... I just have not developed enough maturity in that aspect).
Let us say I have some interface which is as simple as shown below.
public interface IColumn {
string Name { get; }
Type DataType { get; } // a Type, so the stub below can return Type.GetType(...)
}
Now, when I tried to mock/stub this interface, I had some problems with Rhino Mocks throwing exceptions about how the property should have a getter/setter while I was trying to configure the mock so that whenever the Name property is accessed it returns "Krishna Bhargava", and for DataType it returns "System.String". I tried different mechanisms like Expect.On(col).Call(it.Name).Return("Krishna Bhargava") and defining property behavior (PropertyBehavior()). Finally, after struggling for an hour and looking at various examples online from good people like you and me, I was able to come up with code that can generate a stub for the column defined in this interface. The code snippet shows how you can use Rhino Mocks (which I think is the most convenient framework - I tried Moq (somehow it does not click with me) and NUnit.Mocks (too much groundwork)) to generate stubs for read-only properties! Unfortunately, at the moment, I cannot comment much on the Rhino Mocks classes... I hope the code snippet is easy to understand.
private IColumn MockColumn(string name, string type)
{
IColumn col = MockRepository.GenerateStub<IColumn>();
col.Stub(it => it.DataType).Return(Type.GetType(type));
col.Stub(it => it.Name).Return(name);
return col;
}
you can later use this method as shown below
[Test]
public void ColumnStubNameCanBeSet()
{
IColumn col = MockColumn("name", "System.String");
Assert.AreEqual("name",col.Name);
Assert.AreEqual(typeof(string), col.DataType);
}
/*a little bit more real world usage of a column stub is shown below. */
[Test]
public void ValuesCanBeSetOnARecord()
{
IRecord record = new Record(); //NOT DESCRIBED in this blog, but this is my personal class....
record.SetValue(MockColumn("name", "System.String"), "Krishna Vangapandu");
record.SetValue(MockColumn("age", "System.Int32"), 25);
Assert.AreEqual("Krishna Vangapandu",record.Value("name"));
Assert.AreEqual(25, record.Value("age"));
}
And by the way, Castle Windsor has the best and easiest configuration schema of all the IoC containers I have played with. I personally like the Unity container, but its configuration is a mess and over-engineered. I will try to post an implementation of the Observer pattern where observers and observables are linked with pure configuration - the best use I have made of an IoC container so far!
Saturday, November 21, 2009
Programming in Scala – Part 2/?
In my previous post, we got started with a simple Scala HelloWorld and moved on to writing a bubble sort in Scala. This time, let us look at some differences between a "var" and a "val" in Scala. I hope you all know what "immutable" means – simply put, strings in Java/.NET are immutable: any time you modify a string, a new string object is created; strings cannot be changed in place. Well, in Scala, when you declare a variable with "val", it is immutable. Look at the following Scala code.
object ValVar
{
def main(args: Array[String])
{
val immutableValue = 200
//immutableValue = 20 -> gives compiler error
var mutableValue = 200
mutableValue = 20
println("Immutable : "+immutableValue+"\n Mutable : "+mutableValue)
}
}
The decompiled program, shown in Java, looks like this:
import java.rmi.RemoteException;
import scala.Predef.;
import scala.ScalaObject;
import scala.ScalaObject.class;
import scala.StringBuilder;
import scala.runtime.BoxesRunTime;
public final class ValVar$
implements ScalaObject
{
public static final MODULE$;
static
{
new ();
}
public ValVar$()
{
MODULE$ = this;
}
public void main(String[] args) {
int immutableValue = 200;
int mutableValue = 200;
mutableValue = 20;
Predef..MODULE$.println(new StringBuilder().append("Immutable : ").
append(BoxesRunTime.boxToInteger(immutableValue)).
append("\n Mutable : ").
append(BoxesRunTime.boxToInteger(mutableValue)).toString());
}
public int $tag()
throws RemoteException
{
return ScalaObject.class.$tag(this);
}
}
Clearly, it does not look like the "immutable" variables are declared final in the Java code. So it appears that the Scala compiler does the job of making sure that the val'bles are immutable. For those curious to see what happens if you attempt to change a val'ble, see the screenshot below. It is also interesting to note that the Scala compiler takes care of optimizing our string concatenation to use StringBuilder, just like javac!
So what did we learn? If you wish to keep changing the value of a variable, use "var"; if you want immutable variables, use "val".
Now that we know how to create both variables and val’bles, let us look at some fancy stuff that we could do with lists.
What's your range?
Let us say we need all odd numbers between 20 and 2000 which are divisible by both 5 and 7. If you were like me, you would write the program as shown below.
object RangeAction1
{
def main(args: Array[String])
{
for(i <- 20 to 2000)
{
if( i % 5 == 0 && i % 7 == 0)
println(i)
}
}
}
Can we do any better? This looks too long, now that I have been imagining things and raising my expectations about Scala being so nice.
object RangeAction1
{
def main(args: Array[String])
{
(20 to 2000).filter(i=>i%5==0&&i%7==0).foreach(i=>println(i))
}
}
When we say "20 to 2000", it returns a Range object. Look in the documentation to see what magic we can do with a Range. Similarly, if we were to work with lists, we could do something similar. Now, to add one more cent to the three cents we have covered so far: what if I want the range to start at 20, end before 2000 (exclusive), and increment by 10?
(20 until 2000 by 10).filter{i=> i % 5 == 0 & i % 7 == 0}.foreach{i=> println(i)}
Also, I wanted to be more like a regular programmer and put my closure inside {} instead of (). More fun later!
Friday, November 20, 2009
Scala for dummies like me!
Let's first get started and write a Hello World program.
class HelloWorld
{
def main(args: Array[String]){
println("Krishna Vangapandu - Hello Scala world!");
}
}
When you compile this and execute it, you get a "java.lang.NoSuchMethodException: HelloWorld.main is not static". Well, the mistake that I made was to use "class" – it should be "object". So the hello world would be:
object HelloWorld
{
def main(args: Array[String]){
println("Krishna Vangapandu - Hello Scala world!");
}
}
So what's the difference between a "class" and an "object"? Obviously there is no problem for the compiler; only the runtime blows up! So what does the documentation say about this? Even better, let's use a decompiler to decompile the .class file we obtained and see how the Scala code would look when written in Java. By the way, I am using this decompiler - which I should say is freaking awesome.
"class HelloWorld" decompiled.
import java.rmi.RemoteException;
import scala.Predef.;
import scala.ScalaObject;
import scala.ScalaObject.class;
public class HelloWorld
implements ScalaObject
{
public void main(String[] args)
{
Predef..MODULE$.println("Krishna Vangapandu - Hello Scala world!");
}
public int $tag()
throws RemoteException
{
return ScalaObject.class.$tag(this);
}
}
"object HelloWorld" decompiled.
import java.rmi.RemoteException;
public final class HelloWorld
{
public static final void main(String[] paramArrayOfString)
{
HelloWorld..MODULE$.main(paramArrayOfString);
}
public static final int $tag()
throws RemoteException
{
return HelloWorld..MODULE$.$tag();
}
}
But what the hell is the $tag() method? Well, I looked into the source code, which has a comment on the $tag method that says:
This method is needed for optimizing pattern matching expressions which match on constructors of case classes.
Well, then what is HelloWorld actually doing? It looks to me like it is using HelloWorld$, which was also generated by "scalac". I cannot dig into what is going on here; maybe sometime in the future.
So far, what I understood is that "object" creates a final class, whereas "class" creates a ScalaObject whose methods are all instance methods. So anything declared as "object" can act only as a static-only container.
Let's do a simple bubble sort program in Scala. What should we know to write a bubble sort?
- Assuming we pass the numbers to sort from command line, how do we convert strings to numbers ?
- How do we loop the array?
object BubbleSort
{
def main(ip_args: Array[String]) //we shall get the input numbers to sort into "args"
{
/*
we have a collection of strings, we should get a collection of numbers.
so we use the map which says for each i in the ip_args,
return the value after converting into expression. we get a 1 to 1 returned array.
*/
val args = ip_args.map { i => Integer.parseInt(i)}
/*
Looping : for index j which starts from 0 and ends at args.length-1 (inclusive)
*/
for(j <- 0 to args.length-1)
{
for(i <- 0 to args.length-2-j)
{
if(args(i) > args (i+1))
{
//we do an ascending order sort.
// The swap routine is shown below.
val temp = args(i) //this is how we define variables in scala
args(i) = args(i+1)
args(i+1) = temp
}
}
}
//print all the numbers
for(i <- 0 to args.length-1)
println(args(i))
}
}
I do agree that even without the comments the code does not look as concise as it should be. But right now we are just getting started – we will slowly look at how to write concise code when we think in terms of functional programming (I say this with my past experience of Groovy and C# closures; by functional, 90% of the time I mean closures, which I know is not accurate). I have also observed that when you compile BubbleSort.scala, you end up getting more than one .class file, which I believe is due to the anonymous function (closure) we used.
That's it for this post. See you soon with functional programming using Scala!
Thursday, November 12, 2009
Working with JQuery and list boxes
Code (read comments):
<html>
<head>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js">
</script>
<script type="text/javascript">
/*
when the document is ready (after all the HTML page is loaded, this shall be executed)
we attach the event handlers.
*/
$(document).ready(function(){ attachEventHandlers(); });
function attachEventHandlers(){
/*
In this method we attach the function(){} to the change event on all the <select> items.
The code inside the function(){} shall be executed when item selections are changed on the select box.
*/
$("select").change(function(){
//using "this" to access the current listbox. the jQuery wrapper would be $(this)
var selectedItems = $(":selected",this);
$("#itemsCount").text(selectedItems.length); //set the items selected count.
var toAppendHtml = ""; //lets store the html that we shall put inside the itemsSelected element.
//for each selected item we add a new line with selected item's text and value to the toAppendHtml.
selectedItems.each(function(){
toAppendHtml += $(this).text() +" : "+$(this).val()+"<br/>";
});
//finally put everything in there as html content to itemsSelected.
$("#itemsSelected").html(toAppendHtml);
});
/*
Alternatively you can use ExternalFunction. The external function should have one parameter "e" called the event object.
$("select").click(ExternalFunction);
*/
}
/*
function ExternalFunction(e)
{
//now within this function, the element which raised the event can be accessed using "this", or "e.currentTarget".
//So the statement "this == e.currentTarget" will always be true.
alert($(":selected",this).length);
}
*/
</script>
<style type="text/css">
select{
width: 100px;
height: 200px;
}
</style>
</head>
<body>
<select name="items" id="items" multiple="true">
<option value="1">Item 1</option>
<option value="2">Item 2</option>
<option value="3">Item 3</option>
<option value="4">Item 4</option>
<option value="5">Item 5</option>
<option value="6">Item 6</option>
<option value="7">Item 7</option>
</select>
<p/>
<div>total items selected : <span id="itemsCount">0</span></div>
<span>Selected Items are </span>
<div id="itemsSelected"/>
</body>
</html>
Thursday, November 05, 2009
Problem running Scala
I was just trying to run scala on my machine and failed to do so with an error message “…..\java.exe unexpected at this time.” Look at the screenshot shown below.
Well, the problem was that the environment variables for Java were set up using a JAVA_HOME variable. The JAVA_HOME environment variable was pointing to the JDK directory, and the Path was modified to include the "%JAVA_HOME%\bin" directory.
I removed the JAVA_HOME and then modified the path to specify the complete path to the JDK bin folder and it works now. :)
I might talk more about Scala in the future; the concurrency support in Scala appears to be interesting.
Monday, August 31, 2009
Visual Studio Test System : “Test Run Deployment Issue : The location of the file or directory … is not trusted”
I just came across this issue when trying to run tests from the Visual Studio test system. To resolve it, simply go to the location of the DLLs (in my case, I placed them under a LIB directory inside my solution directory), right-click each of the libraries that came from external sources (like Log4Net, Moq, etc.) and open its Properties. In the Properties window, you should see an “Unblock” button as shown below.
Simply click the “Unblock” button to release the library’s security restriction. Now perform a clean on the solution and rebuild it. Your tests should run without any deployment issues.
Good luck!
Monday, July 27, 2009
ConfigStation for WCF – prototype for minimal configuration based WCF services
To start off, I would like to stress that I am not a WCF expert; if you look around my blog, you will notice me writing about a lot of different things – WPF, the DLR, web development and what not. So what I present here is just something I made recently as part of a bigger project that I plan to release. Still, it is a pretty good start toward what I envision for avoiding configuration hell in WCF services.
The StockTrader sample from Microsoft comes with a great library – Configuration Service 2.04. The library is pretty good and provides wonderful functionality, but its major problem, for me, is the strong dependency on a SQL Server backend. In short, the configuration service maintains a service configuration repository that provides centralized configuration, load balancing and fail-over for WCF-based SOA applications. I have always wanted such a repository to minimize my effort in developing distributed applications with WCF.
I believe WCF should allow a very simple way to develop services and should provide an easier means to configure them. One way to get away from configuration through App.config is to configure in code, but there seems to be a real lack of proper documentation and decent real-world samples explaining code-based WCF configuration. Anyway, I envision hosting a service to be as simple as:
var serviceHost = new AutoConfiguredServiceHost<ServiceImpl>();
With no or minimal configuration, the service host should be clever enough to determine sensible configuration defaults. Similarly, consuming the service should be as simple as:
var client = new RemoteServiceProxy<IService>();
The proxy should be created by querying the repository to figure out which implementation is registered for IService, and then using the configuration obtained to create the proxy.
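To make the goal concrete, here is roughly what such a host and proxy boil down to in plain WCF. This is only a sketch of the idea, not the ConfigStation implementation: the hard-coded binding and address below are exactly the pieces the repository is meant to supply, and ServiceImpl/IService are just the placeholder names used above.
using System;
using System.ServiceModel;

static class PlainWcfSketch
{
    public static void Run()
    {
        //In ConfigStation these two values would come from the central repository.
        var binding = new NetTcpBinding();
        var address = new Uri("net.tcp://localhost:9989/ServiceImpl");

        //Hosting side: open a ServiceHost and add a single endpoint for the contract.
        var host = new ServiceHost(typeof(ServiceImpl), address);
        host.AddServiceEndpoint(typeof(IService), binding, string.Empty);
        host.Open();

        //Client side: build a channel for the same contract from the same details.
        var factory = new ChannelFactory<IService>(binding, new EndpointAddress(address));
        IService client = factory.CreateChannel();
        //...invoke operations on client, then close the factory and the host.
    }
}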
With these goals in mind, ConfigStation has been developed, and here is what I have so far.
Sample: a test service that hosts the ConfigStation repository as well as a sample WCF service implementation. The hosting code is shown below.
class Program
{
static void Main(string[] args)
{
using (var configHost = new ConfigStationHost())
{
var hostFacade = new ServiceHostFacade<TestImpl>();
var host = hostFacade.Host;
host.Open();
Console.WriteLine("Test service launched.Enter to Stop");
Console.ReadLine();
//host.Close(); //Close() would shut the host down gracefully
host.Abort(); //Abort() tears the host down immediately, which is fine for this demo
}
}
}
If you look at the using block, I am creating an instance of ConfigStationHost – which actually hosts the ConfigStation, a repository WCF service. At the moment this service requires App.config-based configuration, a requirement that can easily be removed and that will be my next enhancement to the project. In this example I am hosting the ConfigStation within the same process as my actual WCF service, but that is not required at all. You can host the ConfigStation in a totally separate program – all you have to do is create the instance of ConfigStationHost (see the required configuration below) and dispose it when you are done.
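For instance, a separate host process for the repository alone could be as small as the sketch below (this assumes, as above, that ConfigStationHost picks up the <service> configuration from its own App.config; the class name is made up for the example):
using System;

class RepositoryHostProgram
{
    static void Main()
    {
        //Hosts only the ConfigStation repository; the actual WCF services run elsewhere.
        using (var configHost = new ConfigStationHost())
        {
            Console.WriteLine("ConfigStation repository running. Enter to stop.");
            Console.ReadLine();
        } //disposing the host shuts the repository down
    }
}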
The configuration for the Test Service is shown below.
<configuration>
<appSettings>
<add value="net.tcp" key="ServiceScheme" /> <!-- you can set this to http as well or even msmq ...-->
<add value="9989" key="ServicePort" />
</appSettings>
<system.serviceModel>
<services>
<service name="ConfigStation.Repository">
<endpoint contract="ConfigStation.ServiceContracts.IRepository" binding="wsHttpBinding" address="http://localhost:8731/ConfigStation/Repository" />
</service>
</services>
<!-- This demo acts as a client to ConfigStation, so it is all good-->
<client>
<endpoint name="ConfigStation" contract="ConfigStation.ServiceContracts.IRepository" binding="wsHttpBinding" address="http://localhost:8731/ConfigStation/Repository">
</endpoint>
</client>
</system.serviceModel>
</configuration>
In the configuration shown, the <service> element is used to host the ConfigStation repository in the current process. The <client> section is the WCF client configuration used to reach that ConfigStation service. The TestService interacts with the ConfigStation over WCF, and the ConfigStation is treated as a WCF service hosted somewhere remote. So, if we were to host the ConfigStation separately, the only configuration required there would be the <service> element; the TestService would then need only the appSettings and the <client> configuration – which is pretty easy to set and even easier for me to remove later.
Now, the ServiceScheme dictates what communication protocol (binding, in WCF terms) is used when exposing the service and what binding the clients consuming this service use. The ServicePort tells what port the service should be hosted on. Note that WCF allows hosting multiple services on the same port as long as their addresses are different (except for MSMQ, I think).
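Just to illustrate the idea (this is not the actual ConfigStation code, and the helper names are made up), the scheme-to-binding mapping could be as simple as:
using System;
using System.Configuration;
using System.ServiceModel;
using System.ServiceModel.Channels;

static class SchemeMapping
{
    //Maps the ServiceScheme appSetting to a WCF binding.
    public static Binding BindingFor(string scheme)
    {
        switch (scheme)
        {
            case "net.tcp": return new NetTcpBinding();
            case "http": return new WSHttpBinding();
            case "net.msmq": return new NetMsmqBinding();
            default: throw new NotSupportedException("Unknown scheme: " + scheme);
        }
    }

    //Builds a base address such as net.tcp://localhost:9989/TestImpl.
    public static Uri AddressFor(string scheme, int port, string serviceName)
    {
        return new Uri(string.Format("{0}://localhost:{1}/{2}", scheme, port, serviceName));
    }
}

//Usage, reading the appSettings shown above:
//var scheme = ConfigurationManager.AppSettings["ServiceScheme"];
//var port = int.Parse(ConfigurationManager.AppSettings["ServicePort"]);
//var binding = SchemeMapping.BindingFor(scheme);
//var address = SchemeMapping.AddressFor(scheme, port, "TestImpl");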
The test client that consumes the TestService is a different program, whose configuration is shown below. The program contains a WCF client to the TestImpl service, whose details are obtained from the ConfigStation. Thus, the client process only requires configuration pointing to the ConfigStation.
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.serviceModel>
<client>
<endpoint address="http://localhost:8731/ConfigStation/Repository" name="ConfigStation"
binding="wsHttpBinding"
contract="ConfigStation.ServiceContracts.IRepository">
</endpoint>
</client>
</system.serviceModel>
</configuration>
As you can see, the above is the only configuration required, and it will be eliminated once I enhance the ConfigStation. The actual code that accesses the service is shown below.
namespace Test.Client
{
class Program
{
static void Main(string[] args)
{
var cf = new ClientProxyFacade<ITest>();
ITest test = cf.Interface;
var td = test.SayHello();
Console.WriteLine("Remote Server returned : " + td.Message);
}
}
}
You just create a ClientProxyFacade of ITest, the service contract implemented by TestImpl. The proxy is obtained via the “Interface” property, and then you can execute any method exposed by the service contract.
The library is available on CodePlex, making this my first public release of open source software of any kind. I would like to stress that the library uses the amazing ServiceModelEx library from Juval Lowy of IDesign. I actually tried to contact Juval to ask whether or not I could use his library, but I guess he is too busy, so I took the liberty of publishing the project, having seen a WCF project on Google Code do the same. In case I breach any license, please go easy on me and let me know so that I can fix my mistake.
I appreciate any positive feedback and any expert advice on the library; I am glad to learn and make any changes requested. :) I hope this helps a few of us devs who like to play with convention-based WCF programming. In my next post I will go into the details of the actual implementation: how the library auto-generates the service configuration and how rough the current repository implementation is.
Wednesday, July 22, 2009
Powershell Script to delete bin/obj folders
Get-ChildItem C:\Temp\MyProjectSolutionFolder -Recurse -Include *.exe,*.dll,*.pdb,*.exe.config,bin,obj | Where-Object {$_ -notmatch 'Libraries'} | Remove-Item -Recurse
This script saves me a lot of time and does not require me to install any of the tools that do the same thing but add registry entries for an Explorer context-menu item.
NOTE: Please verify thoroughly before blindly running the script. Add the -WhatIf flag to Remove-Item to simulate the deletion instead of actually performing it.
Saturday, June 20, 2009
Bing: not a search engine, but a decision engine??
I am an active contributor to the MSDN WPF forums, and most of the time I get to the forums through a search engine. Being an ardent supporter of Microsoft, I made Bing the default search engine on my home desktop. So I searched for “MSDN Social”, and look what one of the sponsored results is!
Seriously, we are in the 21st century. And Bing “decided” for me that I want to visit BAD GIRLS IN MY AREA! when all I wanted was to visit the MSDN Social forums. Bing, get a life.