When we write web services, we need ways to test them.
Instead of writing an elaborate harness or using curl or wget or some other tool, I discovered that Fiddler can be used to invoke SOAP services, with either a GET or a POST. Fiddler is a great, free tool you might already be using to monitor network traffic. If not, I would highly recommend it.

To use Fiddler to test your service that accepts a POST of param1, here is a quick rundown:

Start Fiddler and go to the Request Builder tab.

[Screenshot: the Request Builder tab]

In this tab, enter your service path.
Note: if you run the service from Visual Studio, you can grab the (dynamic Cassini) address from there.

Choose the submit type, i.e. the HTTP verb.

[Screenshot: choosing the verb]

For a POST, we have to add a header specifying the content type, viz.
Content-Type: application/x-www-form-urlencoded

[Screenshot: the request headers]

Enter the parameters to submit to the service in the Request Body field.

e.g. for two parameters with values 1 and 2: param1=1&param2=2

[Screenshot: the request body]

Hit the Execute button (in the far top-right) to submit the Request.

I usually choose Tear Off in the Options so that I have a nice little floating window to do my testing.

You can put a debug point in VS and step through the Request if you want.
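If you ever want the same test outside Fiddler, the equivalent form-encoded POST can be sketched in C# (the service URL and parameter names here are placeholders):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class FiddlerStylePost
{
    static void Main()
    {
        // The same form-encoded POST that Fiddler's Request Builder sends.
        string url = "http://localhost:62534/fooService.asmx/GetFoo"; // placeholder address
        byte[] body = Encoding.UTF8.GetBytes("param1=1&param2=2");    // placeholder params

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = body.Length;

        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // raw XML response
        }
    }
}
```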

OAuth is an open protocol that provides an authorization mechanism for clients/third parties to access resources on a server, after being given permission by a user.
For example: you have photos on a photo-sharing site (the server), and you (the user) permit a social networking application (the third party) to access and display those photos.

OAuth requires the server to authenticate the user before presenting the option to authorize access to the resources.

OAuth is used more in B2C scenarios, where a third party acts as an authority between the user and the server. This basic scenario is called a love triangle, and there can be many servers in the middle performing authentication, called hops.
http://oauth.net/ has all the details.

OAuth is implemented by a number of vendors like Facebook, Twitter, etc., and is usually used in an application that integrates with these third parties. These are the "authorities" that issue access tokens, and later accept them to verify access.
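To make the token idea concrete, here is a rough sketch of a client presenting an issued token on a request. The values are placeholders, and a real OAuth request also requires signing the request; see oauth.net for the actual spec.

```csharp
using System.Net;

class TokenSketch
{
    static void Main()
    {
        // The client presents the access token issued by the authority on each call.
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://photos.example.com/api/photos");
        request.Headers[HttpRequestHeader.Authorization] =
            "OAuth oauth_consumer_key=\"placeholder-key\", " +
            "oauth_token=\"placeholder-token\", " +
            "oauth_signature_method=\"HMAC-SHA1\", " +
            "oauth_signature=\"...\""; // computed over the request in the real protocol
    }
}
```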

If you have an integration with a third party that implements OAuth, you could avoid authenticating your user yourself and just rely on the third party. The assumption here, of course, is that your user is registered on the third-party provider’s website.
If you need to, your website can also implement OAuth and be an authority that grants access rights to third parties.

Other possibilities that involve third-parties for verifying a user include:

– Active Directory Federation Services – This is a Microsoft protocol that allows a single sign-on mechanism for websites. You can read all about it here: http://technet.microsoft.com/en-us/library/cc736690(WS.10).aspx
Microsoft Passport is a provider.

– LDAP – http://www.gracion.com/server/whatldap.html, which is usually used by programs and for lower-level resource access like printers, etc.

– OpenID – There are many providers, including Google, Yahoo, and Flickr, that support OpenID as a standard way to authenticate users on the web.

– WS-Trust and WS-Federation – Used more in enterprise scenarios: a provider of information, such as a healthcare company or a bank, can be an authority and use WS-* to allow third parties access to the data. Read about it here: http://msdn.microsoft.com/en-us/library/bb498017.aspx

Hope this and the previous post give you some possibilities to think about when it comes to authentication and authorization on your websites and applications.

Microsoft Silverlight and Adobe Flex are currently the two good options for building a rich-client web services app.

If you’re a .NET shop like us, you most likely have your web services in C#. If you have a rich client, it is most likely built in Flex/Flash. This is mostly market/business driven, but Flex also has a decent model for building rich Internet clients that consume web services, and it is platform independent (we’re not considering phones here because Flex/Flash is no workie on many of them yet).

Although Silverlight might take a while to reach a tipping point in the marketplace, the medium itself and related technologies have matured a lot, are quite compelling as a development platform, and would be my personal choice.
Silverlight definitely has the edge over Flex in terms of development, because it integrates tightly with the .NET IDE, so you have just one development platform and language, which is itself a big win from an HR and team standpoint.

For development purposes, Microsoft has provided a framework called RIA (Rich Internet Application) Services that makes it easier to write a Silverlight client that integrates with your web service.
I much prefer the type integration it provides through library references to WSDL, which is what you would use if developing with Flex.
Reflecting web service changes (during development) seems to be a hassle with Flex Builder: you regenerate from the WSDL, and if you want to do it selectively you only copy over the new pieces; I think part of this hassle is our process/implementation.
A big hole in strong typing via a proxy is that the proxy itself can fall out of sync with the service, which happens often during the development phase and leads to runtime errors.

Apart from types being shared between the client and server code, things like validation rules are also shared, which is quite powerful.

This makes it a no-brainer to use Silverlight for management console/support apps (because they are in-house and under your control) if you really want to go rich client, while it might take some convincing of the business to use it for web apps, which honestly might be impossible as of now.

Anyway, sticking to development, here is a quick overview of the various pieces involved in a RIA Services solution.

Project
You use the Project – New – Silverlight Application template (as you always have for Silverlight projects). Following this, you will see the New Silverlight Application dialog.
It is important to check the "Enable .NET RIA Services" box at the bottom of the dialog (which shows up only if you have RIA Services installed).
You will end up with a solution containing two projects: the Silverlight client with the RIA link enabled, and the server website project.

Data model
Define your data classes with the ADO.NET Entity Framework designer.

Service
This is the critical part, where you choose: Web Project – Add New Item – Domain Service Class. Enter the required details (the dialog also lets you choose which entities to manipulate), and you will end up with a class inheriting from LinqToEntitiesDomainService, typed to your data context class.
The service class is decorated with [EnableClientAccess()], which exposes it to the client.
A GetXxx() method is generated to provide data.
Application logic should be added in/below the service class.
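For illustration, the generated service looks roughly like this. Person and PersonEntities are hypothetical names, and the exact namespaces and base-class members shifted between the early RIA Services previews, so treat this as a sketch:

```csharp
using System.Linq;
using System.Web.Ria; // preview-era namespace; moved in later releases

// Exposed to the Silverlight client via the RIA link.
[EnableClientAccess()]
public class PersonDomainService : LinqToEntitiesDomainService<PersonEntities>
{
    // Generated data accessor; add filtering/application logic in methods like this.
    public IQueryable<Person> GetPersons()
    {
        return this.Context.Persons;
    }
}
```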

You can run the project and see that the data entities are available on both the client and the server. This is an important distinction from writing a Flex client, where there is no type sharing, and it is a huge benefit for development.
This should not be confused with "no strong typing": strong typing in Flex comes from proxy classes generated from the WSDL obtained by referencing your web service.

Client
As mentioned earlier, type sharing is achieved by generating proxy classes in hidden code when you build the project. The proxy classes include your DataContext, entities, etc.
You can then load data from the service using the generic LoadOperation class, which lives in the System.Windows.Ria.Data namespace.
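On the client, a load might look like this sketch (again, the Person names are hypothetical, and the generated query/load member names varied across the preview builds):

```csharp
using System.Windows.Ria.Data;

class ClientSketch
{
    void LoadPersons()
    {
        // Generated client-side counterpart of the Domain Service.
        PersonDomainContext context = new PersonDomainContext();

        // Loads happen asynchronously; LoadOperation tracks completion.
        LoadOperation<Person> op = context.Load(context.GetPersonsQuery());
        op.Completed += (s, e) =>
        {
            foreach (Person p in context.Persons)
            {
                // bind or display p
            }
        };
    }
}
```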

I have purposely kept this post at a high-level to get you started. There are a lot of details which I will delve into in future posts.

Some good resources are:
http://www.nikhilk.net/NET-RIA-Services-Vision-Architecture.aspx

http://blogs.msdn.com/brada/archive/2009/03/19/what-is-net-ria-services.aspx

http://www.silverlightshow.net/items/Creating-applications-with-.NET-RIA-Service-Part-1-Introduction.aspx

A good practice for a web service is to provide and store data in a locale-agnostic way; e.g., timestamps should be stored as UTC, and the data should also be returned/serialized in an agnostic way, so the client can apply localization settings, user preferences, etc. This keeps servers location-agnostic and lets the client do the lifting.

In the past, I returned a DateTime with the specification that it was UTC, so the client blindly localized it.
I recently ran into a problem where our Flex/Flash client used the UTC indicator to decide whether to localize, instead of doing it blindly, which meant that my method had to return timestamps marked as UTC, i.e. with a "Z" at the end. There is an issue with that if you use .NET to write your web service, because the XmlSerializer cannot just take a DateTime and serialize it as UTC.

In fact, .NET Framework 1.1 had no way to designate a timestamp as UTC or local, and this messed up web service designs and frustrated developers, because data had to be converted to a locale before it was returned/serialized by a web service.
This was addressed in .NET Framework 2.0 by adding a Kind flag (DateTimeKind) to the DateTime type. So, to serialize (and mark) a DateTime as UTC, you use:

DateTime dt = new DateTime(timestamp.Ticks, DateTimeKind.Utc);

Now, if you are like me, you think it might be easier to add an extension method to the DateTime type in your service layer (or lower layers, if you really need it there), so with a little syntactic sugar you could elegantly write:

DateTime dt = timestamp.UTCKind();

But, unfortunately that does not work…
Why not?

From what I understand, extension methods are implemented in .NET via delegates with currying, and the first (curried) parameter of a delegate must be a reference type; ergo extension methods defined on a value type, or in the case of DateTime, a struct, cannot be used this way (see the Stack Overflow link below).

So, what is currying? Is it dinner time yet? (if you like Indian or Thai food)

Going further down the functional programming rabbit hole: currying is "transforming a function that takes n arguments into a function that takes only one argument and returns a curried function of n - 1 arguments".
In layman's terms, when you curry a function, you return a lambda expression, which is then used by the "higher" calling function.
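A quick C# illustration of the idea:

```csharp
using System;

class CurryDemo
{
    static void Main()
    {
        // An ordinary two-argument function...
        Func<int, int, int> add = (a, b) => a + b;

        // ...curried: a one-argument function returning a one-argument function.
        Func<int, Func<int, int>> curriedAdd = a => b => add(a, b);

        Func<int, int> addFive = curriedAdd(5); // partial application
        Console.WriteLine(addFive(3));          // 8
        Console.WriteLine(curriedAdd(2)(3));    // 5
    }
}
```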

A good explanation here: http://blogs.msdn.com/ericlippert/archive/2009/06/25/mmm-curry.aspx
and here: http://blogs.msdn.com/wesdyer/archive/2007/01/29/currying-and-partial-function-application.aspx

Origins of its name: http://c2.com/cgi/wiki?CurryingSchonfinkelling

Here are some related articles:

http://stackoverflow.com/questions/1016033/extension-methods-defined-on-value-types-cannot-be-used-to-create-delegates-why

http://blogs.msdn.com/sreekarc/archive/2007/05/01/extension-methods-and-delegates.aspx

So, ultimately, I created a utility method that takes in a DateTime and returns a UTC-kind DateTime, i.e. one with its Kind set to UTC.

public static DateTime UTCKind(DateTime dt)
{
    return DateTime.SpecifyKind(dt, DateTimeKind.Utc); //set time as UTC
}
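To sanity-check the result, you can round-trip it through XML serialization and look for the trailing "Z":

```csharp
using System;
using System.Xml;

class UtcDemo
{
    public static DateTime UTCKind(DateTime dt)
    {
        return DateTime.SpecifyKind(dt, DateTimeKind.Utc); //set time as UTC
    }

    static void Main()
    {
        DateTime dt = UTCKind(new DateTime(2010, 6, 1, 12, 30, 0));

        // RoundtripKind preserves the Kind; a UTC value serializes with a trailing "Z".
        Console.WriteLine(XmlConvert.ToString(dt, XmlDateTimeSerializationMode.RoundtripKind));
        // 2010-06-01T12:30:00Z
    }
}
```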

Visual Studio 2008 has Unit testing support built into it, whereby you can define batches of unit tests and run them as part of your build process.

However, when I tried writing a test against an asmx web service, I could not get it to work using the .asmx (endpoint/page) request.

I used a web request style unit test:

[TestMethod()]
[HostType("ASP.NET")]
[AspNetDevelopmentServerHost("C:\\projects\\fooService", "/")]
[UrlToTest("http://localhost:62534/fooService")]
public void GetFooTest()
{
    fooService svc = new fooService();
    svc.doSomething();
    Assert.AreEqual(blah, blah);
}

Unfortunately, this does not work and most likely you will get a 404.
I got this error when I ran my unit test:

Could not connect to the Web server for page ‘http://localhost:62534/fooService’. The remote server returned an error: (404) Not Found.. Check that the Web server is running and visible on the network and that the page specified exists.

I tried a few hacks like adding a default page to the service project and specifying that in the UrlToTest attribute, etc. but to no avail.

Finally, I resorted to just adding a service reference to the target and removing the three web-hosting attributes. This means that my service class is instantiated directly and not through a web service request, but honestly I don't care about the transport part; I'm mainly interested in testing functionality anyway.
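With the hosting attributes removed, the test reduces to a direct call. This is just a sketch; fooService and doSomething are the same placeholders as above:

```csharp
[TestMethod()]
public void GetFooTest()
{
    // The service class is instantiated directly; no HTTP request involved.
    fooService svc = new fooService();
    var result = svc.doSomething();
    Assert.IsNotNull(result); // replace with real assertions on the result
}
```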

Hope this post helps…

I thought I’d write about how I tried to use my smarts to be lazy (actually, that might be a harsh word; it was more for flexibility, re-usability and maintainability), and how important it is, for me and for you, to realize when a design gets cumbersome and should be abandoned.

Now, many of the pieces I will talk about have some value and can be applied in other scenarios for sure. So, I advise you to focus on the concepts rather than the specific application of them in my case.

The design problem I had was to expose my data entities through a web service.
Following good design principles, I wanted to expose data contracts, not the underlying business entities and/or data schemas, even though I just wanted to expose my (table) data. For example, I wanted to return a Person object, which was essentially an entity object mapped to a Person table: a standard CRUD scenario.

Now, I did not want to manually hand-code and duplicate the entity object's properties and decorate them with service attributes, so I thought I could do it dynamically by creating a service object (a shell) that would inherit from the required entity object and control which properties were exposed.

A service object has to be serializable, and that meant I would need to dynamically figure out the parent's properties and write them out. Clearly, I could not just rely on the default XmlSerializer behavior, so I implemented the IXmlSerializable interface and added a Serializable attribute to the class.

[Serializable]
public class Person : PersonDb, IXmlSerializable

The IXmlSerializable member of importance to me was WriteXml, since I just had to return data.
Writing out XML for serialization is very simple; all you have to do is write out the names and values:
public void WriteXml(System.Xml.XmlWriter writer)
{
    writer.WriteElementString("foo", "foovalue");
}

There were two mechanisms involved: first, figuring out which properties/data need to be exposed, and second, getting that data from the parent.
I figured this could be accomplished through a custom attribute, which specified the (target) element name and the (parent) property that returns the data.

[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public class SimpleFacadeAttribute : System.Attribute
{
    public string PropertyName { get; set; }
    public string XmlElementName { get; set; }

    /// <summary>
    /// Specify the class property; the serialized XML element will have the same name
    /// </summary>
    /// <param name="PropertyName"></param>
    public SimpleFacadeAttribute(string PropertyName)
    {
        this.PropertyName = PropertyName;
        this.XmlElementName = PropertyName; //use base prop name
    }

    /// <summary>
    /// Specify the base class property and the target XML element name
    /// </summary>
    /// <param name="PropertyName"></param>
    /// <param name="XmlElementName"></param>
    public SimpleFacadeAttribute(string PropertyName, string XmlElementName)
    {
        this.PropertyName = PropertyName;
        this.XmlElementName = XmlElementName;
    }
}

This is a valuable pattern in itself (although here I have the solution, not the problem :)), whereby the output can be specified by adding attributes to the class. If only the source (parent) property name is specified, the same name is emitted; alternatively, you can specify an alias.

[SimpleFacadeAttribute("Address1", "Line1")]
[SimpleFacadeAttribute("City")]
[SimpleFacadeAttribute("State")]
[SimpleFacadeAttribute("ZipCode", "Zip")]
[Serializable]
public class Person : PersonDb, IXmlSerializable

OK, now I had to implement the writing, which I did by looping through the attributes and calling the parent property via reflection to get each value. In fact, I wrote a generic method so it could be called by other classes too.

public static void SerializeSimple<T>(T source, System.Xml.XmlWriter writer)
{
    Type myType = source.GetType();

    //go through SimpleFacade attributes and serialize them
    object[] attributes = myType.GetCustomAttributes(false);
    foreach (object attrib in attributes)
    {
        if (attrib.GetType().Name == SIMPLE_FACADE_ATTRIB)
        {
            //get current attribute
            SimpleFacadeAttribute sfAttrib = (SimpleFacadeAttribute)attrib;

            //get value of the property specified
            System.Reflection.MethodInfo getMethod = myType.GetProperty(sfAttrib.PropertyName).GetGetMethod();
            object val = getMethod.Invoke(source, null);

            string value = val == null ? "" : val.ToString();

            //emit the element under its target name (which may be an alias)
            writer.WriteStartElement(sfAttrib.XmlElementName);
            writer.WriteString(value);
            writer.WriteEndElement();
        }
    }
}

Note: the const SIMPLE_FACADE_ATTRIB = "SimpleFacadeAttribute", the custom attribute's class name.
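To tie it together, the facade class's WriteXml implementation just delegates to the generic helper (a sketch, using the Person class from above):

```csharp
public void WriteXml(System.Xml.XmlWriter writer)
{
    // Emit only the properties declared via [SimpleFacadeAttribute]
    SerializeSimple(this, writer);
}
```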

OK, so we are dynamically serializing an object, but then comes the next hurdle: how is this going to be defined in a contract? Most likely WSDL. So I needed to generate WSDL for this class. Note that it is not dynamic in the sense of changing or morphing whenever its parent changes, because the custom attributes control the definition; it is just produced at run time.
There were two requirements for the WSDL: generation and publication.
The generation is what made me realize that my initial goal of making things simpler and writing less code was not achievable. This is because I did not find an easy way to generate the WSDL elements the way I did with serialization. For starters, a name has to be specified as a string:
<s:element minOccurs="1" maxOccurs="1" name="name" type="s:string" />
and all the elements have to be tied together as a sequence, a document definition, etc., and I had no desire, nor will I ever, to write all the code to generate that.

It was time to abandon the pattern…

Conceptually, I thought I could emit the required class and use wsdl.exe to produce its WSDL. FYI, this related article by Craig from Pluralsight is pretty good:
http://www.pluralsight.com/community/blogs/craig/archive/2004/10/18/2877.aspx
He uses the ServiceDescriptionReflector class to generate the WSDL.

Now, out of curiosity, I had to figure out how publication of the WSDL might work. The standard way to obtain a WSDL from a web service is to append ?wsdl to its URI, e.g. http://personservice.com?wsdl
In my case, using an ASP.NET (asmx) service, the engine produces the WSDL (I need to research this). But essentially, I would have to implement an HttpHandler that intercepted the request and returned the required WSDL.
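A minimal sketch of what such a handler could look like (the class name is hypothetical, and GenerateWsdl is exactly the hard part discussed above):

```csharp
using System;
using System.Web;

public class WsdlHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Respond to http://.../PersonService?wsdl with the generated contract
        if (context.Request.QueryString.ToString().Equals("wsdl", StringComparison.OrdinalIgnoreCase))
        {
            context.Response.ContentType = "text/xml";
            context.Response.Write(GenerateWsdl());
        }
    }

    private string GenerateWsdl()
    {
        // e.g. via ServiceDescriptionReflector, as in the Pluralsight article
        return "...";
    }
}
```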

Taking a step back, I realize that perhaps writing an add-in or tool to generate the code for the service class from different classes might be valuable. So, my Person class might map to PersonDb and Address and I could have a tool select the classes and properties and spit out code. To make this process repeatable, I could use my custom attributes to specify the mapping.
This approach also makes everything strongly-typed since nothing (like property names used in Reflection, etc.) is dynamic at run time.

This would be a cool tool to have in Visual Studio too. Ah, time to go back to the drawing board…