Cod Chowder


Early in Melville’s Moby Dick, Peter Coffin, proprietor of the Spouter Inn, recommends the Try Pots, an inn known for its chowders and run by Peter Coffin’s cousin Hosea Hussey, as a good place for a meal.

Fishiest of all fishy places was the Try Pots, which well deserved its name; for the pots there were always boiling chowders. Chowder for breakfast, and chowder for dinner, and chowder for supper, till you began to look for fish-bones coming through your clothes. The area before the house was paved with clam-shells. Mrs. Hussey wore a polished necklace of codfish vertebra; and Hosea Hussey had his account books bound in superior old shark-skin. There was a fishy flavor to the milk, too, which I could not at all account for, till one morning happening to take a stroll along the beach among some fishermen’s boats, I saw Hosea’s brindled cow feeding on fish remnants, and marching along the sand with each foot in a cod’s decapitated head, looking very slip-shod, I assure ye.

The description of the cod chowder at the Try Pots has always captivated me.  I’m a fan of canned clam chowder and have occasionally had the pleasure of a bowl of clam chowder at Legal Sea Foods next to the Georgia Aquarium – but cod chowder has never made its way to my table.

"Come on, Queequeg," said I, "all right. There’s Mrs. Hussey."

And so it turned out; Mr. Hosea Hussey being from home, but leaving Mrs. Hussey entirely competent to attend to all his affairs. Upon making known our desires for a supper and a bed, Mrs. Hussey, postponing further scolding for the present, ushered us into a little room, and seating us at a table spread with the relics of a recently concluded repast, turned round to us and said—"Clam or Cod?"

"What’s that about Cods, ma’am?" said I, with much politeness.

"Clam or Cod?" she repeated.

"A clam for supper? a cold clam; is THAT what you mean, Mrs. Hussey?" says I, "but that’s a rather cold and clammy reception in the winter time, ain’t it, Mrs. Hussey?"

But being in a great hurry to resume scolding the man in the purple Shirt, who was waiting for it in the entry, and seeming to hear nothing but the word "clam," Mrs. Hussey hurried towards an open door leading to the kitchen, and bawling out "clam for two," disappeared.

"Queequeg," said I, "do you think that we can make out a supper for us both on one clam?"

However, a warm savory steam from the kitchen served to belie the apparently cheerless prospect before us. But when that smoking chowder came in, the mystery was delightfully explained. Oh, sweet friends! hearken to me. It was made of small juicy clams, scarcely bigger than hazel nuts, mixed with pounded ship biscuit, and salted pork cut up into little flakes; the whole enriched with butter, and plentifully seasoned with pepper and salt. Our appetites being sharpened by the frosty voyage, and in particular, Queequeg seeing his favourite fishing food before him, and the chowder being surpassingly excellent, we despatched it with great expedition: when leaning back a moment and bethinking me of Mrs. Hussey’s clam and cod announcement, I thought I would try a little experiment. Stepping to the kitchen door, I uttered the word "cod" with great emphasis, and resumed my seat. In a few moments the savoury steam came forth again, but with a different flavor, and in good time a fine cod-chowder was placed before us.

We resumed business; and while plying our spoons in the bowl, thinks I to myself, I wonder now if this here has any effect on the head? What’s that stultifying saying about chowder-headed people? "But look, Queequeg, ain’t that a live eel in your bowl? Where’s your harpoon?"

We have a crock pot in our kitchen – a repackaged gift from Christmases past – and I decided to put it to good use this past weekend.  The recipe itself was quite simple:

    • 1 cup finely chopped onion
    • 1 stick butter
    • 4 cups diced potato
    • 1 can creamed corn
    • 1 1/2 lb Cod
    • 1 1/2 cup water
    • 1 pint half-and-half
    • salt, pepper and thyme to taste
    • 1 bay leaf

Cook the onion in the butter until it is transparent.  Throw chopped onion and liquid butter in the crock-pot along with potatoes, creamed corn, water, cod and spices.  Cook on low for 4 1/2 to 5 hours and then add the half-and-half.  Cook for another hour.

I served it with some hushpuppies and an upside-down cake for dessert.  I have heard that crumbled bacon on top is also tasty.  The cod was a bit pricey at around $9 a pound at Kroger, and I imagine that tilapia would make a good replacement – though it wouldn’t fill my literary hunger quite so well.

 

A Lazier Singleton with .NET 4.0

This post examines how the new Lazy<T> type can improve standard implementations of the Singleton pattern in C#.

I will ignore for the moment the common jeremiads against the Singleton pattern and the reports made by some latter-day design pattern nihilists that the Singleton is dead.   I do not mean to imply that they are wrong – it’s just that it galls me that the Singleton pattern should be the object of such scorn and ridicule when the Flyweight is allowed to go along its merry way.

Besides which, the Singleton and the Facade are the only patterns I can write from memory and without a lot of research, so I love ‘em.

The best source for design patterns in C# is probably Judith Bishop’s C# 3.0 Design Patterns, published by O’Reilly, which provides C# versions of all 23 patterns from the Gang of Four’s Design Patterns: Elements of Reusable Object-Oriented Software.  The elegant implementation of the Singleton pattern she recommends looks like this:

 

public sealed class Singleton {
    // Private Constructor
    Singleton() { }

    // Private object instantiated with private constructor
    static readonly Singleton instance = new Singleton();

    // Public static property to get the object
    public static Singleton Instance {
        get { return instance; }
    }
}

There is a problem with this, however.  Because of the way types with static members work in C# (in this case, a static property), the private Singleton field instance can be initialized at an unexpected point.  For an interesting if somewhat dense read on the effect of the beforeFieldInit flag, go here.
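Incidentally, you can check whether the compiler emitted the flag for a given type with a bit of reflection – a quick sketch, not from the original post:

    // Requires: using System; using System.Reflection;
    bool hasBeforeFieldInit =
        (typeof(Singleton).Attributes & TypeAttributes.BeforeFieldInit) != 0;
    Console.WriteLine("beforefieldinit emitted: " + hasBeforeFieldInit);

The C# compiler marks any type that lacks an explicit static constructor as beforefieldinit, which includes the Singleton above.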

I will simply demonstrate the problem by adding some tracking code to Judith Bishop’s recommended implementation:

    public sealed class Singleton
    {
        private static readonly Singleton instance = new Singleton();
        private Singleton()
        {
            // no default constructor
            Console.WriteLine(" >> singleton initialized");
        }

        public static Singleton Instance
        {
            get
            {
                Console.WriteLine("before singleton retrieval");
                return instance;

            }
        }
    }

I will retrieve an instance of this Singleton class from a console application like so:

    class Program
    {
        static void Main(string[] args)
        {

            Console.WriteLine("Calling Singleton instance");
            var s = Singleton.Instance;
            Console.WriteLine("Finished calling Singleton instance");
            Console.ReadLine();
        }
    }

When will the private type be initialized? When will the private constructor be called?  In what order do you think the Console.WriteLines will be invoked?

Ideally the static members of this class would be initialized only when we needed them, and the output would be:

  1. Calling Singleton instance
  2. before singleton retrieval
  3. >> singleton initialized
  4. Finished calling Singleton instance

In actuality, however, this is the result:

singlton_results_1

This is not so bad, you may be thinking.  If our singleton is a large object, this creates some additional strain on the system – but as long as the Singleton instance gets used fairly soon after it is instantiated, it’s no big deal.

However, what if I come in after you have coded the singleton and decide to add another static method to your class – not understanding the intricate details of beforeFieldInit – like this:

        public static void Test()
        {
            Console.WriteLine("testing singleton");
        }

 

and rewrite the calling code like this:

            Console.WriteLine("Calling Singleton test method");
            Singleton.Test();
            Console.WriteLine("Calling Singleton instance");
            var s = Singleton.Instance;
            Console.WriteLine("Finished calling Singleton instance");
            Console.ReadLine();

 

It may not be immediately obvious, but I have seriously messed up your code.  Here is the output:

singlton_results_2

Even if we never retrieve the Singleton instance, it will still be initialized when any other static method on our type is called – this is commonly known as a language runtime bummer.

.NET 4.0 introduces a new generic type called Lazy<T> which helps us out of this dilemma.  Lazy<T> is a wrapper class that facilitates thread-safe, lazy instantiation of objects.  We can use it to create a new Singleton implementation that replaces the private static Singleton instance with a private static Lazy<Singleton> instance.  The Instance property will also require a small rewrite to pull our Singleton out of the Lazy wrapper.

The full implementation of the lazy version of the singleton looks like this:

    public sealed class LazySingleton
    {
        // Private object with lazy instantiation
        private static readonly Lazy<LazySingleton> instance =
            new Lazy<LazySingleton>(
                delegate {
                    return new LazySingleton();
                }
                //thread safety first (LazyThreadSafetyMode lives in System.Threading)
                , LazyThreadSafetyMode.ExecutionAndPublication);

        private LazySingleton()
        {
           // no public default constructor
        }

        // static instance property
        public static LazySingleton Instance
        {
            get { return instance.Value; }
        }
    }

Some things of note:

1. I pass a delegate as the first parameter to the Lazy constructor. There is a parameterless constructor for the generic Lazy<T> class, but it requires that type T have a public default constructor – which I obviously do not want to provide.  The delegate parameter lets me control how the instance gets constructed – for instance, to pass constructor arguments to type T or, as in this case, to invoke a private constructor.

2. The second parameter, also optional, tells the Lazy instance that I want the lazy instantiation of type T to be thread safe (a shorter form is sketched just after these notes).

3. I retrieve the wrapped type T by asking for the Lazy type’s Value property.
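In fact, if the default thread-safe behavior is all you need, the second argument can be dropped entirely.  A minimal sketch, assuming the released .NET 4.0 API, where a factory-only Lazy<T> defaults to LazyThreadSafetyMode.ExecutionAndPublication:

    // Same effect as passing LazyThreadSafetyMode.ExecutionAndPublication explicitly
    private static readonly Lazy<LazySingleton> instance =
        new Lazy<LazySingleton>(() => new LazySingleton());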

Now it’s time for a contest.  I instrument LazySingleton with the same Console.WriteLine statements and the same malicious static Test() method that I added to the original Singleton, and pit the original against the new and improved, .NET 4.0 enhanced, Lazy Singleton.
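Here is roughly what the instrumented LazySingleton looks like (a sketch; exactly where the Console.WriteLine calls go is my assumption):

    public sealed class LazySingleton
    {
        // Requires: using System; using System.Threading;
        private static readonly Lazy<LazySingleton> instance =
            new Lazy<LazySingleton>(
                delegate { return new LazySingleton(); }
                , LazyThreadSafetyMode.ExecutionAndPublication);

        private LazySingleton()
        {
            Console.WriteLine(" >> lazy singleton initialized");
        }

        public static LazySingleton Instance
        {
            get
            {
                Console.WriteLine("before lazy singleton retrieval");
                return instance.Value;
            }
        }

        // the same malicious static method as before
        public static void Test()
        {
            Console.WriteLine("testing lazy singleton");
        }
    }

I then rewrite my Console app code to call the original Singleton followed by the Lazy Singleton: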

    Console.WriteLine("Calling Singleton test method");
    Singleton.Test();
    Console.WriteLine("Calling Singleton instance");
    var s = Singleton.Instance;
    Console.WriteLine("Finished calling Singleton instance");
    Console.WriteLine();

    Console.WriteLine("Calling Lazy Singleton test method");
    LazySingleton.Test();
    Console.WriteLine("Calling Lazy Singleton instance");
    var lazyS = LazySingleton.Instance;
    Console.WriteLine("Finished calling Lazy Singleton instance");
    Console.ReadLine();

and get the following, very pleasing, results:

singlton_results_3

Now that’s lazy!

Visual Studio 2008 Toolbox Crash Redux

crash

I had written a while ago about a quick way to resolve this issue by simply uninstalling Power Commands.

For some reason, the problem reappeared for me a few weeks ago — I think after installing an SDK — and I did not have Power Commands installed!  So I had to find an alternative solution.  It’s a little tedious, but it does the trick. 

Just to recap, the problem is that once the Visual Studio 2008 IDE reaches a certain state, any attempt to Choose Items in the toolbox causes Visual Studio to shut down completely, usually after a long and fretful wait.

1. To clear this peculiar issue up, you will want to run Visual Studio in safe mode.  To do so, open up a command line utility and run DEVENV /safemode.  Visual Studio should come up for you.

2. Right click on your toolbox and select Choose Items… from the context menu. 

3. Methodically select each tab in the dialog box that is presented.  Accept any recommendations or error messages that come up.

4. After this time-consuming but effective process, you may close Visual Studio and bring it up again in normal mode.  All blemishes should be gone, and you can continue with your work.

WCF REST Starter Kit 2: Calling Twitter

The great power of the WCF REST Starter Kit comes from allowing programmers to easily call pre-existing REST services written on non-Microsoft platforms.  Of those, Twitter is probably the most popular and easiest to understand.  In order to use Twitter, we need at a bare minimum to be able to read tweets, to write tweets, and occasionally to delete a tweet.  Doing this also showcases the core structure of REST calls: they allow us to perform CRUD operations using the following web methods: GET, POST, PUT and DELETE.

The following example will use GET to read tweets, POST to insert tweets, and DELETE to remove tweets.

Ingredients: Visual Studio 2008 SP1 (C#), WCF REST Starter Kit Preview 2, a Twitter account

Sample: download (10 KB)

In this tutorial, I will walk you through building a simple Twitter command line application — I know, I know, Twitter from a console is what we’ve all been longing for!

This tutorial will use the techniques from the previous Getting Started post on WCF REST and expand on them to develop a “real world” application.

The first stop is the documentation on the Twitter API, which can be found here.  Based on the documentation, we can see that we want to call the friends_timeline resource in order to keep up with our twitter buddies.  We will want the update resource in order to insert new tweets.  We will also want the destroy resource so we can immediately delete tweets like “I am testing my console app”, “testing again”, “still testing 3”, etc.

Reading our friends’ tweets is the easiest part of all this.  The basic code looks like this:

    var client = new HttpClient();

    //set authentication credential
    NetworkCredential cred = new NetworkCredential(_userName, _password);
    client.TransportSettings.Credentials = cred;

    //set page parameter
    var queryString = new HttpQueryString();
    queryString.Add("page", page.ToString());

    //call Twitter service
    var response = client.Get(
        new Uri("http://twitter.com/statuses/friends_timeline.xml")
        , queryString);

    var statuses = response.Content.ReadAsXmlSerializable<statuses>();
    foreach (statusesStatus status in statuses.status)
    {
        Console.WriteLine(string.Format("{0}: {1}"
            , status.user.name
            , status.text));
        Console.WriteLine();
        Console.WriteLine();
    }

Twitter uses basic authentication to identify users.  To insert this information into our header, we create a new Network Credential and just inject it into our HttpClient instance.  (_userName and _password are simply private static fields I created as placeholders.)  The service can also take a “page” parameter that identifies which page of our friends’ statuses we want to view.  This code uses the HttpQueryString type that comes with the Starter Kit to append this information.

The results should look something like this:

console4

Probably the trickiest thing in getting this up and running is using Paste XML as Types from the Edit menu in order to generate the statuses class for deserialization.  To get the raw XML, you will need to browse to http://twitter.com/statuses/friends_timeline.xml, at which point you will be prompted for your Twitter account name and password.  In my case, I then copied everything out of the browser and pasted it into notepad.  I then stripped out the XML declaration, as well as all the hyphens that IE puts in for readability.  Having too many status elements turned out to make Visual Studio cry, so I removed almost all of them leaving only two.  Leaving two turned out to be important, because this allowed the underlying Paste XML as Types code to know that we were dealing with an array of status elements and to generate our classes appropriately.  At the end of this exercise, I had a statuses class, statusesStatus, statusesStatusUser, and a few others.
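For reference, the classes the add-in generated for me look roughly like this (a heavily trimmed sketch: the real generated code uses backing fields, XmlSerializer attributes and many more members, and the member types here are my guesses):

    public partial class statuses
    {
        [System.Xml.Serialization.XmlElement("status")]
        public statusesStatus[] status;
    }

    public partial class statusesStatus
    {
        public long id;
        public string text;
        public statusesStatusUser user;
    }

    public partial class statusesStatusUser
    {
        public string name;
        public string screen_name;
    }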

Posting a twitter status is a little bit harder, partly due to the way the Twitter API implements it.  Here’s the basic code for that:

    var client = new HttpClient();

    //set authentication credential
    NetworkCredential cred = new NetworkCredential(_userName, _password);
    client.TransportSettings.Credentials = cred;

    //fix weird twitter problem
    System.Net.ServicePointManager.Expect100Continue = false;

    //send update
    if (p.Length > 140) p = p.Substring(0, 140);
    HttpContent body = HttpContent.Create(string.Format("status={0}&source={1}"
        , HttpUtility.UrlEncode(p)
        , "CommandLineTwitterer")
        , Encoding.UTF8
        , "application/x-www-form-urlencoded");

    var response = client.Post("http://twitter.com/statuses/update.xml", body);
    Console.WriteLine("Your tweet was twutted successfully.");

This took a bit of trial and error to figure out, and in the end I just opened my Twitter home page with the Web Development Helper to see what the message was supposed to look like.  The Expect100Continue is needed to handle a change in Twitter’s API that showed up sometime at the beginning of the year, and which is explained here, here and here.

In order to make delete workable in a console application, I have tacked the following lines of code to the end of the status update method:

    var status = response.Content.ReadAsXmlSerializable<status>();
    _lastTweetId = status.id;

which hangs onto the id of the successful tweet so the user can turn around and perform a DESTROY command in order to delete the previous tweet.

 

You will notice that in the above snippet, I am using a type called status instead of one of the objects associated with statuses from the simple GET service call above.  It is actually identical to the statusesStatus type above.  I gen-ed the status class out of the response simply because I needed a class called “status” in order to map the returned XML element to a CLR type.  An alternative way to do this is to add an XmlRoot attribute to the statusesStatus class in order to make the serialization work properly:

 

    [System.Xml.Serialization.XmlRoot("status")]
    public partial class statusesStatus
    {
        …

This would allow us to write our deserialize code like this:

    var status = response.Content.ReadAsXmlSerializable<statusesStatus>();
    _lastTweetId = status.id;

The delete code finishes off this recipe, since it demonstrates one of the two methods one rarely sees in typical REST examples (the other being PUT).  It looks like this:

 

    if (_lastTweetId == 0)
        return;

    HttpClient client = new HttpClient();

    //set authentication credential
    NetworkCredential cred = new NetworkCredential(_userName, _password);
    client.TransportSettings.Credentials = cred;

    HttpResponseMessage response = client.Delete(
        string.Format("http://twitter.com/statuses/destroy/{0}.xml"
            , _lastTweetId)
        );

In order to pull this all together and have a functioning app, you just need a simple processing method.  The main processing code here that interprets commands and calls the appropriate methods simply loops until the EXIT command is called:

    static void Main(string[] args)
    {
        if (!CheckArgsForCredentials(args))
            SetCredentials();

        var input = string.Empty;
        while (input.ToUpper() != "EXIT")
        {
            input = ProcessInput(Console.ReadLine());
        }
    }

    private static string ProcessInput(string input)
    {
        //INPUT = READ
        if (input.ToUpper().IndexOf("READ") > -1)
        {
            if (input.ToUpper().IndexOf("PAGE") > -1)
            {
                var iPage = GetPageNumber(input);
                ReadFriendsTimeline(iPage);
            }
            else if (input.ToUpper().IndexOf("ME") > -1)
            {
                ReadUserTimeline();
            }
            else
                ReadFriendsTimeline(1);
        }
        //INPUT = TWEET
        else if (input.ToUpper().IndexOf("TWEET") > -1)
        {
            UpdateTwitterStatus(input.Substring(5).Trim());
        }
        //INPUT = DESTROY
        else if (input.ToUpper().IndexOf("DESTROY") > -1)
        {
            DestroyLastTweet();
        }
        return input;
    }

This should give you everything you need to cobble together a simple Twitter client using the WCF REST Starter Kit.  Mention should also be made of Aaron Skonnard’s excellent blog: after spending half a day on this, I discovered that he had already written a much more succinct and elegant example of calling Twitter using the HttpClient.  I also found this cool example from Kirk Evans’s blog explaining how to accomplish the same thing using WCF without the HttpClient.

The full source code (10K) for this sample app has the properly generated types for deserialization as well as exception handling and status code handling, in case you run into any problems.  In order to run it, you will just need to add in the pertinent assemblies from the WCF REST Starter Kit Preview 2.

Getting Started with the WCF REST Starter Kit Preview 2 HttpClient

Ingredients: Visual Studio 2008 SP1 (C#), WCF REST Starter Kit Preview 2

In Preview 1 of the WCF REST Starter Kit, Microsoft provided many useful tools for building RESTful services using WCF.  Missing from those bits was a clean way to call REST services from the client.  Preview 2, released on March 13th, makes up for this with the HttpClient class and an add-in for the Visual Studio IDE called Paste XML as Types.

The following getting-started tutorial will demonstrate how to use the HttpClient class to call a simple RESTful service (in fact, we will use the default implementation generated by the POX service template).  If you haven’t downloaded and installed the WCF REST Starter Kit, yet, you can get Preview 2 here. You can read more about the Starter Kit on Ron Jacobs’ site.

The sample solution will include two projects, one for the client and one for the service. 

1. Start by creating a Console Application project and solution called RESTfulClient.

2. Add a new project to the RESTfulClient solution using the Http Plain XML WCF Service project template that was installed when you installed the Starter Kit.  Call your new project POXService. 

The most obvious value-added features of the WCF REST Starter Kit are the various new project templates that are installed to make writing RESTful services easier.  Besides the Http Plain XML WCF Service template, we also get the ATOM Publishing Protocol WCF Service, the ATOM Feed WCF Service, REST Collection WCF Service, REST Singleton WCF Service and good ol’ WCF Service Application.

PoxService 

For this recipe, we will just use the default service as it is. 

    [WebHelp(Comment = "Sample description for GetData")]
    [WebGet(UriTemplate = "GetData?param1={i}&param2={s}")]
    [OperationContract]
    public SampleResponseBody GetData(int i, string s)
    {
        return new SampleResponseBody()
        {
            Value = String.Format("Sample GetData response: '{0}', '{1}'", i, s)
        };
    }

For the most part, this is a pretty straightforward WCF Service Method.  There are some interesting additional elements, however, which are required to make the service REST-y.

(In a major break with convention, you will notice that the default service created by this template is called Service rather than Service1.  I, for one, welcome this change from our new insect overlords.)

The WebGet attribute, for instance, allows us to turn our service method into a REST resource accessed using the GET method.  The UriTemplate attribute parameter specifies the partial Uri for our resource, in this case GetData.  We also specify in the UriTemplate how parameters can be passed to our resource.  In this case, we will be using a query string to pass two parameters, param1 and param2.

By default, the template provides a Help resource for our service, accessed by going to http://localhost:<port>/Service.svc/Help .  It will automatically include a description of the structure of our service.  The WebHelp attribute allows us to add further notes about the GetData resource to the Help resource.

You will also notice that the GetData service method returns a SampleResponseBody object.  This is intended to make it explicit in our design that we are not making RPCs.  Instead, we are receiving and returning messages (or documents, if you prefer).  In this case, the message we return is simply the serialized version of SampleResponseBody, which is a custom type that is specified in the service.svc.cs file and which does not inherit from any other type.

public class SampleResponseBody
{
    public string Value { get; set; }
}

3. Right click on the PoxService project and select Debug | Start New Instance to see what our RESTful service looks like.  To see what the service does, you can browse to http://localhost:<port>/Service.svc/Help .  To see what the schema for our GetData resource looks like, go to http://localhost:<port>/Service.svc/help/GetData/response/schema .  Finally, if you want to go ahead and call the GetData service, browse to http://localhost:<port>/Service.svc/GetData .

(In the rest of this tutorial, I will simply use  port 1300, with the understanding that you can specify your own port in the code.  By default, Visual Studio will randomly pick a port for you.  If you want to specify a particular port, however, you can go into the project properties of the PoxService project and select a specific port in the project properties Web tab.)

<SampleResponseBody xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<Value>Sample GetData response: '0', ''</Value> 
</SampleResponseBody>

4. Go to the RESTfulClient project and add a new class called SampleResponseBody.  We are going to create a type for deserializing our SampleResponseBody XML element.  We could write out the class by hand, and prior to Preview 2 we probably would have had to.  It is no longer necessary, however.  If you copy the XML returned from browsing our resource (you may need to use View Source in your browser to get a clean representation) at http://localhost:1300/Service.svc/GetData you can simply paste this into our SampleResponseBody.cs file by going to the Visual Studio Edit menu and selecting Paste XML as Types.  To get all the necessary types in one blow, you can also go to http://localhost:1300/Service.svc/Help and use Paste XML as Types.  As a third alternative, just copy the XML above and try Paste XML as Types in your SampleResponseBody class.  However you decide to do it, you are now in a position to translate XML into CLR types.

Your generated class should look like this:

    [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3053")]
    [System.Diagnostics.DebuggerStepThroughAttribute()]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)]
    [System.Xml.Serialization.XmlRootAttribute(Namespace = "", IsNullable = false)]
    public partial class SampleResponseBody
    {
        private string valueField;

        /// <remarks/>
        public string Value
        {
            get
            {
                return this.valueField;
            }
            set
            {
                this.valueField = value;
            }
        }
    }

 

5.  Next, we need to add the Starter Kit assemblies to our console application.  The default location for these assemblies is C:\Program Files\Microsoft WCF REST\WCF REST Starter Kit Preview 2\Assemblies .  The two assemblies we need for the client are Microsoft.Http.dll and Microsoft.Http.Extensions.dll .  (I happen to like to copy CodePlex bits like these into a folder in the My Documents\Visual Studio 2008 directory, to make it relatively easier to track drop versions).

6. We will now finally add some code to call our GetData resource using the HttpClient class.  The following code will simply prompt the user to hit {Enter} to call the resource.  It will then return the deserialized message from the resource and prompt the user to hit {Enter} again.  Add the following using references to your Program.cs file:

    using Microsoft.Http;
    using System.Xml.Serialization;

  Now place the following code in the Main() method of Program.cs:

    static void Main(string[] args)
    {
        Console.WriteLine("Press {enter} to call service:");
        Console.ReadLine();

        var client = new HttpClient("http://localhost:1300/Service.svc/");
        HttpResponseMessage response = client.Get("GetData");

        var b = response.Content.ReadAsXmlSerializable<SampleResponseBody>();

        Console.WriteLine(b.Value);
        Console.ReadLine();
    }

Both the Get request and the deserialization are simple.  We pass a base address for our service to the HttpClient constructor.  We then call the HttpClient’s Get method and pass the path to the resource we want (in true REST idiom, PUT, DELETE and POST are some additional methods on HttpClient).

The deserialization, in turn, only requires one line of code.  We call the Content property of the HttpResponseMessage instance returned by Get() to retrieve an HttpContent instance, then call its generic ReadAsXmlSerializable method to deserialize the XML message into our SampleResponseBody type.

While you could previously do this using WCF and deserialization, or even the HttpWebRequest and HttpWebResponse types and an XML parser, this is significantly easier.
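Just to make the comparison concrete, here is roughly what the same call looks like without the Starter Kit, using HttpWebRequest and XmlSerializer directly (a sketch, not taken from the Starter Kit samples; it assumes the same SampleResponseBody type and service address):

    // Requires: using System.Net; using System.Xml.Serialization;
    var request = (HttpWebRequest)WebRequest.Create(
        "http://localhost:1300/Service.svc/GetData");
    using (var webResponse = request.GetResponse())
    using (var stream = webResponse.GetResponseStream())
    {
        var serializer = new XmlSerializer(typeof(SampleResponseBody));
        var body = (SampleResponseBody)serializer.Deserialize(stream);
        Console.WriteLine(body.Value);
    }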

 

console1

 

7. If you recall, the signature of the GetData service method actually takes two parameters, an integer and a string.  When translated into a REST resource, the parameters are passed in a query string.  To complete this example, we might want to go ahead and pass these parameters to our GetData resource.  To do so, replace the code above with the following:

    static void Main(string[] args)
    {
        Console.WriteLine("Enter a number:");
        int myInt;
        Int32.TryParse(Console.ReadLine(), out myInt);

        Console.WriteLine("Enter a string:");
        var myString = Console.ReadLine();

        var q = new HttpQueryString();
        q.Add("param1", myInt.ToString());
        q.Add("param2", myString);

        var client = new HttpClient("http://localhost:1300/Service.svc/");
        HttpResponseMessage response = client.Get(new Uri(client.BaseAddress + "GetData"), q);
        var b = response.Content.ReadAsXmlSerializable<SampleResponseBody>();

        Console.WriteLine(b.Value);
        Console.WriteLine("Press {enter} to end.");
        Console.ReadLine();
    }

The first block asks for a number and attempts to parse whatever was submitted (a sturdier version would check the value TryParse returns).  The next block requests a string.

The third block creates an HttpQueryString instance using our console inputs.

The important code is in the fourth block.  You will notice that we use a different overload for the Get method this time.  It turns out that the only overload that accepts a query string requires a Uri for its first parameter rather than a partial Uri string.  To keep things simple, I’ve simply concatenated “GetData” with the BaseAddress we previously passed in the constructor (this does not overwrite the BaseAddress, in case you were wondering).  We could have also simply performed a string concatenation like this, of course:

    HttpResponseMessage response =
        client.Get(string.Format("GetData?param1={0}&param2={1}", myInt.ToString(), myString));

but using the HttpQueryString type strikes me as being somewhat cleaner. 

If you run the console app now, it should look something like this:

console2

 

And that’s how we do RESTful WCF as of Friday, March 13th, 2009. 

To see how we used to do it prior to March 13th, please see this excellent blog post by Pedram Rezai from April of last year.

 

 



			

Not a User Error: Moving my DasBlog directory

If your RSS feed reader just went crazy over my site, I apologize.  I recently moved my blog from the application root to a subdirectory in order to allow other .NET applications to run properly in other subdirectories.

In order to make the imaginativeuniversal site still work with searches that expect to find content at the old location, I had to create an HttpHandler to redirect page requests to the “/blog” subdirectory, including requests for the RSS feed.  This had some unintended results, such as republishing all the old RSS entries.

This was a migration of a DasBlog application to another directory.  Should anyone be interested, this is how I wrote the HttpHandler CategoryHandler to redirect all requests to the root to my “/blog” subdirectory:

    using System;
    using System.Web;
    using System.Web.UI;

    namespace IUHandlers
    {
        public class CategoryHandler : IHttpHandler
        {
            public bool IsReusable
            {
                get { return true; }
            }

            public void ProcessRequest(HttpContext context)
            {
                string path = context.Request.Path;
                string syndicationPath = "/SyndicationService.asmx";
                if (path.IndexOf("/blog") == -1)
                {
                    int lastSlash = path.LastIndexOf('/');
                    string pre = path.Substring(0, lastSlash);
                    string post = path.Substring(lastSlash);
                    if (path.IndexOf(syndicationPath
                        , StringComparison.CurrentCultureIgnoreCase) > -1)
                    {
                        post = syndicationPath + post;
                    }
                    context.Response.Redirect("~/blog" + post);
                }
            }
        }
    }

I compiled my handler and placed it in the bin folder at the root of my site.  I then added a web.config file to the root with this setting:

    <httpHandlers>
      <add verb="GET" path="*.asmx" type="IUHandlers.CategoryHandler, IUHandlers"/>
      <add verb="GET" path="*.aspx" type="IUHandlers.CategoryHandler, IUHandlers"/>
    </httpHandlers>

This allows me to run other ASP.NET applications beneath the root directory of my site as long as I remember to add the following two settings to each of their web.config files:

    <httpHandlers>
      <remove verb="GET" path="*.asmx"/>
      <remove verb="GET" path="*.aspx"/>
    </httpHandlers>

I’m not sure if it’s the most elegant solution, but it seemed to do the job.

Playing with the Kindle 2’s Web Browser

small_browser

I have been spending the day trying to upload PDFs from my safaribooksonline account to my Kindle, so far without much success.  Mobipocket Creator, which is recommended for converting various file formats to the Mobi format used by the Kindle, seems to get mixed up over the images.  I am currently trying to see if Amazon.com’s converter handles them any better.

On the other hand, I’ve found that the new http://m.safaribooksonline.com site works fairly well on the Kindle’s simplified browser (though not perfectly).  I can access my bookshelf and browse through my books.

The Basic Web browser seems very well suited for twittering, though. You can access your twitter account on the Kindle by going through http://m.twitter.com

To access the Basic Web browser on the Kindle, click on the Menu button from your home page.  Then select Experimental.  From the Experimental page, you will be able to start the Basic Web browser, which lets you search Google, search Wikipedia, or simply browse to a URL.

Also, contrary to my expectations, the Text-to-Speech feature on the Kindle 2 is actually rather good.  It even attempts to modify intonation based on the sentence structure.  Still not up to Morgan Freeman standards, however.

Ajax AutoComplete Extender with WCF

blackstone

The problem with conjuring tricks is that they lose practically all their glamour once you find out how they are done.  It’s very cool to see David Blaine walk down the street, do a few passes over his hand, and resurrect a fly which proceeds to flee.  It’s rather disappointing to do a google search and discover that in order to prepare for this trick, the first requirement is that you freeze a fly.

My trick is to make an autocomplete extender from the Ajax Control Toolkit call a WCF service instead of an asmx service.  For this recipe, I assume that you are already familiar with the autocomplete extender, and that you are using Visual Studio 2008.  I warn you in advance — my trick disappoints.  It is so trivially easy that, once the technique spreads, it is very unlikely to impress your colleagues at work, much less get you a date with a supermodel.

Start by creating a new web project called AutocompleteWCF.  Add a reference to the AjaxControlToolkit.dll.  Open up the default aspx page that is generated with your project, and add the following markup to it:

    <form id="form1" runat="server">
    <asp:ScriptManager ID="ScriptManager1" runat="server">
    </asp:ScriptManager>
    <div>
            <asp:TextBox runat="server" ID="myTextBox" Width="300" autocomplete="off" />
            <ajaxToolkit:AutoCompleteExtender
                runat="server" 
                BehaviorID="AutoCompleteEx"
                ID="autoComplete1" 
                TargetControlID="myTextBox"
                ServicePath="Autocomplete.svc" 
                ServiceMethod="GetCompletionList"
                MinimumPrefixLength="0" 
                CompletionInterval="1000"
                EnableCaching="true">
            </ajaxToolkit:AutoCompleteExtender>
    </div>
    </form>

 

This is the standard demo code that is shipped with the Ajax Control Toolkit Sample Website.  I’ve simplified it a bit by removing the animations.  The only significant change I’ve made is to change the ServicePath from Autocomplete.asmx to Autocomplete.svc, the latter being the extension for a WCF service.

The next step is to create our service and add a GetCompletionList operation to it.  The easiest way to do this is to go to Add | New Item and just select the Ajax-enabled WCF Service item template, but this would be so easy that it is hardly worth doing.

Instead, create a new WCF Service using the WCF Service Item Template and call it Autocomplete.svc.  Visual Studio will automatically generate a service interface for you.  Delete the interface.  We don’t need it.  (To be more specific, I don’t know how to get this to work with an interface, so I’m just going to ignore that it is possible.)

Again, I am going to rip off the ACT sample app and just borrow the code from their webservice and place it in our WCF service.  The WCF service class (Autocomplete.svc.cs) will look like this:

    [ServiceContract(Namespace = "")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class Autocomplete
    {
        [OperationContract]
        public string[] GetCompletionList(string prefixText, int count)
        {
            if (count == 0)
            {
                count = 10;
            }

            if (prefixText.Equals("xyz"))
            {
                return new string[0];
            }

            Random random = new Random();
            List<string> items = new List<string>(count);
            for (int i = 0; i < count; i++)
            {
                char c1 = (char)random.Next(65, 90);
                char c2 = (char)random.Next(97, 122);
                char c3 = (char)random.Next(97, 122);

                items.Add(prefixText + c1 + c2 + c3);
            }

            return items.ToArray();
        }
    }

A few things worth noting:

1. Autocomplete does not implement the IAutocomplete interface.  Even though this interface is generated automatically with the WCF Service item template, you should remove it.

2. The service contract has a blank Namespace explicitly declared. 

3. The AspNetCompatibilityRequirements attribute must be added to our class. 

 

This takes care of the code that calls the WCF service, as well as the service itself.  We now have to rig up the web.config file.  If you’ve been working with WCF for any length of time, then you know that this is where the problems usually occur.  Fortunately, the configuration is fairly simple.  You need to set up an endpoint behavior for your service that enables web scripting (much the way asmx web services must be decorated with the ScriptService attribute in order to be called from client script).  You also will need to turn on aspNetCompatibilityEnabled for the hosting environment.

 

    <system.serviceModel>
        <behaviors>
            <endpointBehaviors>
                <behavior name="AjaxBehavior">
                    <enableWebScript/>
                </behavior>
            </endpointBehaviors>
        </behaviors>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
        <services>
            <service name="AutocompleteWCF.Autocomplete">
                <endpoint address="" behaviorConfiguration="AjaxBehavior" binding="webHttpBinding" contract="AutocompleteWCF.Autocomplete"/>
            </service>
        </services>
    </system.serviceModel>

 

And that is all you need to do to make the AutoComplete Extender work with a WCF service instead of an asmx web service.  I told you it would be unimpressive.

Of course, using a WCF service for Ajax has all the limitations that using an asmx file for Ajax did.  First of all, you can’t call a service that is in a different domain than the page which hosts your client-code.  This is a security feature, to prevent malicious code from redirecting your harmless javascript to something nasty on the world wide web.

Second, you can’t call just any service from your client-side code.  The service must be explicitly marked as something that can be called from client code.  In asmx web services, we used the ScriptService attribute for this.  In WCF services, we similarly use the enableWebScript endpoint behavior.
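For comparison, this is roughly what the asmx version of that marking looks like (a sketch; the class and method here are placeholders rather than the toolkit’s actual sample service):

    [System.Web.Script.Services.ScriptService]
    public class AutocompleteAsmx : System.Web.Services.WebService
    {
        [System.Web.Services.WebMethod]
        public string[] GetCompletionList(string prefixText, int count)
        {
            // a real implementation would return actual suggestions
            return new string[] { prefixText + "abc" };
        }
    }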

Now I feel like I’ve wasted your time, so here’s a YouTube video of David Blaine to make up for it.  And remember, David Blaine is to Chris Angel what Daisy Duke was to Alexis Carrington.  It’s an existential thing, and at some point, you’ve just got to pick sides and stay put in a way that will determine who you are for the rest of your life.

Are you a David Blaine/Daisy Duke kind of person or are you a Chris Angel/Alexis Carrington sort?  Do some soul searching and please let me know what you learn about yourself.

Fingerprint Scanner Interferes with Media Center Extender

fingerprint

Microsoft has a technology called Media Center Extender that basically allows you to use your XBOX 360 as a media center.  All that’s required is that you have a computer connected to your XBOX over a network with the Media Center software installed (it comes standard with Vista Home Premium) and turned on.  The XBOX can then be used to play movies and music files located on your hard drive.

I haven’t looked at this much until recently, when I found out about vmcNetFlix.  vmcNetFlix is one of those great ideas.  The developer saw that NetFlix was allowing subscribers to download movies to their desktops, and that Microsoft was allowing people to stream movies to their TV’s through an XBOX, and he put in the final pieces to connect all of this together.  vmcNetFlix has its issues at times, but hey, it’s one guy providing a solution on his own time and it’s free.

Before I could get any of this working, however, I had to get my very sweet HP entertainment laptop to talk to my XBOX.  I kept running into the same issue: the XBOX complained that it could not connect the media center extender to my laptop, despite my repeated attempts to reboot both systems, clear out caches and certificates, and blow on both ends of my ethernet cable for no particular reason except that some guy on some newsgroup told me to.

Finally, based on another internet tip, I uninstalled the nice biometric software that came with my laptop and everything started working.  For whatever reason, this kind of biometric software, which lets you identify yourself to the operating system with a fingerprint scan rather than a typed password, interferes with Media Center.  I was using DigitalPersona, but it appears that the problem is not unique to them.

So now the fingerprint scanner on my laptop doesn’t do anything, because it turned out to be a technological bottleneck.  On the other hand, I can now stream movies, including Blu-ray movies, to my HD TV anytime I want using free technology built in someone’s basement that removes bottlenecks.  Is it worth it?

Well, yes. Not only can I watch any episode of Buck Rogers in the 25th century whenever I want, but I’ve also got most of the Werner Herzog and Rainer Werner Fassbinder catalogs ready for instant streaming.  That’s hot.  That’s Erin Grey hot.

Extending the Ajax Control Toolkit Tab Container with Lazy Loading

multi-tabs

 

download source code

ASP.NET has been missing a good, free tab control for a long time.  With the ACT Tab Container, we were finally given one.  It typically runs in client-side only mode, but can interact with server-code if we set its AutoPostback property to true.

Compared to what we had before, it is a huge improvement.  The peculiar thing about it, however, is that it isn’t actually an Ajax control.  It doesn’t use asynchronous postbacks or web service calls to talk to the server — instead you just have these two modes: run it using client script only, or run it using server-side events and code-behind only.

So a few months ago I rectified this for a project, and only found out afterwards that Matt Berseth had already outlined the technique on his blog.  You basically run the tab container in client-side mode, and add update panels to the tab panels that you want to be ajaxy.  You then hook up the client-side active tab changed event in such a way that it spoofs the Update Panel contained in the tab, causing an asynchronous (or partial) postback.

Matt also gave this technique a cool name.  He called it ‘lazy loading the tab panel’.  Like lazy loading in OOP, with this technique the update panel inside each tab panel only does something when its tab is selected.  Information is loaded only when it’s needed, and not before.

I must admit that I hold some resentment against Matt for coming up with this first, and for coming up with the cool moniker for it.  On the other hand, the solution I came up with encapsulates all of the javascript needed for this into a nice simple extender control that you can drop on your page, which his does not, and I’m rather proud of this.

The VS 2008 project for this extender is linked at the top of this post.  To use it, you need to compile the project and add the compiled assembly to your project, or else just add the project to your solution and add a project reference.

1. Drop the TabContainerExtender control into your page.

2. Set the Extender’s TargetControlID property to your TabContainer’s ID.

3. In the RegisterUpdatePanels element of the Extender, map your tabs to your update panels.  This mapping tells the extender which Update Panels to activate when each tab is selected.

Your markup will look something like this:

    <cc2:TabContainerExtender ID="TabContainerExtender1" 
    runat="server" 
    TargetControlID="TabContainer1" OnActiveTabChanged="ActiveTabChanged">
    <RegisterUpdatePanels>
    <cc2:UpdatePanelInfo TabIndex="0" UpdatePanelID="UpdatePanel1" />
    <cc2:UpdatePanelInfo TabIndex="1" UpdatePanelID="UpdatePanel2" />
    <cc2:UpdatePanelInfo TabIndex="2" UpdatePanelID="UpdatePanel3" />
    </RegisterUpdatePanels>
    </cc2:TabContainerExtender> 

4. If you want to add some code-behind to your active tab changed event, set the OnActiveTabChanged property of the Extender to the name of your handler.  The event will pass the correct index number for the active tab, as well as the ID of the mapped Update Panel.  The handler’s signature looks like this:

        protected void ActiveTabChanged(int index, string panelID)
        {
            …
        }
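The sample project loads content into each tab’s placeholder on the fly; one way such a handler might do that, sketched under the assumption of three hypothetical .ascx controls (this is not the actual sample code):

        protected void ActiveTabChanged(int index, string panelID)
        {
            // Load a user control into the placeholder of whichever tab just
            // became active; the .ascx paths below are invented for this sketch.
            switch (index)
            {
                case 0:
                    PlaceHolder1.Controls.Add(LoadControl("~/LazyContent1.ascx"));
                    break;
                case 1:
                    PlaceHolder2.Controls.Add(LoadControl("~/LazyContent2.ascx"));
                    break;
                case 2:
                    PlaceHolder3.Controls.Add(LoadControl("~/LazyContent3.ascx"));
                    break;
            }
        }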

I highly encourage you to read Matt Berseth’s blog entry (which I have to admit is pretty good) to get a clear idea of the techniques being applied in this ajax extender.  If you just need a quick solution, however, feel free to download this code from the link at the top and use it any way you like with no strings attached.  There is a sample project attached to the solution that will demonstrate how to use the Tab Container Extender, in case you run into any problems with lazy loading your panels.

For reference, here is the code for the sample implementation, which loads controls on the fly based on the tab selected:

    <cc1:TabContainer ID="TabContainer1" runat="server">
    <cc1:TabPanel ID="TabPanel1" runat="server" HeaderText="Tab Panel 1">
    <ContentTemplate>
        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
        <ContentTemplate>
        Content 1 ...
        <br />
            <asp:PlaceHolder ID="PlaceHolder1" runat="server"></asp:PlaceHolder>      
        </ContentTemplate>
        </asp:UpdatePanel>    
    </ContentTemplate>
    </cc1:TabPanel>
        <cc1:TabPanel ID="TabPanel2" runat="server" HeaderText="Tab Panel 2">
    <ContentTemplate>
        <asp:UpdatePanel ID="UpdatePanel2" runat="server">
        <ContentTemplate>
        Content 2 ...
        <br />
            <asp:PlaceHolder ID="PlaceHolder2" runat="server"></asp:PlaceHolder>      
        </ContentTemplate>
        </asp:UpdatePanel>    
    </ContentTemplate>
    </cc1:TabPanel>
        <cc1:TabPanel ID="TabPanel3" runat="server" HeaderText="Tab Panel 3">
    <ContentTemplate>
        <asp:UpdatePanel ID="UpdatePanel3" runat="server">
        <ContentTemplate>
        Content 3 ...
        <br />
            <asp:PlaceHolder ID="PlaceHolder3" runat="server"></asp:PlaceHolder>      
        </ContentTemplate>
        </asp:UpdatePanel>    
    </ContentTemplate>
    </cc1:TabPanel>
    </cc1:TabContainer>
    <cc2:TabContainerExtender ID="TabContainerExtender1" 
    runat="server" 
    TargetControlID="TabContainer1" OnActiveTabChanged="ActiveTabChanged">
    <RegisterUpdatePanels>
    <cc2:UpdatePanelInfo TabIndex="0" UpdatePanelID="UpdatePanel1" />
    <cc2:UpdatePanelInfo TabIndex="1" UpdatePanelID="UpdatePanel2" />
    <cc2:UpdatePanelInfo TabIndex="2" UpdatePanelID="UpdatePanel3" />
    </RegisterUpdatePanels>
    </cc2:TabContainerExtender>