Sunday 27 November 2011

Parallel Ajax requests

In our last Web project (a rather rich jQuery UI + Asp.Net MVC one) we had one page where we needed to do several async Ajax calls to load different screen items (combos, jqGrid, several lists in one Accordion...) just on page load. All these calls were independent from each other (the results from one of them were not needed by any other). This led us to some interesting head scratching about the best approach:
  • Just launch them sequentially. Each call is started when the previous one finishes. This basic approach seems rather wise if the failure to retrieve some data means that the whole page becomes invalid; in that case you stop your sequence of calls and show the error message.
    For this you can queue your calls just by passing them as callbacks to the success event of the previous call. This is fine for a few calls, but for many of them it ends up being rather messy. A better approach here is to use jQuery's deferred objects to create your queue. You would use a chain of calls to the pipe method (the documentation is a bit confusing to me, but that's the way to go for chaining calls with deferreds).
    Of course, you can always create your own Queue.
  • Just launch them all in parallel. Pretty simple from a code perspective (I would recommend wrapping your "Loading..." modal dialog in some helper class with a counter that increases-decreases with each show-hide call; that way you don't need to track when all calls have completed to make the page operative, each call just invokes the dialog's show-hide and that's all).
  • Keep a limit on the number of concurrent calls. I mean, maybe you have to do 5 calls but don't want more than 2 running at the same time. So you would start by launching 2 calls and queuing the rest, and each time one call finishes you would launch the next call in the queue. To my knowledge jQuery's deferred does not have an out-of-the-box way to do this, so you would have to implement your own system.

So, what's the best way to proceed? I don't have a firm answer to that, so I'll just outline some of the many things that we should take into account.

  1. HTTP persistent connections. Almost all browsers and servers use this technique, so the same HTTP connection is reused for requesting different resources. OK, good, but what about requesting those resources in parallel? Let's move to point 2.
  2. Number of allowed concurrent requests in modern browsers (notice that though the question there is about Ajax requests, this applies to any kind of request). It seems this ranges from 2 to 6. The number of concurrent HTTP connections to the same domain needs to strike a balance between what is good for the client (more connections) and what is good for the server (don't flood it with too many connections, you know, the C10k problem...).
    From these 2 points, let's say we have 3 persistent HTTP connections open to the server, each one requesting data. Does each request have to wait for its response, or could we send several requests in parallel through the same connection? This is called pipelining (similar to processor pipelining, where several instructions at different stages run in parallel in the same processor core), so jump to point 3.
  3. HTTP pipelining. Long story short, unfortunately this interesting feature is barely supported by the major browsers.
  4. Your server technology also plays a role here. For example, when using Asp.Net, any request that makes use of the Session object gets serialized, so it'll be just the same as if we had done the requests sequentially. This is so because concurrent access to the Session object should be avoided, and the Framework is doing that locking for us. With Asp.Net MVC it's assumed that all actions make use of the Session, which means that we can't have 2 actions running in parallel (so forget about your concurrent Ajax requests being really parallel; you'll see them both running in Firebug, but on the server side they'll run sequentially). Fortunately, this can be changed at the controller level by means of the SessionState attribute (see the sketch after this list).
    Read this article thoroughly for a much better explanation.
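
Just to illustrate that last point, this is roughly how it looks on an MVC 3 controller (a minimal sketch; the controller and action names are made up):

using System.Web.Mvc;
using System.Web.SessionState;

// Telling MVC that this controller only reads the Session (or doesn't touch it
// at all, with SessionStateBehavior.Disabled) removes the per-session lock,
// so the Ajax requests hitting these actions can really run in parallel.
[SessionState(SessionStateBehavior.ReadOnly)]
public class LookupDataController : Controller
{
    // hypothetical action returning one of the independent screen items
    public ActionResult Combos()
    {
        return Json(new[] { "option1", "option2" }, JsonRequestBehavior.AllowGet);
    }
}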

Tuesday 15 November 2011

Some musings about Roslyn

There's been quite a lot of excitement over the last few weeks around the first preview of the "Compiler as a Service" technology to be added to .Net 5 (aka Project Roslyn).

I have mixed feelings about this. On one side, I can't share all that much excitement, cause for the most part I don't see it as a revolutionary feature (as I see the DLR or even Linq), but as a rather needed feature that should not have taken so long to make it into the Framework. Furthermore, something similar has been present in Mono for a long time (one of the main reasons I always hurry to install Mono on any new machine is the wonderful C# REPL console that comes with it). On the other side, having full access to the compiler pipeline means that very interesting things can be built on top of it; this made my imagination fly, but it suddenly hit the wall of reality. One of the first and more obvious things that I could think of was adding your own new custom keywords and constructs to the language (turning it into an open language...), but unfortunately it seems that's out of the scope for now.

Actually, it isn't true that you can use Roslyn to extend C# with additional keywords. – Dustin Campbell Oct 21 at 19:35
thanks... corrected... although not in the first release, I am pretty sure that this will be possible... – Yahia Oct 21 at 19:36

@DustinCampbell, What if you handled whatever compiler error that the pseudo keyword caused by generating code? – Rodrick Chapman Oct 21 at 20:17

You'd need to do a rewrite before passing it to the compiler. First, parse code with your special keywords. The code will parse and, unless the parser couldn't make heads or tails of it, the invalid keywords will show up as SkippedTokenTrivia in the resulting tree. Then, detect the skipped keywords and rewrite the tree with valid code (e.g. AOP weaving). Finally, pass the new tree to the compiler. This is definitely a hack though, and it's not guaranteed to work with future versions of Roslyn. E.g. the parser might not produce the same tree for broken code in future releases. – Dustin Campbell Oct 21 at 21:59
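
Just to make that hack a bit more concrete, here's a minimal sketch of the parse-and-inspect step. Note that the exact namespaces and method names differ between the 2011 CTP and later Roslyn releases (this sketch uses the Microsoft.CodeAnalysis flavour), the "aspect" keyword is completely made up, and whether the broken code really surfaces as skipped tokens depends on the parser version:

using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class SkippedKeywordSketch
{
    static void Main()
    {
        // "aspect" is a hypothetical custom keyword, unknown to the C# parser
        var code = "class C { aspect void M() { } }";
        var tree = CSharpSyntaxTree.ParseText(code);

        // look for the pieces the parser could not fit into the tree
        var skipped = tree.GetRoot()
                          .DescendantTrivia()
                          .Where(t => t.IsKind(SyntaxKind.SkippedTokensTrivia));

        foreach (var trivia in skipped)
            Console.WriteLine("Skipped: " + trivia.ToFullString());

        // a real implementation would now rewrite the tree (weaving in whatever
        // the keyword stands for) and feed the fixed-up tree to the compiler
    }
}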

Anyway, I guess (even over Anders Hejlsberg's dead body) compile-time AOP will become much more prevalent in the .Net world thanks to this.

Reflection is something that has always deeply appealed to me, from basic introspection to the full beauty of runtime code generation. The way to generate code at runtime in .Net has evolved over the years. Already in the first version we had several ways to do this:

  • write some C# code, write it to a file, invoke the compiler (launch the csc.exe process) to create an Assembly and load that assembly
  • use CodeDom to generate C# code and compile it (under the covers this also invokes csc.exe); see the sketch below
  • use System.Reflection.Emit
By the way, we've got a good discussion of CodeDom vs Reflection.Emit here.
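
As a reminder of how the first two options look, here's a minimal CodeDom sketch compiling a C# snippet held in a string (the Greeter class and the rest of the names are just made-up sample code; under the covers this spawns csc.exe):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class CodeDomSketch
{
    static void Main()
    {
        var source = @"
            public class Greeter
            {
                public static string Hello(string name) { return ""Hello "" + name; }
            }";

        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results = provider.CompileAssemblyFromSource(options, source);

        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Compilation failed");

        // a brand new assembly has been created and loaded into the AppDomain
        var greeterType = results.CompiledAssembly.GetType("Greeter");
        var result = greeterType.GetMethod("Hello").Invoke(null, new object[] { "world" });
        Console.WriteLine(result); // Hello world
    }
}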

These options above have the drawback of having to create a new assembly for each new piece of code that we want to run, which adds some overhead. Later on (.Net 2.0) things improved, allowing us to create new code by using LCG (Lightweight Code Generation), aka Dynamic Methods. No new assemblies are created/loaded, and the new code can be referenced through a delegate (using DynamicMethod.CreateDelegate). The main drawback is that we're not writing C# code here, but IL... and though going low level in these times of higher and higher abstractions can have much appeal for the Geek inside us, you get a feeling of being brought back to the times of C and inline assembler :-)
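
A minimal LCG sketch, just to show the kind of hand-written IL this implies (the generated Add method is a toy example):

using System;
using System.Reflection.Emit;

class DynamicMethodSketch
{
    static void Main()
    {
        // no assembly or type is created; the method lives on its own
        var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });

        ILGenerator il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);   // push first argument
        il.Emit(OpCodes.Ldarg_1);   // push second argument
        il.Emit(OpCodes.Add);       // add them
        il.Emit(OpCodes.Ret);       // return the result

        // the delegate is the handle we keep to the generated code
        var addDelegate = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(addDelegate(2, 3)); // 5
    }
}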

.Net 3.5 came with new Code Generation candy in the form of Expression Trees, candy that got extra sugar in .Net 4, where Expression Trees can be used for creating statements, not just expressions.

In .NET Framework 4, the expression trees API also supports assignments and control flow expressions such as loops, conditional blocks, and try-catch blocks.

This gives us full power, but again at the cost of a non-trivial, not very natural syntax.
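
For instance, here's the classic factorial example built with the .Net 4 statement support (a block, an assignment, a loop with a break label); full power indeed, but far from natural syntax:

using System;
using System.Linq.Expressions;

class ExpressionTreeSketch
{
    static void Main()
    {
        // build by hand the equivalent of:
        //   n => { int result = 1; while (n > 1) { result *= n; n--; } return result; }
        var n = Expression.Parameter(typeof(int), "n");
        var result = Expression.Variable(typeof(int), "result");
        var breakLabel = Expression.Label(typeof(int));

        var body = Expression.Block(
            new[] { result },
            Expression.Assign(result, Expression.Constant(1)),
            Expression.Loop(
                Expression.IfThenElse(
                    Expression.GreaterThan(n, Expression.Constant(1)),
                    Expression.Block(
                        Expression.MultiplyAssign(result, n),
                        Expression.PostDecrementAssign(n)),
                    Expression.Break(breakLabel, result)),
                breakLabel));

        var factorial = Expression.Lambda<Func<int, int>>(body, n).Compile();
        Console.WriteLine(factorial(5)); // 120
    }
}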

All this said, I think a common wish for many of us would be something as cute as JavaScript's almighty eval function. Finally, Roslyn seems to bring something similar to the table, but well, we'll have to see how it evolves, cause right now there seem to be some limitations:

Can we now take a piece of code in a string, and compile that to a DynamicMethod?
We don't have this feature implemented yet, but it's definitely something on our radar.

One interesting topic here is why Mono seems to have been some steps ahead in the "Compiler as a Service" area. Well, Mono has had a managed compiler since its early steps, while C# has been burdened by a native compiler so far. I think that's a big difference that we tend to overlook. Contrary to what we could think based on how the native JavaScript interpreter (jscript.dll), XML parser (msxml.dll)... work (COM components in a dll that can thus be reused by IE, WSH, HTAs...), the C# compiler logic is not implemented in a separate COM dll, but just in the csc.exe executable. This means that any C# to IL compilation involves spawning the csc.exe process. Even now I'm still amazed when I see the w3wp.exe process launch a csc.exe instance to compile some Razor view or .aspx page...

Tuesday 1 November 2011

C# Params and MethodInfo.Invoke

I guess everyone will agree that the params keyword in C# is pretty useful, but probably many of us have never thought too much about how it works.
As I had an issue with it recently, I had to give it some extra thinking...

I was trying to call, via MethodInfo.Invoke, a method that was expecting a params object[].
MethodInfo.Invoke itself expects an object[] with one element per parameter of the target method.
And I was getting a nasty:
System.Reflection.TargetParameterCountException: Parameter count mismatch.

public static void WriteParams(params Object[] myParams)
{
    // just dump each parameter received
    foreach (Object param in myParams)
    {
        Console.WriteLine(param.ToString());
    }
}

object[] myValues = new object[] { "hi", "man" };
MethodInfo method = typeof(App).GetMethod("WriteParams");
//runtime error: Parameter Count Mismatch
try
{
    method.Invoke(null, myValues);
}
catch (Exception ex)
{
    // TargetParameterCountException: Invoke sees 2 arguments for a method
    // with a single (params) parameter
}

If we have a look at the IL generated for a call to a method expecting params where we're passing several arguments, we'll see that there's some compiler help involved. The compiler takes care of wrapping those arguments into an array, which is what gets passed to the method. If we were already calling the method with an array, the compiler does not do any extra wrapping. This is rather well explained here:

The params parameter modifier gives callers a shortcut syntax for passing multiple arguments to a method. There are two ways to call a method with a params parameter: 1) Calling with an array of the parameter type, in which case the params keyword has no effect and the array is passed directly to the method:
object[] array = new[] { "1", "2" };
// Foo receives the 'array' argument directly.
Foo( array );
2) Or, calling with an extended list of arguments, in which case the compiler will automatically wrap the list of arguments in a temporary array and pass that to the method:

In my case, as I was already passing an Object[] to MethodInfo.Invoke, it was not getting an extra wrap, and so the Invoke method was passing the items in that array as individual parameters to a method that in this case expected a single params object[]... (I guess that the Invoke method does not do any checking to see whether the target method expects a params object[]; remember that the wrapping is something done by the compiler).
So the solution is just doing the extra wrapping myself when calling Invoke:

//we have to wrap the array into another array
method.Invoke(null, new object[] { myValues });

We find the same problem with other dynamic invocation scenarios, like calling Delegate.DynamicInvoke.

Delegate deleg = new Action<Object[]>(WriteParams);
//runtime error: Parameter Count Mismatch again
try
{
    deleg.DynamicInvoke(myValues);
}
catch (Exception ex)
{
    // same problem: the 2 items in myValues are treated as 2 separate arguments
}
//works fine: DynamicInvoke receives a single argument, the myValues array
deleg.DynamicInvoke(new object[] { myValues });

All this brings to my mind another issue with params with a similar answer. What happens if we have a method that expects a params object[], and we want to pass it a single parameter that happens to be an object[]?
By default, the method will treat the object[] that we're passing as the whole params object[], so it would be as if we were passing n parameters to it instead of a single one.
Again the solution is to wrap our array in another array. We can do it ourselves, or cast our array to Object, so that the compiler itself does the extra wrapping for us.
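
For instance, going back to the WriteParams method above (a tiny sketch, assuming we're calling from inside the same class):

object[] myValues = new object[] { "hi", "man" };

// treated as the whole params list: WriteParams sees 2 parameters
WriteParams(myValues);

// wrapping it ourselves: WriteParams sees 1 parameter (the inner array)
WriteParams(new object[] { myValues });

// casting to Object: the compiler does the extra wrapping for us
WriteParams((object)myValues);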

You can check the source here