Tuesday 26 February 2013

Existing undefined properties

Well, it seems like there's always some new JavaScript trick to learn, probably that's why I love this language so much. I'll start with a question, and will answer it with what I learned today.

Accessing a property can return undefined both when the property has the undefined value assigned (well, that seems pretty obvious...) and when the property does not exist at all. I mean:

var obj = {
   name: undefined
}
obj.name === undefined; //true
obj.age === undefined; //true

So, how can we distinguish one case from the other?

Well, the first option that comes to my mind is using Object.getOwnPropertyDescriptor:

Object.getOwnPropertyDescriptor(obj, "name"); //returns a descriptor object
Object.getOwnPropertyDescriptor(obj, "age"); //returns undefined

And a second option would be using something that I (mainly) found out today, the in operator. Yes, sure, we've all used the for - in loop tons of times, but the in operator on its own is quite unknown to most developers. Well, it returns true if a property exists (either in the object or in its prototype chain), irrespective of whether its value is null, undefined or whatever, and returns false if the property does not exist:

>>> "name" in obj;
true
>>> "age" in obj;
false
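
Notice the different scope of both checks: Object.getOwnPropertyDescriptor only looks at own properties, while in also walks the prototype chain. Just to put it together in a tiny helper (the name is mine, only for illustration):

// returns true only when the property doesn't exist at all,
// neither on the object itself nor anywhere in its prototype chain
function isMissing(obj, prop) {
    return !(prop in obj);
}

isMissing(obj, "name"); // false: it exists, its value just happens to be undefined
isMissing(obj, "age");  // true: it doesn't exist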

Thursday 21 February 2013

A JSONP and CORS odyssey

I guess we should all have a basic idea of what the Same Origin Policy restriction and JSONP are. Let's say we have an application hosted on application.org, and it tries to gather data via Ajax from dataprovider.org. We're trying to fetch data from a different domain, so the Same Origin Policy comes into play and the Ajax call won't work. In order to work around this problem (yes, it's a security feature, but it's also a limitation), some clever guy came up with JSONP. In short, instead of doing an Ajax call to the "foreign" domain, you request a script from it by adding a <script> element to your document on the fly, with its source pointing to the "foreign" domain. That domain will return a piece of JavaScript code, with the expected data wrapped in a function call. That function should already be defined in your code; it will receive as a parameter the data that you expect from the server and will deal with it (like you do with your typical Ajax success/error callbacks). There's a bare-bones sketch right after the list below. The idea is pretty cool, but has its limitations:

  • it only works for GET requests; if your original Ajax call was posting a complex JSON object in the request body, you're out of luck, because you would need to serialize it somehow into the url, and the classical form encoded data won't be enough.
  • your server side code needs to be adapted to wrap the response data into the JavaScript function call (whose name you should also have passed over from the client).
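
Stripped of any library, the whole trick is little more than this (the URL and callback name are made up for the example):

// 1. declare the callback that the "foreign" server will wrap its data in
function handleCities(data) {
    console.log(data); // e.g. ["Berlin","Leipzig","Dresden"]
}

// 2. request the data by injecting a script element pointing to the other domain
var script = document.createElement("script");
script.src = "http://dataprovider.org/cities?callback=handleCities";
document.getElementsByTagName("head")[0].appendChild(script);

// 3. the server answers with JavaScript rather than plain JSON:
//    handleCities(["Berlin","Leipzig","Dresden"]);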

I'm doing some "research/prototyping/head scratching/hair-loss promoting" experiments at work for a new project, and it involves some JSONP calls. With the current setup, both servers are running on localhost, listening on different ports. I had read that different ports are already enough to break the Same Origin Policy, but just out of curiosity I wanted to test it myself and watch the normal Ajax call fail. Well, to my complete surprise, the call seemed to be making it to the server (my server side debugger was being hit), and indeed Firebug was showing a 200 OK response, but somehow the jQuery Ajax error callback was being invoked, with the jqXHR object's status set to 0. I was feeling quite bewildered; to my understanding, the browser should raise an error before even calling into the server, as soon as it realizes that the Ajax call is addressed to a different domain (a different port in this case), and indeed, that's also what Wikipedia expects:

The W3C recommends that browsers should raise an error and not allow the request of a URL with either a different port or ihost URI component from the current document

So, WTF?

Well, it's here where CORS seems to pop its head and, combined with the development server that I'm using (IIS Express), makes this mess. This document on MDN appears to explain CORS quite well.
So, my understanding (part of it is mainly a guess) is that modern browsers (like Firefox, which is what I loyally use for all my development) don't automatically raise an error when trying to do a cross domain Ajax request. They do the request, adding the Origin header to it, and then check the response to see whether the server accepted it. I guess that a proper production server should look at this Origin header and, if it doesn't agree with the request, just respond with an error; if on the contrary it agrees with it, it should do the normal processing and then add an Access-Control-Allow-Origin header to the response (so that the browser can be sure that the server was OK with this call). In my case, I guess IIS Express is just letting the request go through and sending a response as if it had been a call from the same domain (most likely IIS Express is not CORS aware). So, though the call was correctly processed, Firefox can't find the response header and thus decides to throw an error.
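
Just to make the idea concrete, this is roughly what a CORS-aware endpoint would do. It's a minimal Node.js sketch for illustration only (nothing to do with my IIS Express setup, and the allowed origin is made up):

// minimal sketch of a CORS-aware endpoint using Node's http module
var http = require("http");

http.createServer(function (req, res) {
    var origin = req.headers.origin;
    // only this (made up) origin is welcome; anything else gets rejected
    if (origin && origin !== "http://application.org") {
        res.writeHead(403);
        return res.end();
    }
    // echo the accepted origin back so the browser knows we agreed to the call
    res.writeHead(200, {
        "Content-Type": "application/json",
        "Access-Control-Allow-Origin": origin || "*"
    });
    res.end(JSON.stringify(["Berlin", "Leipzig", "Dresden"]));
}).listen(55579);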

Something that I've also found slightly confusing is the jQuery JSONP magic and its documentation. Long story short, if you're using $.ajax, all you should need to make your calls work when addressing a different domain is setting the dataType property to "jsonp" instead of "json" (and as explained above, bear in mind that it will only work for GET requests, and therefore the data property will not work). For the most part, you can forget about the jsonp and jsonpCallback properties that are also available in the options object.
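
So a cross domain call ends up looking just like any other $.ajax call (the URL is the same one that shows up further down; the success handler is just illustrative):

$.ajax({
    url: "http://localhost:55579/rpc/Geographic/GetCitiesJsonp",
    dataType: "jsonp", // the only change from a normal "json" call
    success: function (cities) {
        console.log(cities); // ["Berlin","Leipzig","Dresden"]
    }
});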
Having a basic understanding of JSONP, this seemed both magical and confusing to me (indeed I thought I was missing something). I was wondering whether I didn't need to declare my JSONP callback function on the client side, and furthermore the success-error Ajax callbacks seemed confusing to me; after all, this was not a real Ajax call...
Well, jQuery will take care not only of creating the script element for you, but also of creating a callback function on the client side that will invoke the success-error callbacks accordingly, and it will then add its name to the query string to let your server know. So, where the documentation says:

Adds an extra "?callback=?" to the end of your URL to specify the callback

it means that the url for the script's src will be something like this:

http://localhost:55579/rpc/Geographic/GetCitiesJsonp?callback=jQuery1710571054381039827_1361460447827&_=1361460450663

and your server should be returning something like this:

jQuery1710571054381039827_1361460447827(["Berlin","Leipzig","Dresden"]);

OK, I have to say that I don't fully agree with using Ajax semantics for something that is not Ajax; I would have preferred a separate $.jsonp method.

Wednesday 20 February 2013

Fun with Canvas and Web Workers

I was asked to do a short presentation/introduction about HTML5 to some colleagues at work. Apart from mentioning some of the most interesting features that have been encompassed under the HTML5 umbrella, modernizr.js, polyfills and so on, I also gave some notes about ES5. I didn't feel very comfortable with just reading over a mash-up of paragraphs taken from Dive into HTML5 and HTML5 Rocks, and as I didn't have any significant code samples of my own apart from this basic experiment with the Canvas from 2 years ago, I decided to prepare a couple of basic pieces (which was pretty fun, by the way).

I ended up pairing the Canvas and Web Workers together in the same sample. Long story short, I have an image on the left and want to paint its gray scale version on the right. We'll leverage the canvas element twice for this, along with the ImageData object. The ImageData object is a powerful one, as it gives us raw access to the image pixels. First, we create a temporary canvas in memory (we won't append it to the DOM) that we'll use to obtain the ImageData object corresponding to an existing image (<img>).

getPixels: function(img) {
    // create an off-screen canvas with the same size as the image
    function createCanvas(w, h){
        var c = document.createElement('canvas');
        c.width = w;
        c.height = h;
        return c;
    }

    var c = createCanvas(img.width, img.height);
    var ctx = c.getContext('2d');
    // draw the image into the canvas and read its pixels back
    ctx.drawImage(img, 0, 0);
    return ctx.getImageData(0, 0, c.width, c.height);
}

As I've said, the ImageData object gives us access to the pixels through its data property (a Uint8ClampedArray), so we'll manipulate its RGB components to obtain the Gray Scale version, and then we'll paint the modified ImageData into a canvas (this time a normal canvas appended to the DOM).
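
The filter itself boils down to something like this (a sketch along the lines of the article linked below; target and img are assumed to be the destination canvas and the source image, and getPixels is assumed to be reachable as a plain function):

// average the R, G and B components of each pixel (the alpha one is left untouched)
function toGrayScale(imageData) {
    var data = imageData.data;
    for (var i = 0; i < data.length; i += 4) {
        var avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
        data[i] = data[i + 1] = data[i + 2] = avg;
    }
    return imageData;
}

// paint the modified ImageData into the canvas appended to the DOM
var imageData = getPixels(img);
target.getContext('2d').putImageData(toGrayScale(imageData), 0, 0);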

You're probably wondering how I can justify bringing Web Workers into this scene. Well, a Gray Scale filter is pretty fast, but if we repeat it 5000 times we get to that ugly moment when, after a long while with the browser unresponsive, we get presented with a "do you want to close this script" window.

The single thread of the JavaScript engine is more than busy with the number crunching needed for this repeated calculation, so it can't keep the UI responsive. This warrants running the filter in a separate thread via Web Workers.
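
The wiring is roughly as follows. This is a simplified sketch (the file name filterWorker.js and the message shape are made up, not necessarily what my sample does; imageData and target are the same assumed names as in the previous sketch):

// main page: hand the pixels over to the worker and paint the result when it comes back
var worker = new Worker("filterWorker.js");
worker.onmessage = function (e) {
    imageData.data.set(e.data);
    target.getContext('2d').putImageData(imageData, 0, 0);
};
worker.postMessage({ pixels: imageData.data, times: 5000 });

// filterWorker.js: the heavy number crunching runs here, off the UI thread
self.onmessage = function (e) {
    var data = e.data.pixels;
    for (var t = 0; t < e.data.times; t++) {
        // same RGB averaging as in the filter above, just over the raw array
        for (var i = 0; i < data.length; i += 4) {
            var avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
            data[i] = data[i + 1] = data[i + 2] = avg;
        }
    }
    self.postMessage(data);
};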

You can check it here.

Even though I seriously doubt it can be of much help to anyone, I've also uploaded here the notes that I used for the presentation.

The idea for obtaining an ImageData object from an Image, and the code for the Gray Scale filter, were taken from this cute article.

By the way, the band in the picture are the almighty Downfall of Gaia, Gods of Blackened Crust.

Sunday 17 February 2013

Questions about the Black Death

I've just watched this fascinating documentary (The Mystery of the Black Death) and thought I should recommend it here. I'd already watched some other pretty good [1] and [2] documentaries about this terrible disease that claimed the lives of between 1/3 and 1/2 of the European population of the time, so we could say that I had some basic common knowledge about it. This means that to me the Black Death was the same as the Bubonic Plague, was caused by the Yersinia Pestis bacterium and was transmitted by rat fleas.
Well, if that's also what you think be ready to be shocked by this documentary.

The correlation between the Bubonic Plague and the Black Death was somehow established in the late XIX century, after Alexandre Yersin studied a Plague outbreak in Hong Kong and pinpointed Yersinia Pestis as the cause and rat fleas as the vector. Scientists saw enough similarities between the symptoms and spread of both epidemics to link them together. This link would be quite comforting for XX century man, because as the Bubonic Plague can be cured with antibiotics, there was no reason to be concerned about the mysterious Black Death making a comeback.
However, this link seems to be fading away in the eyes of several scientists and historians. First of all, it does not seem possible that the common European rats of that time could have transmitted the illness to Northern Europe; second, the transmission speed of the Black Death was way faster than that of Bubonic Plague epidemics; and third, even the symptoms of the two illnesses don't seem to be as similar as initially thought.

An old nightmare as it seems, the Black Death has been in the news in recent years for other surprising findings. A few years ago I watched this excellent documentary, Secrets of the Great Plague, where they talk about the relationship between immunity to the Black Death and immunity to HIV. While trying to find the link to the documentary, I found to my complete surprise that there's a much older documentary, Secrets of the Dead: Mystery of the Black Death, that had already called attention to the relationship between both diseases.

I can't finish off this post without mentioning this excellent film using the Black Death to undertake a brilliant attack on the religious establishment.

Sunday 10 February 2013

Object.watch Polyfill

For years Mozilla has kept adding extra features to the JavaScript dialect supported by their JS engines while a good part of the community was busy with the ES4 wars... Some of these features have made it into ES5, others will make it into ES6, and others will slip away into nothingness (like E4X; xml was still cool 10 years ago, but seriously, who gives a damn about it now?). Every now and then, while checking the excellent MDN documentation, one of those additions shows up, and that's what happened the other day with Object.watch.

It provides you with "Observable Properties", which means that you can watch the assignments to a property and interfere with them. I guess this functionality was developed prior to the almighty ES proxies idea, which clearly supersedes it. Anyway, as of today Object.watch still seems useful to me, so writing a polyfill for it seemed like a fun way to spend my time in this nicely rainy winter (I'm not joking, I love rainy weather).

The solution is quite obvious: the accessor properties (get/set) added in ES5. As demonstrated in my intercept.js utility, we can intercept the access to an existing property by redefining it as a new accessor property. Indeed, the MDN page on Object.watch points to this existing polyfill. The problem I found with it is that it will only watch data properties; if the property to be observed is already an accessor property, it'll fail.

So, I've rolled my own polyfill and uploaded it here.
You also have a test here

There's not much to explain: it redefines the property as a new accessor property. The get/set functions used for the accessor are closures that trap the old descriptor if it was also an accessor one, or the value itself if it was a data descriptor. This new set will take care of invoking the handler provided to the watch function, and then setting the returned value. Probably the most interesting part is the trick used to enable the unwatch functionality. We "decorate" the setter with a _watchHelper object that stores the "type" of the initial property, and the initial descriptor if it was an accessor one, so that we can restore it when requested by unwatch.
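
To give an idea of the approach, here's a stripped-down sketch of the technique (not the actual polyfill I uploaded; it leaves out unwatch and most of the bookkeeping):

// sketch: redefine the watched property as an accessor whose setter
// runs the handler before storing the value
if (!Object.prototype.watch) {
    Object.defineProperty(Object.prototype, "watch", {
        enumerable: false,
        configurable: true,
        writable: true,
        value: function (prop, handler) {
            var oldDesc = Object.getOwnPropertyDescriptor(this, prop),
                oldGet = oldDesc && oldDesc.get,
                oldSet = oldDesc && oldDesc.set,
                // for a data property we just trap the value in the closure
                value = oldDesc && oldDesc.value;

            var newSetter = function (newVal) {
                var oldVal = oldGet ? oldGet.call(this) : value;
                // the handler can replace the value being assigned
                newVal = handler.call(this, prop, oldVal, newVal);
                if (oldSet) {
                    oldSet.call(this, newVal);
                } else {
                    value = newVal;
                }
            };
            // "decorate" the setter so that unwatch could restore the original descriptor
            newSetter._watchHelper = { oldDescriptor: oldDesc };

            Object.defineProperty(this, prop, {
                get: oldGet || function () { return value; },
                set: newSetter,
                configurable: true,
                enumerable: oldDesc ? oldDesc.enumerable : true
            });
        }
    });
}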

Thursday 7 February 2013

32 vs 64 bits again and again

Well, I've already posted about the 32 vs 64 bits thing here and here. The thing is that today I came up with a couple of new doubts, with their ensuing answers, and I thought I should invoke a SerializeToBlog on them.

I've run into these issues while doing some .Net prototyping of a Windows Service doing P/Invoke calls. Admittedly, it's been a long while since the last time I needed to resort to P/Invoke, and that was a time when I still did not have any 64 bits machine. I ran into some stupid problems with the number of bits and the installation of the output executable (that I won't further explain here to avoid public shame). I was using some functions in kernel32.dll, and a colleague at work mentioned that using kernel32 could be forcing my .exe to be compiled as a 32 bits one. It seemed to make sense, as it would seem natural that the 64 bits version of the dll would be called kernel64 or something alike. Well, pretty wrong assumption. Some googling brought up that the 64 bits version of the dll continues to be called kernel32.dll, and, to add to the confusion, it's located in C:\Windows\System32! The 32 bits version is located in C:\Windows\SysWOW64.

In the process of shedding some light on the above, I was playing around with Process Explorer to check the different dll's loaded by the 32 bits and 64 bits versions of the same .exe.
A 32 bits version will load the 32 bits versions of clr.dll and clrjit.dll (located in C:\Windows\Microsoft.NET\Framework\v4.0.30319 for .Net 4.5), and a 64 bits .exe will load the 64 bits versions (located in C:\Windows\Microsoft.NET\Framework64\v4.0.30319). So far so good, as clr.dll and clrjit.dll are native dll's and therefore have been compiled for the different architectures.
Another natural finding was that something similar happens for the normal assemblies that are part of the BCL. For the 32 bits process, the NGen'ed assemblies in C:\Windows\assembly\NativeImages_v4.0.30319_32 were being used, and the ones in C:\Windows\assembly\NativeImages_v4.0.30319_64 for the 64 bits version. This makes sense because, as assemblies are jitted to 32 or 64 bits at execution time depending on the process, the same is done when they are compiled ahead of time.

The odd thing for me comes when we check the locations of the non precompiled BCL assemblies, again two folders:
C:\Windows\Microsoft.NET\Framework and C:\Windows\Microsoft.NET\Framework64.
We have many native applications and dll's there, but the .Net assemblies seem to be repeated in both folders. Hey, these assemblies contain just bytecodes, they should be architecture independent as mentioned in countless excellent articles like this, so why 2 versions?

Well, I sort of understand that some of the more low level assemblies there can contain internal calls into the CLR that make these different versions necessary, but it seems very odd to me that all the assemblies there would need this. I haven't found any explanation about this on the net, but my tests show that for some assemblies (like for example System.ServiceProcess.dll) the copies in both folders are just the same file (I did a binary diff), while on the other hand the copies of mscorlib.dll are different files. I guess they duplicate the files to keep the thing more homogeneous.

If we run the corflags tool on these assemblies we'll see that, while System.ServiceProcess.dll was compiled as AnyCPU, mscorlib.dll was compiled as 32 bits or 64 bits depending on the folder.

Saturday 2 February 2013

Nested Async Loops

When working on one more of my "fun useless projects" involving repeating a set of animations (I hope I'll post about it when I have time to complete it) I found myself writing a "Nested Async Loop", that is, an async function running inside 2 nested loops. The idea is simple and beautiful: as with normal async loops, the async function has to use its callback to invoke the next iteration of the loop; the only addition here is that when the inner loop is complete, it has to invoke the next iteration of the outer loop, so that the whole thing continues. This means we have 2 functions, one for each loop, that keep calling each other.

This example will try to make it clearer:

var outerLoop = (function(n){
  var i = 0;
  return function(){
    if (i < n){
      console.log("- start i: " + i);
      // kick off the inner loop for the current i; it will call
      // outerLoop() again once it has run its m iterations
      innerLoop(i);
      i++;
    }
  };
}(5));

var innerLoop = (function(m){
  var j = 0;
  return function(i){
    if (j < m){
      console.log("start i-j: " + i + j);
      // the async operation; its callback triggers the next iteration
      setTimeout(function(){
        console.log("end i-j: " + i + j);
        j++;
        innerLoop(i);
      }, 200);
    }
    else{
      // inner loop finished: reset the counter and hand control back to the outer loop
      j = 0;
      outerLoop();
    }
  };
}(3));

outerLoop();

The above code is the async equivalent of these much more familiar lines :-)

for (var i=0; i<5; i++){
  for (var j=0; j<3; j++){
    console.log("start i-j: " + i + j);
    console.log("end i-j: " + i + j);
  }
}

This nested async loop case prompted me to add a repeat method to my asyncLoop.js project. You can read more here.