
Friday, December 30, 2011

Learning From 2011

It's time to summarize at least a few mistakes, or epic fails, of the year, hoping the next one will follow a better direction.
I won't mention anything related to war, politics, Berlusconi, or fashion: others more prepared than me will do that via pictures or posts. All I am going to talk about is the field I am concerned with: web and mobile web oriented technologies and facts.
The order is completely random, so grab a mug of coffee, take a few minutes, and read 'till the end, forgetting for once the tl;dr philosophy ;)

The Partially (Multi)? Touch Device Case

This has been the most annoying fact of the year: vendors shipping freaking powerful mobile devices whose touch or multi touch capable screen/hardware is not exposed through the browser, being available only to native apps.
We are not talking about potentially "dangerous" technologies, as WebGL could be, we are talking about the most basic thing, one able to break the user experience 140%.
If a user plays a native game or uses a native map, as the classic Maps application could be, she will naturally use at least one finger, or more, to interact with the screen. Interaction does not mean scrolling the page and nothing else, interaction means full control over what the user is doing with the screen.
As soon as the user surfs the web, though, the browser pretends to know what the user would like to do on the screen without giving web developers any control over those actions.
The inconsistent behavior comes when the viewport meta tag is used to block, for example, the zoom in and out gesture because the whole layout has been developed for a static viewport that should not change.
In this case, only a few vendors ( Apple ) or browsers ( Opera Mobile ) got it right; every other vendor with a WebKit based browser got it wrong. Here are a few examples:

IE9 Mobile Epic Fail

IE9 is a great browser, compared with all previous versions, and it's freaking fast. This is true for both desktop and mobile phones but, in the latter case, IE9 is an epic fail when it comes to user interaction.
There is no bloody way to intercept a user moving a finger on the screen, and standard W3C Touch Events are not supported.
You may think: oh well, I'll use mouse events then ... and you'd be wrong, since these are not fired until the user releases her finger. So forget about it, no "scroll" hacks will work as expected there ... also, why on earth does IE9 Mobile fire the scroll event all the time, when everybody else is disabling JS actions during scrolling?
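To make the problem concrete, here is a minimal, hedged sketch of the kind of feature detection we are forced into: when W3C Touch Events are missing, all we can do is fall back to mouse events, which in IE9 Mobile fire too late to be useful. The variable names are mine, only the event names are real.

```javascript
// a minimal sketch, assuming we only need to pick event names:
// when W3C Touch Events are not exposed (as in IE9 Mobile) the only
// fallback is mouse events, which fire after the finger is released
var hasTouch = typeof window !== "undefined" && "ontouchstart" in window;
var startEvent = hasTouch ? "touchstart" : "mousedown";
var moveEvent  = hasTouch ? "touchmove"  : "mousemove";
var endEvent   = hasTouch ? "touchend"   : "mouseup";
```

In a real page these names would then be passed to addEventListener; the point is that the fallback branch simply cannot track a moving finger.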


The sad part is that Microsoft is investing a lot into HTML5, but if the most basic thing, as Touch Events are, is not there, there is no reason to have the fastest mobile Canvas implementation: no game will be interactive, and no web app can be created as desired.
Hopefully IE10 will simply follow the W3C, exposing both Touch and TouchList in a meaningful way ... please don't screw this up, or few developers will even consider writing software for this browser, thanks.

webOS 2 Sad Fail

A few minutes ago I upgraded my Pre 2 to webOS 2.4.4 ... and I was expecting more from this update. My Palm Pre 2 correctly exposes multi touch through the system, but the web browser does not expose it, and this is freaking annoying. Same IE9 behavior and, at least at the end of 2011, I was expecting the kind of simple browser update able to bring my Palm to the next level: User Interaction Through The Web ... FAIL.
The good old native Google Maps application for webOS 2.0.X is gone. That app worked like a charm and exposed multi touch through the multi touch capable screen ... now I have a slower and flickering version of Bing maps that is not even able to render a pinch/spread without losing the position on the screen, since everything gets stuck truly easily.
Now, I have no idea why this happened and why somebody wasted time doing this ... it's just worse than before, and the browser still does not expose multi touch ... come on guys, this phone did not deserve to die like this.

Ultra Powerful ... "broken" Androids

It does not matter if we have 16 cores in our expensive smart phone when developers' lives are made harder for no reason ... with the most recent Android phones we could create any sort of web application but, here again, no-f*cking-body exposes TouchList via the browser, except a few tablets out there.
OK, still better than nothing, as it is for IE9 Mobile, FF Mobile ( I will come back to this one later ), or the webOS browser, but how come third-party browsers such as Opera Mobile are able to bring TouchList and expose multi touch through the multi touch capable screen, while native browsers are all lacking this basic feature?
You don't even want to know how we managed to bring multiple touches to Android devices; all you need to know is that it is basically a hack, it requires a native application wrapper, and this is, in my opinion, ridiculous!
Nowadays, every user with a friend that uses an iPhone will consider the multi touch web app broken on their Android: come on, I use two fingers in other things, why can't I do the same in this web site/application?

Firefox Mobile Fail

I don't know where to start with this browser ... I mean, guys, this is not desktop, this is a touch screen. If you decide that the user cannot even scroll a paragraph horizontally because your freaking cool settings or bookmarks view has to show up instead, you should create your own device and make the phone 3 times larger: 1/3 for the actual screen, 2/3 for your things nobody expects to appear during navigation.
About touches, at least the most recent version implements touch events, but performance is still too poor compared with native browsers or Opera Mobile. That, plus the fact that settings or bookmarks show up when we touch close to the edge of the screen, and the fact that AFAIK there is no way to avoid it, make this browser not suitable for full screen canvas based web games/apps: slow and unusable ... please fix/drop this!

Update: it must be said that the latest FF Mobile brings better canvas performance and apparently exposes a not-so-fast WebGL. I did not know this the moment I wrote this post, so FF Mobile is definitely already going in the right direction.

iOS Gesture Events VS Others

These are not standard and nobody should care. The moment we can rely on TouchList and multi touch exposed through this interface, the problem is solved, 'cause we can implement by ourselves any gesture we want.
No need to wait 'till spread/pinch are implemented and documented; just please give us TouchList and take your time afterwards to do things properly. TouchList has already been defined ... that's all we need so far, thanks!
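As an example of why TouchList alone is enough, here is a hedged sketch of a pinch/spread detector built on nothing but two touch points. The `pageX`/`pageY` properties are the ones the W3C Touch Events draft defines; the helper names are made up for illustration.

```javascript
// a minimal sketch: given two touch points (anything with pageX/pageY),
// the distance between them is all a pinch/spread gesture needs
function touchDistance(a, b) {
  var dx = b.pageX - a.pageX,
      dy = b.pageY - a.pageY;
  return Math.sqrt(dx * dx + dy * dy);
}

// spread if the distance grows between two frames, pinch if it shrinks
function gestureOf(previousDistance, currentDistance) {
  return currentDistance > previousDistance ? "spread" :
         currentDistance < previousDistance ? "pinch" : "none";
}
```

Inside a real `touchmove` listener the two points would come from `event.touches[0]` and `event.touches[1]`; no vendor gesture event is needed.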

The Curious Case of Opera Mobile

Opera did weird things this year ... it switched from partial ES3 support to full ES 5.1 support ... what the hell guys, congratulations! Still, there is no way to understand the numbers of this browser ( again: Opera Mobile, NOT Mini ).
Today, I can say Opera Mobile is one of the fastest and most capable browsers I have been dealing with, but the lack of many CSS3 features has been the arrow in the knee, together with the fact that Opera could have been the PhoneGap-like software of the year while they focused on ... I guess something else.
Don't give up guys, Opera Mobile is amazing, and people may choose to download it as soon as the most common HTML5/CSS3 things work as expected. You may also expose freaking cool stuff without necessarily following the W3C, giving web developers the possibility to distribute applications directly through your browser as a native wrapper ( with a fallback to WebKit for iOS ).
Opera has always been a pioneer, and it's time to be a pioneer on mobile too, because on desktop, I am afraid to tell you, it's hard to compete. Firefox, Chrome, and WebKit Nightly are really advanced with, I believe, bigger teams. What we are missing is a not-so-fragmented browser that works as expected on as many devices as possible, because what I have seen with all the WebKit versions I have worked with this year was painful, hard, inconsistent, and WTF-development oriented.
Last, but not least: if there is no Web SQL Database, don't put a useless native Database function on the global scope, or shims are screwed; and if you don't expose Web SQL Database, at least expose IndexedDB, or shims are screwed again.

Android Whatever

While many developers think that Android is an Operating System, I am every day more convinced that Android is a Super Kernel that has almost nothing to do with the Operating System itself.
There are too many damn versions of Android, and even the same major version does not guarantee anything.
Every device must be tested separately because every device, with every vendor specific gotcha, software, or extra cool stuff, may behave differently. Funny enough, while I was thinking this was true for the WebKit based browser only, there are many device specific exceptions that do not work as expected even natively ... oh, come on, don't call it an OS!
If we think about Windows, almost every piece of hardware is supported. Then we have the classic crapware every single vendor puts by default into its own distributed piece of hardware ... fair enough, at least the most basic things are going to work as expected in all of them ... right?
With Android, the most basic free version has almost nothing. CyanogenMod is the basic example: once you install it, you are going to scream for apps you expect to be there, and it turns out you need the specific GApps package ( Google Applications ).
Same thing with the Kindle Fire, based on a very basic version of Android, so common apps available for other tablets may not work, or may not be there at all.
This means we can trust only a truly basic set of functionalities that are hopefully working in all distributions out there but, you know, this is IT, expectations rarely match reality.
Performance tuning per device is another problem, as problematic as the performance of the same app through the browser compared with the native layer.
If you wrap your single touch full page canvas game into PhoneGap, for example, you will realize how freaking fast it is compared with the exact same code through the browser ... why is that? The security layer is off, plus the bound stuff is limited compared with the whole browser application ... fair enough? No, it's not. The moment I realize it is possible to force HW accelerated rendering via a native wrapper, but not via the browser, I start screaming ... let us develop through the web, or stop telling us you support standards, because if these are limited intentionally, then those not supported are developers themselves.

Mobile Web Apps Too Limited

I really hope I won't need to write a similar post at the end of 2012, but I am not sure there will be much progress here ...
The W3C is slow, plus it may waste their/our time. Web SQL Database is an example: something I loved, screwed by dependency-free purists.
I have talked about Web SQL many times and right now I don't care about this specific topic, all I care about is moving forward.
Taking the time to agree on and promote such an interface, only to drop it after 2 years, is not the way to go.
Taking the time to replace such an interface with another one as limited, slower, and not widely adopted as IndexedDB is, is still not the way to go.
We need a database, please put one there ASAP and drop the 50MB per domain/tab limit.
No fooling, guys: if the logic behind the Web is that the user must explicitly agree, then give the user full control.
If a web application requires more than 50MB, give the user the possibility to increase the database.
Native apps do not care about how much space is used, they simply inform the user through system settings how much stored data there is. Google Earth, for example, or even maps: these are memory heavy applications that do not ask every 5 megabytes if more space can be allowed ... this was another epic fail.
Once the user accepts the fact that this web application would like to use the storage, create a simple way to verify how much data is being stored, and stop asking!
From a UX perspective, it's like accepting an explicit file upload/manipulation and, while the file is being loaded into RAM, asking every 200KB if the user would like to increase the memory allocation for that file ...
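To make the complaint concrete, here is a hedged sketch of what declaring the needed space up front looks like with the non-standard Web SQL API; the helper name and sizes are mine, and `openDatabase` only exists in browsers that actually ship Web SQL.

```javascript
// a minimal sketch, assuming the non-standard openDatabase API exists:
// the estimated size is declared once, yet browsers such as Mobile Safari
// prompt the user again and again as usage grows toward the 50MB cap
function openStore(name, megabytes) {
  if (typeof openDatabase !== "function") {
    return null; // Web SQL not exposed in this environment
  }
  // signature: name, version, display name, estimated size in bytes
  return openDatabase(name, "1.0", name, megabytes * 1024 * 1024);
}

var db = openStore("offline-maps", 50);
```

Declaring 50MB here does not stop the per-increment prompts; that is exactly the ask-on-demand behavior criticized above.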


What If Every Web Page Asks For Space

Nothing, the same as for "what if every native app takes 4 gigs of space" ... either the space is available or it's not possible ... it's that easy. But again, with an easy to access indicator, and a limit that goes far beyond 50MB, the problem should not exist. The user can clean the cache, or enable more space, but it must be easy; as easy as it should be for web app developers to understand the available space, because if the device is full already, the 50MB limit is pointless in any case. Plus, there are ways to trigger memory consumption behind the scenes and fill up gigs of disk space/SSD ... once again, this ask-on-demand-every-time is an epic fail, imho.
FYI, the behavior I am talking about is in iOS, while Android asks only once if disk access is allowed, then silently fails after we reach the 50MB limit.

Native VS Web App

Every browser is based on a native web view, or a similar concept, which means every browser could expose anything. I would like to be able to record the user's voice via the web, to decode it via the audio tag, to create real time video chat, to replace all native applications, and to forget the Android, iOS, webOS, BlackBerry, Symbian, MeeGo, Windows, whatever SDK ... I want to leave the SDK power to much more complicated things, and I would like to be able to do the most basic things via the browser, all those things that the user could do with native apps, performance apart.
The gap between native and web app should indeed be performance and nothing else ... right now it is: all possible nasty access to everything on the native side, things that few users are ever aware of, and almost no access to any functionality through the browser.
No idea if Chrome OS solved this but, if it did, I am pretty sure it did with an in-house solution, the same Windows 8 will do, and this is annoying, at least until some jDevice library comes out able to make all these private APIs consistent across all platforms ... as a summary, why Mobile Web App development has to be so hard is a complete mystery to me: is it about your own markets? Fine, then stop telling us your browser is promoting Standard Mobile Web development, because this sounds, so far, like a big lie.

Patent Oriented Development

When I heard that Apple patented the slide to unlock the screen, I did not feel like "they are so advanced" ... I rather felt like "is anyone trying to patent my finger"?
I have been moving my thumb horizontally since forever, and nowadays I know there is a patent for that, and this is bad.
A natural movement should never be patented ... the graphic used to guide the movement could eventually be a patent, but not the way I do things.
I don't want a patent for how to hold a fork, I don't want a patent for the position I use to sleep on a mattress, whatever weird sofa, or a chair, and I don't want to do unnatural movements to unlock a touch screen.
What the hell is going on here ... if I use my nose to unlock my old Android device, I don't have to pay the patent indirectly, right?
Is this what patents are about? Yes, it is.
Patents are about the intellectual property of whatever, but the problem is that patents are expensive, not necessarily predictable, and all these patents are doing is blocking future development on top of them, unless it comes from the same company.
Patents are a sort of creativity barrier, because if a company invents a generic something, nobody will even try to improve that invention due to the basic patent behind the original.
For example, I use my right hand to hold the phone, and when I am talking with someone I put my right thumb on the right side of the phone.
If there is a button there, it's annoying, while if I have to swipe from left to right ... well, it's still annoying.
Will I ever implement a swiping gesture from right to left to unlock the screen, because the way I usually hold the phone is with my right finger on the right side and it's more convenient for me to swipe the other way round, as it is convenient for all left-handed users? Nope. I don't even want to remember that thing exists; I would have to create something completely new, being careful that what I am doing does not touch the other patent ... cool, uh?

Hardware Patents

The same concept could be applied to hardware ... was that thing perfect? Yes, I pay the patent. Was that thing not perfect? No, I have to create something new ... even if based on the laser(disc), the reason we all waited so long for new standards after the CD.
This happened with Compact Discs, and it brought us DVD first and Blu-ray after. In the meanwhile, another standard better than CDs, but a bit less capable than Blu-ray, died ... fair enough, we have new hardware for all these technologies, but how come in the hardware field the patent problem is not as strong as it is in the software one?
3D TVs: all vendors are making one, as well as CD players, DVD players, and Blu-ray players ... nobody cares about patents here, at least not the way we all do when it comes to software.
I don't have any answer for this, but I believe patents should be re-evaluated in their meaning because, in the fastest moving field ever, as technology/software is, something that lasts for 10 years is too much, the price to pay is too much, and too much as well is the time between patent-pending and patent-granted status.

ES6 Focus

I have talked about this as well ... it's about what's truly needed today, at least in Mobile Web Development, while the JavaScript.Next focus is often completely abstract, spent on not-so-needed topics such as syntactic sugar and boring classes.
iOS5 brought a massive performance boost to all Apple devices but, even having the fastest and most complete browser out there, Safari Mobile cannot do miracles when it comes to performance.
The side effect of not having performance oriented techniques in the JavaScript and DOM world is that average web developers are not able to create cool stuff without draining the battery, or are barely fast only on the most recent mobile devices.
The need for performance in this Mobile era should be priority number one for the W3C, and a possible ES 5.2 milestone. After that, take even 5 years to improve the syntax with all the sugar you want, if necessary; at least we can push Mobile Web Development to its limit in the meanwhile, while many new proposals won't bring anything interesting, as far as I can read, performance speaking.
We can deal with the fact that protected and private variables are absent or different in JS; what we cannot deal with anymore is that a proper and fast 3D API is not available, that parallel computation is not available, that WebWorkers are useless when it comes to moving data back and forward, that postMessage requires a window.open which requires a click which is too slow but cannot be faked ontouchend or window.open won't work anymore because it's not explicit ... that IndexedDB does not provide a native way to join data and speed up queries, that DOM manipulation is slow, that we don't control repaints and/or reflows, that binary Arrays are still slow, that Struct-like classes/objects are missing, that CSS3 is bringing logic into the view with a lack of control ... etc etc ...
I don't care if Proxies are in the browser if these are 2X slower than good old JS on mobile, you know what I mean? We need more performance now, and more APIs finalized to have access to hardware layers that only SDKs can expose, without granting any better usage than an experienced web developer.
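One of the points above, the cost of moving data back and forward with WebWorkers, is easy to feel even without a Worker: postMessage copies the payload (structured clone, or plain serialization in older engines) instead of sharing it. A JSON round trip over the same payload is only a rough, hedged stand-in for that cost, but it shows the shape of the problem.

```javascript
// a rough stand-in for the copy performed by postMessage: the payload is
// duplicated, not shared, so big messages pay the price in both directions
var payload = new Array(1e5).join("x"); // a string of 99999 "x" chars
var start = Date.now();
var copied = JSON.parse(JSON.stringify({ data: payload }));
var elapsed = Date.now() - start; // grows with the payload size
```

Double that cost for the reply, and per-frame messaging between a page and a Worker quickly stops being worth it.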

WebGL Partial Fail

First of all, this is not yet a standard, regardless of the effort of the Khronos Group, which means browsers like Internet Explorer will never adopt it ... fair enough, if we can shim through Silverlight, but here comes the second point: WebGL, as well as Silverlight, is not available for mobile.
Apple exposed it for ads only, where ads are most likely those responsible for bad Web performance generally speaking, due to massive usage of Flash, or the arguable usage of automations to bring canvas ads able to slow down all the netbooks and tablets out there ... congratulations, I'll never get this choice ... WebGL not for developers, but for designers of ads only ... can I say brilliant?
Too few mobile devices are exposing WebGL, if not none of them, while basically every mobile device has a GPU compatible with OpenGL ES 2.0.
Still experimental, true, and already available mainly in the Chrome browser, where WebKit, as well as Firefox, could do the same or something truly similar. The direction WebGL is taking is not good at all, and I can't wait to see cool demonstrations able to run in all WebGL capable browsers ... so far, not many of them ... and this is bad.
Sony enabled WebGL in its latest version of Android, again about fragmentation, so I am expecting at least every other vendor to bring WebGL to our current, already capable devices ... it was a massive fail to drop it for the whole of 2011; cool things could have been experimented with on devices already, plus typed Arrays could have been used more, even for non WebGL related tasks.

CSS3 Partial Fail

As mentioned before, CSS3 is becoming the most messed up part of the whole Web stack, mobile and/or desktop. Examples with counters that change text dynamically via the :after selector ... is it just me? I mean ... dafuq is that?
Internet Explorer dropped the most misunderstood proprietary feature, as expression was, and WebKit is introducing any sort of unmanageable logic inside CSS3? ... counters? webkitTransitionEnd without webkitTransitionStart? No way to properly trigger transitions, if not via asynchronous rendering? Lots of hacks to make them work cross platform, with control through JS, for something written in CSS?
CSS3 is bringing MVC into the Web world without the C and, if we would like to create robust behaviors/interactions/applications that make sense, programming speaking, we should forget the case where the user has no JavaScript, because CSS3 is indirectly becoming strongly dependent on JavaScript for many reasons ... why not bring standard DOM methods to control states via JS, rather than partially control what is supposed to happen via CSS3 only? I don't get it. I have seen cool stuff, but I have not seen cool stuff adopted cross browser, and I am not seeing cool stuff easy to handle via JS and/or vice-versa.
CSS3 needs notifications into the JS world, or we are going to have Photoshop-like power without the tools to control what we are doing.
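The webkitTransitionEnd point can be made concrete with a hedged sketch: the only notification CSS transitions give to JS is the end event, and even its name must be sniffed per engine. The helper below is illustrative only; the property names are the vendor-prefixed ones engines shipped at the time.

```javascript
// a minimal sketch: picking the engine-specific transition end event name
// from a style object; note there is no *start* counterpart to listen for
function transitionEndName(style) {
  return "transition" in style ? "transitionend" :
         "WebkitTransition" in style ? "webkitTransitionEnd" :
         "MozTransition" in style ? "transitionend" :
         "OTransition" in style ? "oTransitionEnd" :
         null; // transitions not supported: JS must fall back to timers
}
```

In a browser the argument would be something like `document.documentElement.style`; the fact we need this sniffing at all is part of the rant.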

As Summary

I have more things to say, and more I forgot, but I thought a rant on 2011 was needed, at least to check, in one year, what happened, what improved, and what did not change at all.
I hope all the people doing hard work on a daily basis to promote a better web won't take anything I wrote personally, and will simply consider it the point of view of somebody that works with all these things on a daily basis so ... it's my personal feedback on what I think should change ASAP, especially for mobile web development.

Happy New Year Everybody

Wednesday, December 21, 2011

Coercion Performances

There are cases where JS coercion may be wanted/needed/necessary, at least logically speaking.

A classic case is a list of primitives, e.g. strings, or numbers, and a check we would like to perform without creating a new function each time.

// recycled function
function alreadyThere(value) {
  // wanted coercion, no cast needed
  return value == this;
  // shitty code, logically speaking,
  // would be one cast per iteration:
  // return value === "" + this;
}

var listOfNames = ["Me", "You", "Others"];

// the pointless check
listOfNames.some(
  // the recycled callback
  alreadyThere,
  // the *passed as object* this
  "You"
); // will be true

Now, for the above specific case anyone would use indexOf, but this is not the point.
The point is that in some cases we may want to do more complicated stuff and compare the result with this.

// know if the word was already in the dictionary
function alreadyThere(value) {
  return value.toLowerCase() == this;
}

// convert the checked value once, not per iteration
listOfNames.some(
  alreadyThere,
  "yoU".toLowerCase()
); // still true


It's About Optimizations

The whole point is to perform as few runtime computations as possible.
Following this logic, we may decide to lower-case all the items in the list once, and never again, but unfortunately this will require a duplicated amount of RAM per collection.
String#toLowerCase() is fast enough for a not so frequent check, so why bother the RAM?
The optimization is also to avoid this.toLowerCase() for each entry of the dictionary.
The concept here is again simple: avoid duplicated computations over a generic list of strings, but things would be similar with numbers.

Performances

I have created a specific test to understand the performance gap between coercion and a cast per iteration.
Surprisingly, Google Chrome seems able to optimize the per-iteration cast internally, resulting almost twice as fast as competitors.
Especially on mobile, the native coercion seems to be faster, sometimes almost twice as fast as the cast per iteration.
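For reference, a hedged sketch of the kind of test I mean: the list, the sizes, and the timing approach are illustrative only, and Date.now() resolution makes this a rough comparison at best.

```javascript
// a rough micro-benchmark sketch: coercion vs one cast per iteration
var list = [], i;
for (i = 0; i < 1e5; i++) list.push(String(i));

function viaCoercion(value) { return value == this; }
function viaCast(value) { return value === "" + this; }

var t0 = Date.now();
var foundA = list.some(viaCoercion, "99999");
var coercionTime = Date.now() - t0;

var t1 = Date.now();
var foundB = list.some(viaCast, "99999");
var castTime = Date.now() - t1;
```

Both callbacks find the same entry; the only difference is whether the "" + this cast is paid once per iteration.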

JIT Oriented Development ?

I believe we should code logically, rather than trusting JIT optimizations. What I mean is that performance tests are always welcome, but we all know that better logic and algorithms always win in terms of performance and maintainability. In this specific case, especially where mobiles suffer the most, I would never suggest the cast per iteration: first of all because only one engine seems able to optimize such a cast, and we don't know if a more complex object/scenario would perform that well; secondly because, if I see a cast per each loop iteration, I start smelling laziness all over the place ... boxing/unboxing has always been a well known problem, performance speaking, so how can a developer approve a logic similar to (String)alwaysSameObject per each iteration?

As Summary

Tools such as JSLint should just mind their own business in these cases ... coercion is wanted/needed/logical and, unless every browser shows such a gap in common tasks against coercion, I will rarely promote a cast per iteration ... and you know what? I believe that gap in Chrome is a missed optimization in Chrome itself.

Tuesday, December 13, 2011

Please, Give Us Back __noSuchMethod__ !

For those who don't know what __noSuchMethod__ is, here is the quick summary: it was a bloody handy non-standard hook able to provide a fallback whenever we invoked an object method that did not exist.

var o = {};
o.__noSuchMethod__ = function (name, args) {
  alert(name); // "iDoNotExist"
  alert([].slice.call(args)); // 1,2,3
};
o.iDoNotExist(1, 2, 3); // will produce the above alerts


A Bit Of Background

Well, if you are patient enough, you may consider reading this never-ending post in the Mozilla mailing list.
The reason that post is called Proxies: get+fn vs. invoke is because Proxy is supposed to be the new way to go, able to bring us much more power than we will probably ever need ... and hey, this is welcome. What is not welcome is that, first, Proxy may not be implemented soon, which means browser vendors should have waited before removing __noSuchMethod__ 'cause right now we may not have a pseudo equivalent and, second, but surely not less important, Proxy does not provide the same functionality.

The Minified Theory Against The Practice

The main argument from @BrendanEich is that JavaScript has properties only, so that o.fn() is the equivalent of o.fn.apply(o).
While this is true with any normal object, it is totally different with __noSuchMethod__.
The equivalent of __noSuchMethod__ for that operation, behind the scenes, is:
  • is there a property in o or its __proto__ called fn ?
    • yes, proceed as usual, as if it was o.fn.call(o), and throw an error if that property is not callable
    • no, is there a __noSuchMethod__ callback to invoke instead of throwing an error due to the undefined property?
      • yes, perform the current operation: nsm_callback.apply(o, arguments)
      • no, throw an error since the property was undefined and obviously not callable
Got it? The equivalent of o.fn() in an environment where __noSuchMethod__ was supported is potentially different from o.fn.apply(o) ... I am 100% sure Brendan knows this before and better than me, and this is the reason I don't really get his strongest point.
Once again, o.fn() may be the equivalent of nsm_callback.apply(o, arguments), and not of o.fn.apply(o).
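The steps above can be sketched in plain JS; `invoke` is a made-up helper simulating what the engine did implicitly, not a real API.

```javascript
// a sketch of the dispatch described above: property lookup first,
// then the __noSuchMethod__ fallback, then the usual TypeError
function invoke(o, name, args) {
  if (name in o) {
    if (typeof o[name] !== "function") {
      throw new TypeError(name + " is not callable");
    }
    return o[name].apply(o, args); // the usual o.fn.apply(o) path
  }
  if (typeof o.__noSuchMethod__ === "function") {
    return o.__noSuchMethod__(name, args); // the fallback, with o as context
  }
  throw new TypeError(name + " is undefined");
}
```

With this helper, `invoke(o, "fn", [])` dispatches to the method when it exists and to the __noSuchMethod__ callback when it does not.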

The Inexistent Theory Against The Practice

If the above "reason" was not enough, I have read even worse in the same thread. I am sorry guys, but sometimes you must be realistic and understand that if a developer does, for example, this:

// de-context fn from o and invoke it
(o.fn)();

// exact equivalent of
var fn = o.fn; // GETTER, no invocation
// and after ...
fn();
// is this a problem? we have the same with a missing bind then ...

rather than this:

// invoke fn through o as default context
o.fn();
// can we see the difference?

it means that developer has much bigger problems than __noSuchMethod__'s nonexistent ambiguity; that developer does not even know that __noSuchMethod__ exists ... come on!
Going on and on, another point is that get should be all we need to simulate the __noSuchMethod__ behavior through proxies ... but this is completely misleading!

A Getter IS A Getter

It's that trivial ... if we access a generic object property, we are doing nothing different from invoking a getter with such an object as the property context.

o.whatever;
// look for the "whatever" property name in o
// if found, return the "whatever" associated value


o.whatever.call;
// nothing changes, THIS IS NOT AN INVOKE
// look for the "whatever" property name in o
// if found, return the "whatever" associated value
// since the value was a function, the call method is usable
// if whatever was not defined, the call method won't exist

Nobody should ever even consider using a property accessor and expecting a __noSuchMethod__ behavior ... that property did not exist, what kind of method would you expect to look for?
call is a property of Function.prototype so, following the accessor/getter logic, nothing is ambiguous here.
Accordingly, the latter example is simply a mistake that hopefully no developer would ever make ... but you know, shit happens, then we learn, then hopefully we don't repeat the same shit.

Other Programming Languages

When it comes to PHP, they perfectly managed to make this behavior unambiguous through the __call magic keyword in class definitions but no, in JavaScript we decided that we cannot even think of putting an invoke trap to make life easier and completely NOT ambiguous for all of us ... did we?
I still cannot understand where and what the ambiguous part is if we have an explicit invoke declaration ... maybe something a bit harder to solve behind the scenes for these poor JS engines? It could be ... should we all limit JS because of this? I don't think so.

Think About Libraries APIs Migrations

I'll give you the most basic example, the most used JS library with fake getter and setter behaviors: jQuery.

// jQuery simulation of getter and setter behaviors

// the getter
$("body").html(); // returns a string with the content

// the setter
$("body").html("whatever");
// sets the string as the content

If you want, especially for chainability reasons through the simulated setter, the fact that html is a method is convenient for the library, but the library is stuck forever behind these two methods.
jQuery, at the current ECMAScript status, will never be able to switch gradually to real getters and setters ... why is that? A simple example:

$("body").html; // returns the string in jQuery 3000

$("body").html(); // shows a "deprecated warning"
// ... and returns the "html" getter

// implementation example
function setInnerHTML(node) {
  node.innerHTML = this;
}
Object.defineProperty($.fn, "html", {
  get: function () {
    return this[0].innerHTML;
  },
  set: function (html) {
    this.each(setInnerHTML, html);
  }
});

// the deprecation warning
$.fn.__noSuchMethod__ = function (property, args) {
  if (property in this) {
    console.log("Warning: " + property + " is not a method anymore");
    if (args.length) {
      // invoke the setter
      this[property] = args[0];
      // preserve behavior
      return this;
    } else {
      // invoke the getter
      return this[property];
    }
  } else {
    throw "Y U NO READ DOCUMENTATION";
  }
};

That's it: we could migrate between two different APIs, implementing getters and setters wherever we had a similar behavior, bringing users gracefully to the new usage ... but no, we can't!

It Is Not About jQuery

I don't even use jQuery so don't get me wrong, this is not my battle here ... the point is that for another private project I am working on I would like to educate developers to use properties correctly, but I understand developers may already be used to invoking methods as if it were normal, even when they are simply looking for a getter behavior.

var o = {
  whatever: "cool bro",
  __noSuchMethod__: function (property, args) {
    console.log("we got a bro-blem here, " +
      "don't invoke if you want a getter");
    return this[property];
  }
};

// so that
o.whatever === o.whatever();

Especially the last line of code is apparently impossible to reproduce with Proxies, those that are supposed to be the new and best way to go, those that give us control over things rarely needed until now, those that made the JS.Next group decide that __noSuchMethod__ was evil and had to be abandoned.
I really hope that JS.Next will not be driven by non-developer expectations because guys, somebody tries to do cool things with this cool language, and if the reason you drop something is that we are all morons, for example misunderstanding the difference of a property referenced through parentheses ... oh well ... good luck future JavaScript ...
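To see why, here is a sketch using a trap based proxy ( written with the Proxy API as it later shipped, purely for illustration ): the get trap fires on property access, before anybody knows whether the result will be invoked, so a fallback for missing methods can only return a function, breaking plain accessor semantics.

```javascript
var target = { whatever: "cool bro" };
var proxied = new Proxy(target, {
  get: function (obj, property) {
    if (property in obj) {
      // plain getter behavior for known properties
      return obj[property];
    }
    // for missing properties we can only guess: returning a function
    // covers proxied.missing() but breaks proxied.missing as accessor
    return function () {
      return "no such method: " + String(property);
    };
  }
});

proxied.whatever;        // "cool bro"
typeof proxied.missing;  // "function", not undefined
// proxied.whatever === proxied.whatever() cannot work:
// "cool bro" is not callable, and the trap cannot intercept the call
```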

How To Solve This

Please put an invoke, or even better an invokeProperty, preserving invoke for when the object itself is used as if it were callable, into the current Proxy specifications, so that whoever knows what they are doing can keep doing it, and whoever never bothered with this stuff won't be affected at all.
Thank you for listening.

Monday, December 12, 2011

Create a JS builder with node.js

A very good and common practice with JS projects bigger than 100 lines of code is to split the code into different files.
Benefits are clear:
  • smaller pieces of code to maintain
  • swappable portions for experiments and/or improvements or new features, for example including magic2.js in a build rather than drastically changing magic.js and following the repository logs
  • better organization of the code, and I'll come back to this later in the post
  • possibility to distribute bigger closures, for example the jQuery approach
  • creation of ad hoc builds, including or excluding portions of the library, especially suitable for specific versions of the code that must be compatible with IE only


Solutions All Over The Place

There are really tons of solutions able to make the described build process easy to use. As an example, I have created my own one and I am using it with basically every project I work on: the JavaScript Builder.
However, this builder requires a couple of extra technologies such as Python and Java ... but aren't we simply using JavaScript?
So why not an easy guide on how to build your code via JS only?
This is what this post is about, and I hope you'll find it useful.

How To Structure Your Project

If all files are in the same directory it is not easy to find the right file immediately, since there could be many. A good solution I came up with is a folder related structure with both namespaces and private keyword paths.
Here is an example of how I would structure this library ( and please ignore the library itself )

var myLib = (function (global, undefined) {"use strict";

  // private scope function
  function query(selector) {
    return document.querySelectorAll(selector);
  }

  function Wrapper(nodeList) {
    this.length = nodeList.length;
    this._list = nodeList;
  }

  // a prototype method of the Wrapper "class"
  Wrapper.prototype.item = function item(i) {
    return this._list[i];
  };

  // public static query method
  query.asWrapper = function (selector) {
    return new Wrapper(query(selector));
  };

  var // private scope variables
    document = global.document,
    slice = [].slice
  ;

  // the actual object/namespace
  return {
    query: query,
    internals: {
      Wrapper: Wrapper
    }
  };

}(this));

The code should be easy enough to understand. The object used as namespace for myLib has a couple of methods, a few private variables and functions, and something exposed through the internals namespace.
It does not matter what the library does or how well or badly it is structured; what matters is that our folder structure should be smart enough to scale with any sort of allowed JS pattern ... OK?

The Folder

Well, to start with, let's say our source code should live inside an src folder, so we can add other folders for tests or builds beside it in the same hierarchy.

dist
src
tests
builder.js

We'll see the builder.js later, in the meanwhile, let's have a look into the src folder:

dist
src
  intro.js
  outro.js
  var.js
  function
    Wrapper.js
    query.js
    Wrapper
      prototype
        item.js
    query
      asWrapper.js
tests
builder.js

The distinction will be much clearer once you read the above list through your editor or even your shell ... query and Wrapper files are well distributed, but bear in mind this is only the first example.
Let's see what we are going to write into each file.

src/intro.js

var myLib = (function (global, undefined) {"use strict";


src/function/query.js

// private scope function
function query(selector) {
  return document.querySelectorAll(selector);
}


src/function/Wrapper.js

function Wrapper(nodeList) {
  this.length = nodeList.length;
  this._list = nodeList;
}


src/function/Wrapper/prototype/item.js

// a prototype method of the Wrapper "class"
Wrapper.prototype.item = function item(i) {
  return this._list[i];
};


src/function/query/asWrapper.js

// public static query method
query.asWrapper = function (selector) {
  return new Wrapper(query(selector));
};


src/var.js

var // private scope variables
  document = global.document,
  slice = [].slice
;


src/outro.js

// the actual object/namespace
return {
  query: query,
  internals: {
    Wrapper: Wrapper
  }
};

}(this));

Got it?

Structure Rules

  • every part of the scope can be distributed
  • a single file may or may not be parsable as stand-alone code, because to test the library we need to build it first ( eventually with automations )
  • function declarations should be included in a dedicated folder called function, accordingly with the nested level
  • var declarations per scope could be included in a folder called var, accordingly with the nested level. Do not create a var folder per each function where you define variables, because if you need it, it means the function is too complex: split it into sub tasks and do not define 100 variables per single function. Closures are the only exception.
  • nested closures must be named, in order to be able to define a nested closure structure following the previous rules. Every minifier will be able to remove function expression names, including named closures, while not every developer would like to deeply understand the whole code to recognize why the nested closure was useful. A classic example is the inclusion, inside our own closure, of an external library that uses its own closure: in this case name that closure, so you know where to look for the library inside your folder structure.
  • function prototypes should be placed inside a prototype folder, inside the function folder.
    We don't need to reassign an object when we want to pollute a function prototype, so please stop this awkward common practice ASAP: MyFunction.prototype = { /* THIS IS WRONG */ } and use the already available prototype object, defined by default per each function declaration or expression in every ECMAScript standard.
    If your argument is that the code will be bigger, use the outer scoped variables definition to address the prototype once, and reuse this reference within the prototype folder. This approach will make your life easier once you get used to working with structured and distributed JavaScript files.

Especially regarding the last rule: we could have set a shortcut to the Wrapper.prototype object in the var.js file and reused that reference inside the Wrapper folder.
The structured folders will always help you find references in the library, thanks to the lookup that you, as well as the code, have to do.

// in the var.js file
WrapperPrototype = Wrapper.prototype,

// in the Wrapper/prototype/item.js file
WrapperPrototype.item = function item(i) { ... };


The Order Partially Matters

In ECMAScript 3rd or higher edition, function declarations are always available at the very beginning of the scope. I really don't know why these are so underrated in daily code ... the fact they are always available means we can reference their prototype at any moment in our code:

var internalProto = (function () {

  // address any declaration made in this scope
  var WhateverPrototype = Whatever.prototype;
  return WhateverPrototype;

  // even if defined after a return!!!
  function Whatever() {}
}());

alert(internalProto); // [object Object]

Now, the above code is simply a demonstration of how function declarations work ... I am not suggesting a return in the middle with declarations after it; all I am saying is that the order of things in JavaScript may not be relevant, and function declarations are a perfect example.
Another example is the usage of variables ... if a function, as declaration or as expression, references a variable defined in the outer scope, nothing will break unless we invoke that function before the referenced variable has been assigned.
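A minimal sketch of both points:

```javascript
// function declarations are hoisted, so sum exists up front;
// the outer variables only need a value before sum is invoked
function sum() {
  return a + b;
}

// invoking sum() up here would return NaN: a and b exist
// thanks to var hoisting, but they have not been assigned yet

var a = 1, b = 2;

console.log(sum()); // 3
```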

These are really ABC concepts we should all know about JS before even claiming that we know JavaScript ... OK?
It is really important to get these points because, to keep the builder file simple, we need to rely on these assumptions.

The builder.js File

It's time to create the magic file that will do the job for us, possibly in a smart way, so that we can cover all the edge cases we can think of.
This is the content of the builder.js file, in the root of our project:

// @name builder.js
// @author Andrea Giammarchi
// @license Mit Style License

// list of files to include
var
  scriptName = "myLib", // the namespace/object.project name
  fileList = [
    "intro.js",   // beginning of the closure
    "var.js",     // all declared variables
    "function/*", // all declared functions
    "function/Wrapper/prototype/*", // all methods
    "function/query/*", // all public statics
    "outro.js"    // end of the library
  ],
  fs = require("fs"), // file system module
  out = [],           // output
  alreadyParsed = []  // parsed files for visual feedback
;

// per each file in the list ...
fileList.forEach(function addFile(file) {
  // if the file contains a wild char ...
  if (file.charAt(file.length - 1) == "*") {
    // read the directory and per each file found there ..
    fs.readdirSync(
      __dirname + "/src/" + file.slice(0, -2)
    ).forEach(function (file) {
      // if the file type is js
      // and the full path has not been defined explicitly
      // in the original list
      if (
        file.slice(-3) == ".js" &&
        fileList.indexOf(this + file) < 0
      ) {
        // call this same function providing the whole path
        addFile(this + file);
      }
      // the path is passed as context to simplify the logic
    }, file.slice(0, -1));
  // if the file has not been included yet
  } else if (alreadyParsed.indexOf(file) < 0) {
    // put it into the list of already included files
    alreadyParsed.push(file);
    // add the file content to the output
    out.push(fs.readFileSync(__dirname + "/src/" + file));
  } else {
    // if here, we are messing up with inclusion order
    // or files ... it's a nice to know in console
    try {
      console.log("duplicated entry: " + file);
    } catch(e) {
      // shenanigans
    }
  }
});

// put all ordered content into the destination file inside the dist folder
fs.writeFileSync(__dirname + "/dist/" + scriptName + ".js", out.join("\n"));

// that's it

The reason there are so many checks when a wild char is encountered is quite simple ... the order may not matter, but in some cases the order matters.
If, as example, a prototype property is used at runtime to define other prototype methods or properties, it cannot be pushed into the output randomly: it must come at the very beginning. Example:

// src/function/Wrapper/prototype/behavior.js
WrapperPrototype.behavior = "forEach" in [];

// src/function/Wrapper/prototype/forEach.js
WrapperPrototype.forEach = WrapperPrototype.behavior ?
  function (callback) { [].forEach.call(this._list, callback, this); } :
  function (callback) { /* a shim for non ES5 compatible browsers */ }
;

Being file 2 strongly dependent on file 1, the list of files could be written like this:

fileList = [
  "intro.js",   // beginning of the closure
  "var.js",     // all declared variables
  "function/*", // all declared functions
  "function/Wrapper/prototype/behavior.js", // precedence
  "function/Wrapper/prototype/*", // all methods
  "function/query/*", // all public statics
  "outro.js"    // end of the library
],

When the wild char is encountered and behavior.js is passed to the forEach callback, it will simply be skipped, since it has been pushed already by the previous entry.
The same concept applies if a specific file must be parsed at runtime at the end:

fileList = [
  "function/Wrapper/prototype/behavior.js", // precedence
  "function/Wrapper/prototype/*", // all methods
  "function/Wrapper/prototype/doStuff.js"  // after all
],

I believe these are edge cases most of the time, but at least now we can better understand what the builder will do.

How To Use The Builder

In console, inside the project folder where the builder.js is:

node builder.js

That's pretty much it ... if you open dist/myLib.js after the above call, you will find your beautiful library all in one piece, ready to be minified, debugged, and tested.
If the process does not take a long time, you may bind the builder to the Control+S action, with a potential sentinel able to inform you if any problem occurred, for example checking if the output has been polluted with some redundant file logged during the process.

As Summary

All these techniques may be handy for many reasons. First of all it's always good to maintain a structure, rather than a single file with thousands of lines of code; secondly, once we understand how the process works, nothing can stop us from improving it, changing it, making it ad hoc for anything we may need, such as regular expressions to strip out some code before the output push, or whatever else could come up for some reason at some point.
The minification can be done the way you prefer, for example adding this single block of code at the end of the process, assuming you have a jar folder with, as example, the Google Closure Compiler.

require('child_process').exec(
  ['java -jar "',
    __dirname + "/jar/compiler.jar",
    '" --compilation_level=SIMPLE_OPTIMIZATIONS --language_in ECMASCRIPT5_STRICT --js "',
    __dirname + "/dist/" + scriptName + ".js",
    '" --js_output_file "',
    __dirname + "/dist/" + scriptName + ".min.js",
    '"'].join(""),
  function (error, stdout, stderr) {
    if (error) console.log(stderr);
  }
);

Enjoy your new builder :)

Tuesday, December 06, 2011

On JSON Comments

Another active exchange with @getify about JSON comments, and here is my take, because tweets are cool but sometimes it's hard to tell everything you think in 140 bytes ...

Kyle Facts

JSON is used on a daily basis for billions of things, and configuration files are one, surely common, way to use JSON ( just think about npm packages for node.js ).
His frustration about the fact that JSON does not allow comments is understandable, and he even created an online petition about allowing comments in the JSON specs ... but are comments really what we need?

Just A Side Effect

JSON is extremely attractive as a standard, first of all because it's available and widely adopted by basically any programming language in this world, even those that never had to deal with a single JavaScript interlocutor, and secondly because it's both simple to parse and easy to read for humans.
After all, what can be so wrong about comments inside such a common serialization standard?
Aren't comments just as easy to parse as white spaces?
The problem is, in my opinion, that we are mixing up an easy process to serialize data, much easier compared to what PHP serialize and unserialize functions do, with the possibility to describe it.
I have always seen JSON as a protocol, rather than a YAML substitute, and from a protocol I expect to be as compact as possible and as cross platform as possible.
Especially about the latter point:
that's annoying to port to every single language
Precisely. What we should understand is that JSON became so popular without needing comments; maybe the fact that today we would like to use it as a "descriptive markup" no longer reflects the success, the adoption, and the possibilities this standard brought to all these languages.

Improve The Standard

Thank goodness software is not always stuck behind immutable standards or patents ... and neither is JSON.
If the need for comments is such a big topic, define another standard able to combine the good old one with a new one.
If this new standard is truly what developers need, every JSON implementor will spend a few hours to test and optimize the well defined standard in order to accept comments, won't they?
I mean ... RFC 4627 was not meant to be the final solution, it was a Crockford proposal universally adopted.
Creating a new standard able to extend RFC 4627 should not be a big problem ... or maybe ...

JSON Is Not A JavaScript Thing

The JSON serialization looks just like JavaScript ... but not JavaScript only.
Other programming languages use curly brackets and square brackets to define lists and objects ( Python and others ) ... the fact JSON has been accepted so well is probably because the design of the format was indeed already widely adopted; it was not a JS thing, and it never should be.
The deal here is to define a standard for JSON comments.
Let me better explain this point ...

I am a JS developer and I edit my JSON files via my JS editor ... fair enough ... I want to communicate my data to a server side service, let's say Python.
Python would like to be able to parse my data and produce a file compatible with ... Python, of course.
Does it mean that Python at that point should keep comments in a JavaScript style? And why, since the format used to exchange data was already somehow evaluable via Python, and now not anymore, due to some double slash in the file?

# this is PYTHOOOOOOOOOON
o = {"test": "data"}
o.get("test") # data

Will the renewed JSON work as well?


# still python
o = {"test": "data"} // and some WTF
o.get("test")

>> SyntaxError: invalid syntax

Well done ... by making JSON comments a JS thing, Python, like other languages, needs extra effort to parse the data.
What will happen once Python uses the json library?

import json

json.loads(theReceivedData)

Should it print data with Python compatible comments, or with JS compatible comments, so that the data is not directly usable anymore by any Python application storing it, as example, in a file.py or a database?
And once the renewed JSON has been transformed into Python valid syntax, wouldn't this file not be usable anymore by all other programming languages, due to possible syntax errors?

Not That Easy

I like JSON as it is, except a few broken implementations quite common even in browsers, because it's about creating a bloody piece of text many other languages can understand basically on the fly.
No need to remember which comment style has been saved with that version of JSON, no need to parse back and, finally, no need to ask every single programming language that is using JSON as protocol to update their legacy ... it just worked, and it will always work for what it was meant for: transferring data, not transferring JS like code without functions and/or function calls ...
What we are doing, we as JavaScript developers, is abusing JSON as if it were a piece of our JS code, polluting it today with comments, and who knows what else tomorrow.
The right way to go, still in my opinion, would be, once again, to enrich, propose, create a new standard that allows comments and, why not, other features.
As example, what I always found annoying is that in PHP we can unserialize preserving the class, so that we can serialize object states ... where is this in JSON?
Nowhere. Indeed I have spent, in my past, a few hours trying to enrich this protocol ... did it work? Was it interesting? Probably not, except for my last attempt, which is 100% based on current JSON and is about optimizing bandwidth and performances ... that worked better, accordingly with the JSONH github status; still, I was not expecting everyone that never had this problem to adopt that approach ... you know what I mean?
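Just to show the direction such an enriched standard could take, here is a sketch, not any real proposal, of preserving a "class" through plain JSON plus a reviver, using a hypothetical @class marker:

```javascript
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// serialize with a marker ( a hypothetical convention, not a standard )
var serialized = JSON.stringify({"@class": "Point", x: 1, y: 2});

// revive instances of known classes while parsing
var restored = JSON.parse(serialized, function (key, value) {
  if (value && value["@class"] === "Point") {
    return new Point(value.x, value.y);
  }
  return value;
});

restored instanceof Point; // true
restored.x + restored.y;   // 3
```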

Still Valid JSON

If it's about writing comments, the next snippet is perfectly valid JSON text. All we need to do is to use the reviver in a proper way:

{
  "@description": "something meaningful",
  "property": "value",

  "@description": "something else",
  "other": "other value"
}

// parse above text via JSON
console.log(JSON.parse(aboveText, function (key, value) {
  if (key.charAt(0) != "@")
    return value;
}));

Here we are with our object, comments manually written in the original JSON, and every language able to parse it.

Wednesday, November 30, 2011

Array extras and Objects

When Array extras landed in JavaScript 1.6 I had, probably together with many other developers, one of those HOORRAYYY moments ...
What many libraries and frameworks out there still implement is this sort of universal each method that is supposed to be compatible with both Arrays and Objects.

A Bit Messed Up

What I have never liked much about these each methods is that we have to know in advance, in any case, if the object we are passing is an Array, an ArrayLike one, or an Object.
In the latter case, the callback passed as second argument will receive as its second argument the key, and not the index, which simply means we cannot trust a generic callback unless it checks, per each iterated item, the type of the second argument, or unless we don't care about the second argument at all.
In any case, I have always found this a bad design. If we think about events, as example, it's totally natural to expect a single argument as event object, and then we can act accordingly.
This lets us reuse callbacks for similar purposes and maintain DRY code.
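A trivial sketch of the signature clash:

```javascript
// a generic callback must inspect its second argument to know
// whether it received an array index or an object key
function describe(value, indexOrKey) {
  return (typeof indexOrKey == "number" ? "index " : "key ") +
    indexOrKey + ": " + value;
}

["a", "b"].map(describe);
// ["index 0: a", "index 1: b"]

// while a typical library each(object, callback) would pass keys,
// producing "key a: 1" style results for {a: 1}
```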

Need For An Object#forEach

All implementations of each ( as far as I know with the only exception of jQuery, which makes things even more complicated since we generally have to completely ignore the first argument ) have some natural confusion inside the method.
If you take the underscore.js library, as example, you will note that there are two aliases for the each method, each itself and forEach, so it's more than clear to me that JS developers are missing an Array#forEach like method to iterate over objects, rather than lists.
It must also be underlined that all these methods are somehow error prone: what if the object we are passing has a length property that does not necessarily point to the number of items stored via index, as if it were an Array?
You may consider this an edge case, or an anti pattern; then you have to remember that functions in JavaScript are first class objects.
All these methods will probably fail nicely indeed with functions, passed as objects, whenever you decide that your function can be used as an object too.

var whyNot = function (obj) {
  /* marvelous stuff here */
  this.calls++;
  return this.doStuff(obj);
};
whyNot.calls = 0;
whyNot.doStuff = function (obj) {
  /* kick-ass method */
};

// the unexpected but allowed
whyNot = whyNot.bind(whyNot);

whyNot.length; // 1
whyNot[0]; // undefined

By design, the length of any function in JavaScript is read-only and, in terms of Array iteration, means nothing: it is simply the number of arguments the function defined during its declaration or definition as expression.

WTF

Whether the above example makes sense or not, I am pro pattern exploration, and when a common method is not compatible with all scenarios, I simply think something went wrong or is missing in the language.
Thank goodness JS is freaking flexible, and with ES5 we can define prototype methods without affecting for( in ) loops, hopefully simplifying our daily basis stuff.
Remember? With underscore or others we still have to know in advance if the passed object is an Array, an ArrayLike, or a generic object ... so what would stop us from simply choosing accordingly?

// Array or ArrayLike
[].forEach.call(genericArrayLike, callbackForArrays);

// generic object to iterate
({}).forEach.call(object, callbackForObjects);

An explicit choice, as in the above case, is the fastest and most reliable way we have to do things properly. A DOM collection, as well as any array or arrayLike object, will use the native forEach, but we can still recycle callbacks designed to deal with value, key and object, rather than value and index ... and this is the little experiment:

Object extras

The concept of each callback is exactly the same as the original, native, Array callbacks, except things are based on native functions available in all ES5 compatible desktop and basically all mobile browsers, and easy to shim in all others too old to deal with JS 1.6 or higher.
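The gist itself is embedded in the original post; as a rough sketch, assuming Object.keys based iteration and non enumerable Object.prototype definitions ( an illustration of mine, not the gist's exact code ), such Object extras could look like:

```javascript
// define every/filter/forEach/map/some on Object.prototype,
// mirroring the Array extras signatures with (value, key, object)
["every", "filter", "forEach", "map", "some"].forEach(function (name) {
  Object.defineProperty(Object.prototype, name, {
    // non enumerable by default: for/in loops are not affected
    configurable: true,
    writable: true,
    value: function (callback, context) {
      var self = Object(this),
          keys = Object.keys(self),
          result = name == "filter" || name == "map" ? {} : void 0,
          i, key, value;
      for (i = 0; i < keys.length; i++) {
        key = keys[i];
        value = callback.call(context, self[key], key, self);
        if (name == "every" && !value) return false;
        if (name == "some" && value) return true;
        if (name == "filter" && value) result[key] = self[key];
        if (name == "map") result[key] = value;
      }
      return name == "every" ? true :
             name == "some" ? false :
             result; // filter/map return a new object, forEach undefined
    }
  });
});
```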



Here a couple of examples:

var o = {a: "a", b: "b", c: ""};

// know if all values are strings
o.every(function (value, key, object) {
  return typeof value == "string";
}); // true

// filter by content, no empty strings
var filtered = o.filter(function (value, key, object) {
  return value.length;
}); // {a:"a",b:"b"} // original object preserved

// loop through all values (plus checks)
o.forEach(function (value, key, object) {
  object === o; // true
  this === o; // true
  if (key.charAt(0) != "_") {
    doSomethingWithThisValue(value);
  }
}, o); // NOTE: all these methods respect Array extras signatures

// map a new object
var mapped = o.map(function (value, key, object) {
  return value + 1;
}); // {a:"a1",b:"b1",c:"1"} // original object preserved

// know if a value contains "a"
o.some(function (value, key, object) {
  return value === "a";
}); // true

The reason reduce and reduceRight are not in the list is simple: which one would be the key to preserve, the first of the list? There is no such thing as a "predefined for/in order" in JavaScript, plus these methods are more Array related, so they are out of this experiment.

As Summary

Once minified and gzipped, the gist weighs about 296 bytes, which is a ridiculous size compared with any application we are dealing with on a daily basis.
Especially forEach, but probably the others too, may become extremely handy and ... of course, using the Object.keys method internally, this is gonna be compatible with Arrays too but hey, the whole point was to make a clear distinction ;)


[edited]

The Misleading Signature

I don't know how many times I have spoken with jQuery developers, just because they are common, convinced that the native Array#forEach was accepting the value as second argument.
I have always considered inverted signatures, whatever API it is, bad for both performances, since there is no possibility to fall back into some native method, and for the learning curve, where newcomers learn that a generic each method must have the index as first argument.
Bear in mind that whenever we loop, we are most likely interested in the value at that index or key, so this value should be the first, and if you need only one, the only, argument passed through the procedure.
A completely ignored first argument is, once again and in my opinion, bad design for an API: stuck without native power, teaching that arguments order is not relevant.
Well, especially the latter point would be true if we had named arguments, but in JS nothing has been planned so far, and in ES6 the way we are gonna name arguments is still under discussion.

Have fun with JS

Sunday, November 27, 2011

About Felix's Style Guide

This was a long sarcastic post that didn’t produce the desired effect.

I have decided to remove it from the Internet and summarize what I think about style guides in few points:

  • read them if you think you need to
  • keep in mind meaningful points
  • use common sense instead of stubborn mechanic rules
  • don’t impose rules to your team; discuss what’s worth discussing

Use a linter at least to catch obvious mistakes and write code as relaxed and reasonable as you can.

We really don’t have to make programming that boring, neither to stress ourselves with stubborn rules that at the end of the day won’t improve the overall code quality at all.

The end.

Saturday, November 26, 2011

JSONH New schema Argument

The freaking fast and bandwidth saver JSONH Project finally has a new schema argument, added at the end of every method, in order to make nested Homogeneous Collections automatically "packable". Here is an example:

var
  // the nested objects' b property
  // holds the same homogeneous collections
  // in properties c and d
  schema = ["b.c", "b.d"],

  // test case
  test = [
    { // homogeneous collections in c and d
      b: {
        c: [
          {a: 1},
          {a: 2}
        ],
        d: [
          {a: 3},
          {a: 4}
        ]
      }
    }, {
      a: 1,
      // same homogeneous collections in c and d
      b: {
        c: [
          {a: 5},
          {a: 6}
        ],
        d: [
          {a: 7},
          {a: 8}
        ]
      }
    }
  ]
;

The JSONH.pack(test, schema) output will be the equivalent of this string:

[{"b":{"c":[1,"a",1,2],"d":[1,"a",3,4]}},{"a":1,"b":{"c":[1,"a",5,6],"d":[1,"a",7,8]}}]


How Schema Works

It does not matter if the input is an object or a list of objects, nor does it matter if it has nested properties.
As soon as there is a homogeneous collection somewhere deep in the nested chain, common to all other levels, the schema is able to reach that property and optimize it directly.
Objects inside objects do not need to be the same or homogeneous; they can simply have a unique property which is common to all items, and this is enough to take advantage of the schema argument, which can be one string, or an array of strings.
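The core transformation on a single homogeneous collection can be sketched like this ( a simplified illustration of the packing idea, not the real JSONH implementation ):

```javascript
// turn [{a: 1}, {a: 2}] into [1, "a", 1, 2]:
// number of keys, the keys themselves, then all values in order
function packCollection(list) {
  var keys = Object.keys(list[0]),
      out = [keys.length].concat(keys);
  list.forEach(function (item) {
    keys.forEach(function (key) {
      out.push(item[key]);
    });
  });
  return out;
}

JSON.stringify(packCollection([{a: 1}, {a: 2}])); // '[1,"a",1,2]'
```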

Experimental

Not because it does not work, I have added tests indeed, but simply because I am not 100% sure this implementation covers all possible cases. I would rather keep it simple and let developers deal with more complex scenarios via manual parsing through JSONH.pack/unpack without schema ... this is still possible, as it has always been.
Let me know what you think about the schema; if accepted, I will implement it in Python and PHP too, thanks.

Friday, November 25, 2011

On Complex Getters And Setters

A common use case for getters and setters involves scalar values rather than complex data.
Well, this is just a limit in the programmer's mind, since the data we set, or get, can of course be much more complex: here is an example.

function Person() {}
Person.prototype.toString = function () {
  return this._name + " is " + this._age;
};
// the magic identity configuration object
Object.defineProperty(Person.prototype, "identity", {
  set: function (identity) {
    // do something meaningful
    this._name = identity.name;
    this._age = identity.age;
    // store identity for the getter
    this._identity = identity;
  },
  get: function () {
    return this._identity;
  }
});

With above pattern we can automagically update a Person instance name and age through a single identity assignment.

var me = new Person;
me.identity = {
  name: "WebReflection",
  age: 33
};

alert(me); // WebReflection is 33

While the example may not make much sense, the concept behind could be extended to any sort of property of any sort of class/instance/object.
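As a sketch of that extension, any group of related scalars can travel through one assignment ( a hypothetical Box class, just for illustration ):

```javascript
function Box() {}
Object.defineProperty(Box.prototype, "size", {
  set: function (size) {
    // one assignment updates two internal properties
    this._width = size.width;
    this._height = size.height;
  },
  get: function () {
    return {width: this._width, height: this._height};
  }
});

var box = new Box;
box.size = {width: 3, height: 2};
box.size.height;          // 2
box._width * box._height; // 6
```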

What's Wrong

The problem is that the setter does something in order to keep the object updated in all its parts, while the getter does nothing different from returning the identity reference.
In summary, the problem is with the getter, and the reason is simple: from the user/coder perspective this may not make sense!

me.identity.name = "Andrea";

alert(me); // WebReflection is 33

In a few words, we are changing a property of the object used as identity for the me variable while leaving the instance of Person untouched ... and semantically speaking this looks wrong!

A Better Approach

If the setter's aim is to update a state, there is only a well known set of properties to re-assign before this state can be considered updated.
In this example we would like to be sure that, as soon as the identity has been set, the instance's toString method produces the expected result without relying on an external reference, the identity object itself.

// listOfNames and listOfAges are external Arrays
// with the same length
for (var
  identity = {},
  population = [],
  i = 0,
  length = listOfNames.length;
  i < length; ++i
) {
  // reuse a single object to define identities
  identity.name = listOfNames[i];
  identity.age = listOfAges[i];
  population[i] = new Person;
  population[i].identity = identity;
  // outside this loop we want each
  // instanceof Person to print out
  // the right name and age
}

// we cannot do it lazily in the toString method ...
// this will fail indeed, since the identity object is recycled
Person.prototype.toString = function () {
  return this._identity.name + " is " + this._identity.age;
};

Got it? It's basically what happens when we use an object to define one or more properties, or to create an object inheriting from another one: properties and values are read and assigned at that moment, not afterwards!

var commonDefinition = {
  enumerable: true,
  writable: true,
  configurable: false
};

var
  name = (commonDefinition.value = "a"),
  a = Object.defineProperty({}, "name", commonDefinition),
  name = (commonDefinition.value = "b"),
  b = Object.defineProperty({}, "name", commonDefinition)
;
alert([
  a.name, // "a"
  b.name  // "b"
]);

As we can see, if the property assignment were lazy the result of that alert would have been "b", "b", since the object used to define these properties has been recycled ... I hope you are still following ...

A Costly Solution

There is one approach we may consider to make this identity consistent: storing an identity object whose own properties are themselves getters and setters.

// the magic identity configuration object
Object.defineProperty(Person.prototype, "identity", {
  set: function (identity) {
    var self = this;

    // do something meaningful
    self._name = identity.name;
    self._age = identity.age;

    // store identity as a fresh new object with bound behavior
    this._identity = Object.defineProperties({}, {
      name: {
        get: function () {
          return self._name;
        },
        set: function (name) {
          self._name = name;
        }
      },
      age: {
        get: function () {
          return self._age;
        },
        set: function (age) {
          self._age = age;
        }
      }
    });
  },
  get: function () {
    return this._identity;
  }
});

What changed? The fact that now we are able to pass through the instance and operate through the identity:


me.identity.name = "Andrea";

alert(me); // Andrea is 33

Cool, huh? ... well, it could be better ...

A Faster Solution

If for each Person and each identity we have to create a fresh new object plus at least 4 functions, 2 getters and 2 relative setters, our memory will fill up so quickly that we soon won't be able to define any other person identity. And here comes the fun part: use the knowledge gained from this pattern internally!
Yes, what we can do to make things slightly better is recycle the internal identity property definition, so that at most 4 functions are shared rather than 4 extra created per Person instance ... sounds cool, uh?

function Person() {}
Person.prototype.toString = function () {
  return this._name + " is " + this._age;
};

(function () {

  var identityProperties = {
    name: {
      get: function () {
        return this.reference._name;
      },
      set: function (name) {
        this.reference._name = name;
      },
      configurable: true
    },
    age: {
      get: function () {
        return this.reference._age;
      },
      set: function (age) {
        this.reference._age = age;
      },
      configurable: true
    },
    reference: {
      value: null,
      writable: true,
      configurable: true
    }
  };

  Object.defineProperty(Person.prototype, "identity", {
    get: function () {
      return this._identity;
    },
    set: function (identity) {
      // something meaningful
      this._name = identity.name;
      this._age = identity.age;

      // set the reference to the recycled object
      identityProperties.reference.value = this;

      // define the _identity property
      if (!this._identity) {
        Object.defineProperty(
          this, "_identity", {value: {}}
        );
      }
      Object.defineProperties(
        this._identity,
        identityProperties
      );
    }
  });

}());

var
  identity = {},
  a = new Person,
  b = new Person
;

identity.name = "a";
identity.age = 30;
a.identity = identity;

identity.name = "b";
identity.age = 31;
b.identity = identity;

alert([
  a, // "a is 30"
  b  // "b is 31"
]);

So what do we have there? A lazy _identity definition, good for those scenarios where some getter or setter may never be invoked, plus a smartly recycled property definition through a single object descriptor. Performance is boosted N times per instance, since no separate functions are assigned per identity getter and setter and no extra objects are created at runtime ... arf, arf ... are you still with me?

As Summary

Some JS developers keep asking for standard ways to do these kinds of crazy things without realizing that few other programming languages are as flexible as JavaScript is. It's maybe not that simple to find better patterns, optimized in terms of both memory consumption and raw performance, to cover weird scenarios, but what we should appreciate is that with JS we almost always have a way to simulate something we did not even think about until the day before.
Have fun with JS ;)

Monday, November 21, 2011

Differential Background Scrolling

A quick one about a technique quite common in Flash sites but rarely seen on the Web.
Have a look at the example first so you can understand what I am talking about ... got it ?

What Is This About

Let's say we have a background, a big massive graphic surely not suitable for mobile phones, due to data roaming, but maybe cool for desktops on fast ADSL.
The background-size CSS property lets us decide whether the image used as background should fit the whole element or only a portion of it.
In this case the image should fit, by default, the whole height of the document, with an auto width so that the browser adjusts the scale.
The differential scrolling becomes visible the moment we scroll the page ... please resize the window to a smaller one if you are on an HD monitor, and start scrolling the page.
At the very beginning the height of the image is 100%, as body background, with some padding to leave space for a few important parts of the image: the header, with clouds and enough room for an H1 tag, and the bottom, with the stylish logo of this game from elderscrolls.com, whose page inspired this little experiment. Bear in mind no code has been read from that website ... I have seen the effect, I have used it many times ages ago via ActionScript, and I decided to do something similar for the most advanced browsers, so here it is ...

The Code


(function (document) {
  // (C) WebReflection - Mit Style Licence
  var
    ratio = .85, // 0 to 1 where 1 is 100%

    // shortcuts
    styleSheets = document.styleSheets,
    documentElement = document.documentElement,
    ceil = Math.ceil,
    scroll = "scroll",
    scrollHeight = scroll + "Height",
    scrollTop = scroll + "Top",
    body, sHeight, sTop, y, last
  ;
  styleSheets = styleSheets[styleSheets.length - 1];
  // redefine the rule for the height
  styleSheets.insertRule(
    "body{background-size:auto " + ceil(
      ratio * 100
    ) + "%;}",
    styleSheets.cssRules.length
  );
  // get the rest of the ratio
  ratio = 1 - ratio;
  // attach a scroll listener
  addEventListener(scroll, function (e) {
    if (body || (body = document.body)) {
      sHeight = documentElement[scrollHeight] ||
                body[scrollHeight];
      sTop = documentElement[scrollTop] ||
             body[scrollTop];
      y = ceil(
        ratio * sHeight * sTop / (sHeight - innerHeight)
      );
      // this avoids some redundant assignments,
      // hopefully creating less flickering
      if (last != y) {
        body.style.backgroundPosition = "center " + (last = y) + "px";
      }
    }
  }, false);
  // you may want to try this for Chrome browsers
  //documentElement.style.WebkitTransform = "translateZ(0)";
}(document));

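The offset math from the listing above can be isolated into a pure function, shown here as a sketch to make the behavior easier to reason about (the function and parameter names are mine, not part of the original snippet):

```javascript
// scrollTop goes from 0 to (scrollHeight - viewportHeight);
// the background is shifted through the leftover (1 - ratio)
// portion of the page, so it scrolls slower than the content
function backgroundOffset(ratio, scrollHeight, scrollTop, viewportHeight) {
  return Math.ceil(
    (1 - ratio) * scrollHeight * scrollTop / (scrollHeight - viewportHeight)
  );
}
```

With ratio 0.75 and a 2000px document in a 600px viewport, the offset grows linearly from 0 at the top to 500px at the bottom of the scroll range.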

The Problem

Many of them ... starting with the fact that this technique does not scale as shown in this example: for mobile phones, or generally speaking smaller screens, it does not make sense to use such a big image ... use media queries for that.
Opera 12 is almost there, but something goes terribly wrong during background repositioning ... it's off by N pixels even though the rest of the logic works and no error shows up in the console.
Firefox Nightly goes quite well but still flickers a bit, while Safari, and even better WebKit Nightly, are the smoothest on this Mac.
The disaster is Chrome Canary, which is not able to handle this background repositioning.
You can see the effect if you scroll fast on both the inspiration site and my experiment and, as commented out in the code, the only way to make it better is to force HW acceleration on the whole document, 'cause on the body only the background looks broken ... it's really cool to see how the DOM is able to mess with GPUs, isn't it?

As Summary

Nothing much to add to this post: it was just a quick example of a cool effect but, as ever in this Web field, almost everything went terribly wrong :D
Have fun with CSS and graceful JS enhancements!

Saturday, November 12, 2011

Few JavaScript Patterns

Just to be clear, and once again: JavaScript can emulate:
  • classes
  • public and public static methods or properties
  • private and private static methods or properties
  • public and private constants
  • protected methods
  • ... you name it ...

// duck typing ( maybe all you need )
var me = {name: "WebReflection"};

// basic class
function Person() {}
Person.prototype.getName = function () {
  return this.name;
};
Person.prototype.setName = function (name) {
  this.name = name;
};

// module pattern + private properties / methods
function Person(_name) {
  function _getName() {
    return _name;
  }
  return {
    getName: function () {
      // redundant, example only
      return _getName();
    },
    setName: function (name) {
      _name = name;
    }
  };
}

// private shared methods via this
var Person = (function () {
  function Person() {}
  function _getName() {
    return this.name;
  }
  function _setName(name) {
    this.name = name;
  }
  Person.prototype.getName = function () {
    return _getName.call(this);
  };
  Person.prototype.setName = function (name) {
    _setName.call(this, name);
  };
  return Person;
}());

// private shared methods Python style
var Person = (function () {
  function Person() {}
  function _getName(self) {
    return self.name;
  }
  function _setName(self, name) {
    self.name = name;
  }
  Person.prototype.getName = function () {
    return _getName(this);
  };
  Person.prototype.setName = function (name) {
    _setName(this, name);
  };
  return Person;
}());

// public static
function Person() {
  Person.TOTAL++;
}
Person.TOTAL = 0;

// private static / constant
var Person = (function () {
  var TOTAL = 0;
  return function Person() {
    TOTAL++;
  };
}());

// public constant
function Person() {}
Object.defineProperty(Person, "RACE", {
  writable: false,     // default
  configurable: false, // default
  enumerable: true,
  value: "HUMAN"
});

// public inherited constant
function Person() {}
Object.defineProperty(Person.prototype, "RACE", {
  writable: false,     // default
  configurable: false, // default
  enumerable: true,
  value: "HUMAN"
});

// protected method
// NOTE: throw is a statement, so it cannot appear
// inside a ternary expression
function Person() {}
Person.prototype.getName = function () {
  if (this instanceof Person) {
    return this.name;
  }
  throw "protected method violation";
};
Person.prototype.setName = function (name) {
  if (this instanceof Person) {
    this.name = name;
  } else {
    throw "protected method violation";
  }
};

// generic protected methods
Function.prototype.protectedVia = function (Class) {
  var method = this;
  Class || (Class = Object);
  return function () {
    if (this instanceof Class) {
      return method.apply(this, arguments);
    }
    throw "protected method violation on " + (
      Class.name || Class
    );
  };
};
function Person() {}
Person.prototype.getName = function () {
  return this.name;
}.protectedVia(Person);
Person.prototype.setName = function (name) {
  this.name = name;
}.protectedVia(Person);

// private shared variables
function Person() {}
Person.prototype.getName = function () {
  return this.name;
};
(function (PersonPrototype) {
  var changes = 0;
  PersonPrototype.setName = function (name) {
    changes++;
    this.name = name;
  };
  PersonPrototype.howManyChangedName = function () {
    return changes;
  };
}(Person.prototype));

// getters / setters on public property
var person = Object.defineProperty({}, "name", {
  get: function () {
    return this._name;
  },
  set: function (_name) {
    this._name = _name;
  }
});

// getters / setters on public property via prototype
function Person() {}
Object.defineProperty(Person.prototype, "name", {
  get: function () {
    return this._name;
  },
  set: function (_name) {
    this._name = _name;
  }
});

// getters / setters on private property
var person = Object.defineProperty({}, "name", (function () {
  var _name;
  return {
    get: function () {
      return _name;
    },
    set: function (name) {
      _name = name;
    }
  };
}()));

// singleton
var me = {name: "WebReflection"};

// singleton via anonymous __proto__
// plus private properties
var me = new function () {
  var _name;
  this.getName = function () {
    return _name;
  };
  this.setName = function (name) {
    _name = name;
  };
};

// generic singleton cross browser
Function.prototype.singleton = (function () {
  function anonymous() {}
  function create(Class, args) {
    anonymous.prototype = Class.prototype;
    Class.apply(
      Class.__singleton__ = new anonymous,
      args
    );
    return Class.__singleton__;
  }
  return function singleton() {
    return this.__singleton__ || create(this, arguments);
  };
}());

// generic singleton ES5
Function.prototype.singleton = (function () {
  function create(Class, args) {
    Class.apply(
      Class.__singleton__ = Object.create(
        Class.prototype
      ), args
    );
    return Class.__singleton__;
  }
  return function singleton() {
    return this.__singleton__ || create(this, arguments);
  };
}());

// per function private singleton
var Person = (function () {
  var instance;
  return function Person() {
    return instance || (instance = this);
    // or, to be extra sure it is a Person:
    // return instance || (
    //   (instance = true) && // avoid infinite recursion
    //   (instance = new Person)
    // );
  };
}());

// generic factory
Function.prototype.factory = (function () {
  function anonymous() {}
  return function factory() {
    anonymous.prototype = this.prototype;
    var instance = new anonymous;
    this.apply(instance, arguments);
    return instance;
  };
}());

// generic factory ES5 + Ruby style
Function.prototype.new = function factory() {
  var instance = Object.create(this.prototype);
  this.apply(instance, arguments);
  return instance;
};
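As a quick usage sketch of the Ruby style factory above (the definition is repeated here so the snippet is self contained; Person is a throwaway example class):

```javascript
// generic factory ES5 + Ruby style, as above
Function.prototype.new = function factory() {
  var instance = Object.create(this.prototype);
  this.apply(instance, arguments);
  return instance;
};

// any constructor can now be invoked Ruby style
function Person(name) {
  this.name = name;
}

var me = Person.new("WebReflection");
// me instanceof Person, with me.name === "WebReflection"
```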

Tuesday, November 08, 2011

Function.prototype.notifier

There are way too many ways to stub functions or methods, but at the end of the day all we want to know is always the same:
  • has that function been invoked ?
  • has that function received the expected context ?
  • which argument has been passed to that function ?
  • what was the output of the function ?

Update: thanks to the @bga_ hint about the output property in the after notification, it made perfect sense.

The Concept

For fun and no profit I have created a prototype whose aim is to bring a DOM like interface to any sort of function or method in order to monitor its lifecycle:
  • the "before" event, able to preventDefault() and avoid the original function call at all
  • the "after" event, in order to understand if the function did those expected changes to the environment or to a generic input object, or simply to analyze the output of the previous call
  • the "error" event, in case we want to be notified if something went wrong during function execution
  • the "handlererror" event, just in case we are the cause of an error while we are monitoring the original function
The reason I have chosen an addEventListener like interface, called addListener in this case, is simple: JavaScript works pretty well for event driven applications, so what could be better than an event driven approach?

Basic Example


var nFromCharcode = String.fromCharCode.notifier({
  before: function (e) {
    if (e.arguments.length > 2048) {
      e.preventDefault(); // won't even try to execute it
      throw "too many arguments";
    }
    // in case you want to remove this listener ...
    e.notifier.removeListener("before", e.handler);
  },
  after: function (e) {
    if (e.output !== "PQR") {
      throw "expected PQR got " + e.output + " instead";
    }
  },
  handlererror: function (e) {
    testFramework.failBecause("" + e.error);
  }
});

// run the test ...
nFromCharcode(80, 81, 82); // "PQR"
nFromCharcode.apply(null, arrayOf2049Codes); // testFramework will fail

The notifier itself is a function, precisely a wrapper of the original function with an API enriched to monitor almost every aspect of a method or a function.
The event object passed through each listener has these properties:
  • notifier: the object created to monitor the function and notify all listeners
  • handler: the current handler, making notifier.removeListener easier to call
  • callback: the original function wrapped by the notifier
  • type: the event type, such as before, error, after, handlererror
  • arguments: the passed arguments, already transformed into an array
  • context: the "this" reference used as callback context
  • error: the optional error object for the error and handlererror events
  • preventDefault: the method able to avoid function execution if called in the before listener
  • output: assigned only during the "after" notification and only if no error occurred; handy to compare expected results

I guess there is really nothing else we could possibly know about a notifier, and its callback, lifecycle, what do you think?

The Code



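The snippet embedded in the original post is not reproduced here; below is a rough, simplified sketch of the concept described above (my own code, with the handlererror event omitted for brevity), just to make the lifecycle concrete:

```javascript
// minimal notifier sketch: wrap a function and fire DOM-like
// "before", "after", and "error" events around each invocation
Function.prototype.notifier = function (listeners) {
  var callback = this,
      handlers = {before: [], after: [], error: []},
      type;
  function notifier() {
    var event = {
          notifier: notifier,
          callback: callback,
          context: this,
          arguments: Array.prototype.slice.call(arguments),
          prevented: false,
          preventDefault: function () {
            this.prevented = true;
          }
        },
        i, result;
    event.type = "before";
    for (i = 0; i < handlers.before.length; i++) {
      event.handler = handlers.before[i];
      event.handler(event);
    }
    if (event.prevented) {
      return; // the original callback is never invoked
    }
    try {
      result = callback.apply(this, event.arguments);
    } catch (error) {
      event.type = "error";
      event.error = error;
      for (i = 0; i < handlers.error.length; i++) {
        handlers.error[i](event);
      }
      return;
    }
    event.type = "after";
    event.output = result;
    for (i = 0; i < handlers.after.length; i++) {
      handlers.after[i](event);
    }
    return result;
  }
  notifier.addListener = function (type, handler) {
    handlers[type].push(handler);
  };
  notifier.removeListener = function (type, handler) {
    var i = handlers[type].indexOf(handler);
    if (-1 < i) {
      handlers[type].splice(i, 1);
    }
  };
  for (type in listeners) {
    notifier.addListener(type, listeners[type]);
  }
  return notifier;
};
```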

As Summary


I also have full test coverage for this notifier and I hope someone will use it and come back to provide some feedback, cheers!

Monday, October 31, 2011

On User Agent Sniffing

Oh well, whoever was following me on Twitter today is already bored with this topic (I guess), but other developers would probably like to read this too, so ...

What Is UA Sniffing

User Agent sniffing means that a generic piece of software relies on a generic string representation of the underlying system. The User Agent is basically considered a unique identifier of "the current software or hardware that is running the app".
In the native applications world the UA could simply be the platform name: if it's "Darwin" we are on a Mac platform, while if it's Win32 or any other "/^Win.*$/" environment the app reacts, compiles, and executes as if it were on a Windows machine ... and so on with Linux and its distributions.
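A native-world sketch of the same idea, branching on the platform identifier rather than probing capabilities (Node.js is used here purely for illustration):

```javascript
// process.platform is the native equivalent of a User Agent string:
// e.g. "darwin", "win32", "linux"
var platform = process.platform;

// pick platform-wide conventions from the identifier alone
var settings = /^win/.test(platform) ?
  {pathSeparator: "\\", lineEnding: "\r\n"} :
  {pathSeparator: "/",  lineEnding: "\n"};
```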

The "Native" Software Behavior

If you have a hybrid solution, for example those setups, not allowed anymore, that were called Hackintosh not long ago, your machine most likely has Windows Starter capable hardware but runs compiled Objective-C. How reliable do you think this machine is? Every piece of software will consider it Mac capable hardware.
Should these applications optimize for non Mac hardware? I don't think so ... I mean, that machine was not classified as Mac capable in the first place; it was the user/hacker that decided to do something "nasty", so if something does not work ... who really cares?
Do you really want to provide support for that "random machine in the system"?
I still don't think so ... also, if you know the performance the officially supported hardware reaches in that environment, do you even want to waste time optimizing things for a netbook user?
I think the reality is that you just create software for the target, or targets, you want to support and nothing else, don't you? ... though of course new unexpected comers are, hopefully, welcome ...

The Old Web Behavior

UA sniffing has historically been a bad practice on the world wide web. At the very beginning there was only one major supported browser, Internet Explorer, with something like 80% or more of market share. Developers, browser vendors, and users with a different browser were most likely redirected to a page saying something like: "Your browser is not supported. Please come back with IE!"
Even worse, this was happening at the server side level ... "why that"? Because websites were created, and tested, entirely in Internet Explorer as the unique target for any sort of online business.
Was that a good choice? Today we can say it wasn't, but back then it made sense at the business level.
How many apps do we know that work only on Windows or only on Mac? Many of them, and we are talking about only two platforms.
At least at that point we had server side degradation into a non service, completely useless for non targeted browsers, but ... hey, that was their business, and if they wanted to use ActiveXObject because many things were not possible in other browsers, how can we blame these companies? "Everywhere or nothing"? A nice utopia that won't take you far in the real world ... nothing, I repeat, nothing works 100% as expected everywhere.
The dream is to reach that point, but stories like Java, .NET VS Mono, Python itself, and of course JavaScript, should ring a little bell in our developer minds ... we can still get close though, at least on the Web side!

The Modern Web Behavior

Recently things have changed quite a lot on the web side and only a few companies still redirect via server side User Agent sniffing. We now have something called runtime feature detection, which is supposed to test browser capabilities at runtime and understand, still at runtime, whether the browser should be redirected into a hopefully meaningful fallback or degraded service.

Features Detections Good Because

Well, especially because browser fragmentation is massive, FD can tell us what we need from the current browser without penalizing anybody in advance.
The potential redirection, or message, happens only if necessary, informing the user that his/her browser lacks the features required to grant a decent experience in the current online application/service.
FDs are also widely suggested for future compatibility with new browsers we may not be able to test, or recognize, with any sort of list present in our server side logic, which cannot directly understand whether the current browser may run the application/service or not.
Of course, being automatically compatible with newer browsers is both business value, as in "there before we know", and simplified maintenance of the application logic itself: if it was working accordingly with certain features, it's going to keep working with newer or improved versions of the features we need.
In summary, runtime feature detection can be extremely valuable for our business ... but

Features Detections Bad Because

Not sure I have to tell you that the first browser with JavaScript support disabled will fail all detections even if theoretically capable ... but let's ignore these cases for now, right?
Well, it's kinda right, 'cause we may have already detected browsers with JS disabled on the server side thanks to user headers or a specific agent ... should I mention the Lynx browser? Try to detect that one via JavaScript ...
Back to "real world cases": all techniques used today for runtime feature detection are kinda weak ... or better, extremely weak!
I give you an example:

// the "shimmable"
if (!("forEach" in []) || !Array.prototype.forEach) {
// you wish this gonna fix everything, uh? ...
Array.prototype.forEach = function () { ... };
}

// the unshimmable
if (!document.createElement("canvas").getContext("2d")) {
// no canvas support ... you wish to know here ...
}
Not to disappoint you, but you are potentially wrong in both cases ... why?
Even if Array.prototype.forEach is exposed and this is the only Array extra you need, things may go wrong: as an example, the first shim will never be executed in a case where "forEach" in [] is true, even if that shim would have solved our problem.
The bug I filed a few days ago demonstrates that we cannot really trust the mere fact that a method is somewhere: we should write a whole test suite for a single method in order to be sure everything works as expected, OR we write unit, acceptance, integration, and functional tests to be sure that a bloody browser works as expected in our application.
The same is valid for the classic canvas capability ... once we have that, do we really test that every method works as expected? And if we need only a single method out of canvas, how can we verify that method is there and works as expected without involving, for that single test, parts of the API that may not work, and that we don't even care about since we need only the very first one?
I am talking about drawImage, as an example, in old Symbian browsers, where canvas is exposed but drawImage does not visually draw anything on the element ... nice, isn't it?

You Cannot Detect Everything Runtime

... or better, if you do, most likely every user has to wait a few minutes before the whole test suite goes green, especially on mobile browsers where each of these tests takes ages, burning battery life, CPU clocks, RAM, and everything else before the page can even be displayed, since we would like to redirect the user before he can see the experience is already broken, wouldn't we?

IT Is Not Black Or White

... you think so? I think IT is more about "what's the most convenient solution for this problem", assuming there is, generally speaking, no single best solution to a specific problem, since every problem can be solved differently, and better, according to the surrounding environment.
So how do we reason about all those edge cases that cannot possibly be solved at runtime in a meaningful, reliable way?

I want to provide the same experience to as many users as possible, but thanks to my tests I have already found that users X, Y, and Z cannot possibly be compatible with the application/service I am trying to offer.
If I detect at runtime everything I need for my app, assuming this is even possible, every browser I already know has no problems there will be penalized because of outdated, low market share, problematic alternatives.
If I sniff the User Agent against a list of browsers I already know I cannot possibly support, due to lack of unshimmable features, how much faster will every other browser I am interested in be on startup?


Best Solution Now

If you ask me, today and specially on mobile side, we have 3 categories of browsers:
  1. those almost there
  2. those not there yet
  3. those will never be there

In a business logic you don't even want to waste time on the third category ... "money for nothing", as Mark Knopfler would say.
You also do not want to penalize the most interesting browser categories with a massive amount, size and computation speaking, of feature detections ... I mean, if we know those browsers are crap and a minority, server side User Agent sniffing is the most suitable solution, providing any sort of possible fallback or info, even if there is no budget for that fallback.
But what about the second category?
Well, it depends ... if the second category has a decent market share you may try to support it and let it pass all your tests, but at what price?
If the whole application has to be different for that single browser, and it has less than 10% of the global market share, reflected in 1% of your users, do you really want to spend all possible effort to make it work?
I would say it makes sense only if this browser has few, shimmable, problems ... otherwise the best place to handle this browser would be directly the server side, don't you think?
About the first category ... well, it's still about guessing, hoping, praying that things go as expected, but at least for these browsers we can run all our tests against them and be sure that things are at least similar.
I am not talking about pixel perfection, which is bad as well in most Web related cases; I am talking about providing a decent experience in your Web application/software/page that strongly relies on JavaScript and without it cannot possibly work.
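As a coarse sketch of that server side classification (the patterns below are hypothetical and purely illustrative, not a recommended list): browsers known to be hopeless are sent straight to a fallback before any client side detection ever runs.

```javascript
// hypothetical "will never be there" patterns, for illustration only
var NEVER_THERE = [
  /MSIE [1-6]\./,   // ancient Internet Explorer
  /BlackBerry[^9]/, // pre WebKit BlackBerry, as an example
  /NetFront/
];

// true when the UA matches a known-hopeless pattern,
// so the server can serve the degraded page right away
function isNeverThere(userAgent) {
  for (var i = 0; i < NEVER_THERE.length; i++) {
    if (NEVER_THERE[i].test(userAgent)) {
      return true;
    }
  }
  return false;
}
```

Every browser not on the list pays zero detection cost up front, which is exactly the startup-time argument made above.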

As Summary

A few things must be reconsidered in the current Web era. Kangax already explained that things today are different regarding native prototype pollution, especially via Object.defineProperty and the non enumerable flag, even though for years we were all convinced that extending those prototypes was absolutely something to avoid.
Well, while I agree with Juriy on the latter topic, I am still a problem solver that does not exclude any possibility, including User Agent sniffing, when it comes to solving a real world problem, rather than having fantasies about ideals that unfortunately do not reflect the reality of our daily web development role.

Just think about it ;)