
Saturday, February 23, 2013

An In House, Yep Nope Like, JS Loader

Two days ago I submitted a JavaScript loader, since this loader, once minified, fits into 136 bytes. The reason the entry is 140 on the site is that I left, on purpose, a /**/ placeholder to easily spot where you should put the JS files to load.

And That's Not It

Once we have an Array, we also have .concat(), and once we have concat, we have the ability to decide, inline, what should be in the menu:
  menu = [].concat(
    // starter
    'onrefusedbeef' in window ?
      'caprese' : 'salami',
    // main course
    (function (window) {
      return !!window.bigAppetite;
    }(this)) ?
      ['steak', 'potatoes'] : // the original list was lost, placeholder items
      [],
    // dessert
    this.extraCalories ?
      ['chocolate', 'sour cream'] :
      [] // nothing to add
  );

// check the menu, chef
alert(menu);
You can play with different kinds of menus, simply polluting the global scope beforehand with whatever is needed:
this.bigAppetite = true;
this.onrefusedbeef = null;
this.extraCalories = true;
The cute part about .concat() is its flexibility: pass an empty array when no extra value should be added to the list, pass a single element at a time, for example strings, or pass a list of elements, with the ability to create nested lists thanks to nested [].concat() calls.
In few words, Array#concat() suits perfectly a list of files to include.
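As a standalone sketch of that behavior (the file names here are made up), Array#concat() flattens its arguments exactly one level, so empty arrays vanish and nested lists unfold:

```javascript
// [].concat() builds a flat list out of mixed arguments:
// empty arrays contribute nothing, plain values add one entry,
// arrays add all their entries (flattened one level only)
var files = [].concat(
  [],                    // nothing to add
  'single.js',           // one file
  ['a.js', 'b.js'],      // several files at once
  [].concat('c.js', [])  // a nested [].concat() unfolds too
);
// files: ['single.js', 'a.js', 'b.js', 'c.js']
```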

Real JS Use Cases

Assuming we don't have partial shims in every single file, and assuming we want to bootstrap whatever it takes to have a normalized environment, and do nothing otherwise, here is an example:
// to load
var
  OK = [],
  toLoad = OK.concat(
    Object.create ? OK : 'es5-sham.js',
    OK.forEach ? OK : 'es5-shim.js',
    Function.bind ? OK : 'bindOnlyShim.js',
    window.JSON ? OK : 'json2.js',
    [ // library list
      'mylib.js' // elided in the original, placeholder here
    ]
  );

// check this out
alert(toLoad);
In most modern browsers, the above snippet will alert only the main files, while in many Android browsers the list will include only the Function#bind() shim, since everything else is there and Array extras are not needed. In older browsers we'll have Array extras too, loaded only after Object extras, so that Array extras can use the ES5 way to define themselves. In few words, the key here is to have scripts aware of their own dependencies, instead of including partial shims over and over: this little loader loads one script at a time, which is ideal when scripts are ordered by dependencies.
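The "one script at a time" part can be sketched without the DOM; here loadOne is a hypothetical stand-in for the script-tag creation, and everything else is just a shift-and-recurse loop:

```javascript
// each completed load triggers the next, so files always execute
// in the exact order they were listed (i.e. dependency order)
function loadSequentially(files, loadOne, done) {
  (function next() {
    var file = files.shift();
    if (file) loadOne(file, next); // load one, continue on completion
    else done();                   // list consumed: everything is in
  }());
}

// a fake loadOne that completes immediately, for illustration
var order = [];
loadSequentially(
  ['es5-shim.js', 'lib.js', 'app.js'],
  function (file, onload) { order.push(file); onload(); },
  function () { order.push('done'); }
);
// order: ['es5-shim.js', 'lib.js', 'app.js', 'done']
```

In the real loader the same idea is compressed into a tiny function: shift a file, append a script node, and let its onload call the function again.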

All Together

Here is how the script will look, as the last tag in your document body, in order to load exactly what you need, in exactly the order you need it.
<!DOCTYPE html>
<html>
<head>
  <!-- everything you need -->
</head>
<body>
  <!-- page content -->
  <script>
  (function (b, a, d) {
    // extra functions, tests, variables here
    // it's a closure !
    function c() {
      if (d = e.shift())
        a = b.body.appendChild(b.createElement("SCRIPT")),
        a.onload = c,
        a.src = d;
    }
    var K = [],
        e = K.concat(
          Object.create ? K : 'es5-sham.js',
          K.forEach ? K : 'es5-shim.js',
          Function.bind ? K : 'bindOnlyShim.js',
          window.JSON ? K : 'json2.js',
          [ // library list
            'mylib.js' // elided in the original, placeholder here
          ]
        );
    c(); // start loading
  }(document));
  </script>
</body>
</html>

Best Practices For Best Performance

  • do not repeat mini/partial shims/polyfills per each file, and do not try to make every file stand alone: this way you serve, and load, only what's needed
  • decide dependencies statically, if necessary, in a logical order. If a file is shimming everything, and you want to trust that file, put it before any other that might try to add a shim of its own, so that libraries not under your control will use the already shimmed functions you trust. As an example, if a library contains a tiny JSON parser, put the official JSON polyfill before that file, if needed. The same goes for any library that tries to shim Array extras: put es5-shim before it and, of course, only if needed
  • aggregate everything that cannot be left out, unless your library is 2 MB gzipped, or unless you have lazy loaded stuff ... for everything else, really, there's no need to serve 100 tiny files: put them together as part of your library
  • put the main app logic apart and forget DOMContentLoaded or $(window).ready(), since this tiny loader runs when the body, and everything else, is already in the DOM ... did I tell you to put the loader at the very end of the page? :-)

Wednesday, February 20, 2013

My Personal Github Flow

I've created many repositories in my programming history, starting from the good old Google Code, passing through Mercurial and SVN, and ending up using, on a daily basis, the awesome GitHub.
There is something I've spotted every time I've created a new repo: I needed a way to always do the same things ... but better each time!
The very latest case is the callerOf utility, something that small and already demanding the usual/common stuff, such as:
  • a meaningful and organized structure, instead of files in the wild
  • an easy way to test for browsers and often nodejs too
  • a simple build process per each target, able to combine them all at light speed
  • a linter for those projects that could be widely adopted and linters are so annoying ... where was I ... right ...
  • thanks to the same folder structure, an already prepared .gitignore, together with the .npmignore
  • a LICENSE.txt file, in my personal experiments and libraries always MIT Style
  • a Makefile able to help me combining all these tasks
  • last, but not least, an almost fully prepared, and basic, package.json file with main info to publish
  • optionally, the usage of .travis.yml for the awesome Travis CI service

About Travis

Today I've made a donation to help those guys maintain the project. The email, as soon as something goes wrong, is a great way to be notified about problems. I've worked in many enterprise environments where this is the default, most basic configuration: to be instantly notified and able to fix ASAP, or revert instantly. A free service doing this all the time, for all those Open Source projects, cannot be ignored, and should indeed be applauded by developers.
All major programming languages are supported, plus it's not that difficult to configure, and it's based, where node.js is concerned, on the simple npm test command: awesome!
I won't tell you how much I've donated 'cause it does not matter as long as you donate something to these good fellas, right?

I Might Not Need Travis, But ...

The basic way I've configured my gitstrap, which is the silly name I've chosen for the repo with the most basic structure, forces me to build and run tests before being able to push. OK, OK ... it's not that I cannot push without building and running tests, I mean, it could simply be something edited that is not necessarily code, but to know if my code/library is working, I necessarily need to make and, in that case, be sure that everything is green.
In few words, the moment I push some code I am pretty sure Travis CI will be still green but if something goes wrong with one of the node.js versions, the test, the server, whatever, really, I'll be still notified and able to react.
The classic scenario is a pull request proposed without testing: maybe it looks good, but only Travis will tell you if it really does, right? ... oh well, feel free to enforce whoever is able to edit, even online, to run tests ... if you manage :)

What I Think Is Essential

In my cases these are the most basic dependencies, able to make my workflow freaking fast and robust enough too.
  • wru testing framework, for node.js and web via make test, already configured in the latter case inside a handy index.html, eventually published via gh-pages through make pages
  • polpetta, in order to be able to automate the inclusion of the same test for both web and node.js in a ctrl+click and make web shortcut
  • JSHint to eventually enforce the usage of a linter, through make hint shortcut
  • UglifyJS
All the above projects can be simply included in the current folder via make dependencies, and are not distributed via npm since these are not really part of the project, except for the test, where in this case a tiny overhead of 130 KB for the wru testing library isn't really a problem for anyone, right? :)

This Is gitstrap

Really a sort of github boilerplate for JS related projects, something already organized and ready to go, something you can simply:
curl -s >~/gitstrap && bash ~/gitstrap && rm ~/gitstrap
Following instructions here, or if you prefer a manual installation:
git clone git:// project-name
cd project-name
rm -rf .git
make dependencies
git init
git add .
git commit -m "gitstrap in"
git remote add origin
git push -u origin master
After this, don't forget to update Makefile and package.json with the right name, especially if you are planning to push to npm, as well as the
I might decide to automate this procedure too pretty soon and any extra task or contribution will be more than welcome. Right now I think this is enough.

About Files And Folders

Right, this is where I explain what the heck that structure is ... let's start from the top:


This folder will contain all versions of the same project, if the project would like to be compatible with node.js, the web, or a generic JS engine, an AMD loader based on define logic, and so on.

Yes, A Different Automated File!

After all the discussion on "how should a file look, to be compatible with all the mess out there", I've realized that exporting JS is really a matter of env, so if everything else is more or less the same, why on earth pollute all possible projects with that exporting nonsense? You need AMD? You get AMD ... You need node.js? You get node.js ... and the same goes for a generic env: it is really that easy!
Please note that all builds generate a .max version of the file, those I've just linked, and a minified version too, with the exception of node.js, since I don't believe in packed server side code that much :-)


This is used to generate the test page in gh-pages, so that once the pages have been generated, the test folder will contain those tests, launched through the index.html file.


The most important thing to edit here is the name of the project and the main file, or the list of files to use. These can be different per build, if needed, or just the same for all of them. You decide!

LICENSE.txt and other files

These have been explained already :-) Modify the name in the license, or modify the license too, if necessary, and that's pretty much it. Edit where necessary to go with your own project.


Here is where your source files should go. If the build should have different targets, for example for node or amd, I think it is good to prefix or suffix them with these names. The main file used as example is what the project will export, just an empty object.


This might look weird, but it's actually what will be used, as files, before and after each build. These files could be empty too, it does not really matter, but it's handy to have them to easily generate AMD or node.js exports, or generic closures, to use before and after the generic code reused across targets.


Here is the big deal: the .test.js file will be in charge of running wru against whatever test is present in the folder, as long as there is a counterpart in src too. This should make it easier to test in isolation. Bear in mind node.js needs to require(), but the browser can load things in pieces, so both the built version and other files will be included and tested, if tests are in place.

As Summary

The aim of this repo is to make at least my life easier; you can see already that all my GitHub repos follow this structure, and every project is a partial clone of the other, with tiny improvements over the Makefile, and some automation maybe not necessary anymore, as was this good old builder, once in Python, then in node.js, and finally obsolete in the latest repos :)
Have fun with your extra ultra cool best library ever!

Sunday, February 10, 2013

Jokes A Part ...

... it is really that easy to start from scratch or abandon everything, but that's not, by any meaning, an evolution: it's rather a reboot.
Unfortunately, written software cannot reboot that easily, and we all know that, except for a few cases where it is really needed, and we call that refactoring!
Refactoring is needed when everything is not under reasonable control, or performance, anymore; refactoring puts everything on hold until it's completed ... you know that ;)

Focus On Reality

We really should never lose focus on what we are really trying to do, really trying to improve, and for whom, if needed, beside our own thoughts.
If the rest of the world is doing something in a certain way, we have really few chances to change that way quickly and easily, just because we decided that way is wrong, right?
We need to be able to propose the best change able to improve that de-facto reality, rather than thinking we can improve everything by simply imposing our own superior reality ... right? The moment we blindly impose our own meaning of "best way ever", without even analyzing what's good out there, we are doing everything wrong, IMHO!

Graceful Enhance ... Everything!

Really, I think this is, generally speaking, the best way to go on and, probably, the only way to go too, since everything else has historically failed already so ... why try again?
Understanding developers' needs inside their libraries too, and not only the patterns they used, is, as an example, a good starting point.
I am expecting this to be a sort of constitution of JS improvements for the most used programming language in the world, according to the biggest open source community, at least on GitHub ...

For A Better JS Future

  1. do not break what has been widely adopted already, unless it's really bad in terms of security
  2. try to stick with the already available and standardized syntax, allowing partial or full polyfills, because of graceful OS, environment, browser, engine, whatever, migration
  3. involve as many developers as possible (public surveys over internal surveys) rather than providing already-made internal decisions, based on internal polls nobody out there has ever heard about
Three points, since everything else is reasonable already and done in a good way ... still!

Why Is This Important

I think these points are more or less everything I have wished for while following the es-discuss mailing list, really ... from time to time, I have experienced these situations, absolutely unexpected:
  1. ES4 failed because it was breaking the web; we have transpilers now, so everyone should use them instead of JS because of new unsupported syntax (transpilers break the web!)
  2. if that library does that, and everyone likes that, and that library is not the old Prototype.js or another one nobody here has heard about, that library is wrong and that behavior should be different
  3. we don't want internal/private polls saying that what the rest of the world thinks is needed is wrong; we can have a much bigger audience through public surveys.
The latter was the most frustrating experience, personally speaking: trying to follow, and contribute to, that mailing list while parallel, private things happen behind the scenes. I could not stand by, since either you are public, the ML being public and telling the world you are, or you are not, and in that case you can have all the pointless, useless, irrelevant polls you can think of, without bothering the rest of the world with your results!

About that, I apologize, because I know that specific case had, again, the best intentions, but my point is that surveys should be public too, because if 3 developers cannot represent the entire community, neither can 300 behind the same company, or just a couple. There are many more of us out there, and I'd love to see the possibility to participate every time a decision about an API should be made!

Thank you for listening!

JavaScript Modules, Maybe

So, you might know already, but the ES guys are talking these days about modules and, as usual, things went out of control, since everyone wants their own best module version, ever!

Current Status

Synchronous, asynchronous, AMD, require() ... apparently these are all right for some use case, but wrong for some other.
It looks like JS cannot do synchronous modules ... wait, what?
Why can't browsers, since browsers have had synchronous require since the very beginning? The <script> tag, anybody?
This was me after reading a few times that JS cannot do sync. It turned out sync script loading is in the HTML specs, not the JS one ... but wasn't this about JS indeed, where JS in the browser never had this real problem, and is simply envious of the node.js module loader's simplicity?

How About Facing Reality

... where in every language, requiring dependencies has always been synchronous, because nobody ever cared about that latency, right? And has anyone even bothered using asynchronous file reading to include modules?
Not even node.js does that, the most async-centric env I personally know!
As a summary, since this is a browser-only problem, what I would expect is the ability, from browser engines, to pause the client in a non-blocking way until the file has been loaded. You know what I mean? F# does that, so it's not an impossible reality ...
How cool would that be and how "free from browsers limits" the specification of the next module loader in a programming language would be?
The answer seems to be that JavaScript is not an HTML/W3C matter but is limited because of HTML/W3C implementors, those browsers ...

It Doesn't Matter, Had Module!

Developers are concerned that TC39 might not have real use cases, and the funny part is that even an evangelist of everything you know about the web, such as @paul_irish, has had some concerns himself about TC39 choices in terms of real-world cases.
Meanwhile, AMD does not seem to be an answer, neither the preferred choice, but regardless we have these scenarios:
  • those who write for the web and go AMD
  • those who write for node.js and go require
  • those who will probably add all this crap regardless and, with all due respect for the dev who wrote it with the best intentions, even if testing only on web or only node.js (and again, the point is not the snippet, but the fact it should be everywhere in the JS world; you get what I mean, I am sure!)
In few words, Domenic's suspicions are already a reality: nobody is even caring about, or following, what's going on in this discussion!
Lazy developers will simply realize at some point that nothing works anymore, and will go on strike blaming the corrupted system, the conspiracy against the World Wide Web, the fact nobody told them it was going to disappear or change, even after 5 years of deprecation warnings in console, etc etc ... right?
Wrong: they'll just use what worked for them 'till that day without problems, and they will still think that you should not break what's in already, which is one of my favorite parts of ES5, the best update ever. If only every browser was there already, it would be a better JS world for everyone, wouldn't it?

Decoupling Import From Loading

How insane would that be? A semantic syntax that works in any platform, no matter how the platform loads stuff, the build process behind, or the fact you might wrap this way and do what you think is best, even improving upfront your static analysis ... right?
So here is a beta repository called remodule, something you might want to git clone git:// to eventually run node node_modules/wru/node/program.js test/remodule.js and see that all tests are passing already.

Wait, What

So, that project is about having import-like syntax available even in ES3:
// modules syntax
import {a} from "something"

// remodule
var a = imports('a').from('something');
You are following, right?
// modules export
module.exports.a = "whatever";

// remodule
modules("something", {
  a: "whatever"
});
So, kinda yes, the missing part of all this mess is that a module should be able to register itself if we would like to static analyze it and make the logic work everywhere in both sync and async environments, right?
// modules export
module.exports = {
  what: "ever"
};

// remodule
modules("exports", {
  what: "ever"
});

// remodule backward compatible
modules("exports", module.exports = {
  what: "ever"
});
The latter example is about loading that file with the current require, or without it ... unfortunately the modules function should be there already, but that's easy to fix, right?

What Else

modules("test", {
  a: "this is a",
  b: "this is b",
  c: "this is c"
});
With the above code, we might be able to export the test module, regardless of its position in the filesystem or the package manager, and do funny things such as:
var test = imports('*').from('test');
// {"a":"this is a","b":"this is b","c":"this is c"}
Cool? We just imported whatever the module exported ... there's really nothing to worry about and, as @benvie might point out, this is safe too: it is the exported object.
var a = imports('a').from('test');
a; // "this is a"
Well, it's straightforward to get that we can import just one thing from a module, right? Behind the scenes, the module is imported once, and never again, but we can grab a property as needed instead of the whole thing, right? ... and more!
var arr = imports('a', 'c').from('test');
// same as
var arr = imports(['a', 'c']).from('test');

// same as, once destructuring assignment is in ...
var [a, c] = imports('a', 'c').from('test');
We can specify multiple properties out of a single module ... cool, uh? And more!
var aliased = imports({
  a: 'A',
  c: 'b'
}).from('test');

// in a real world scenario ...
var $ = imports({
  jQuery: '$'
}); // the .from(...) target was elided in the original
Yep, aliases are there too, so that you can actually test everything for real in this file.
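A minimal sketch of the registry idea behind all this (my own simplification, not the actual remodule source): modules(name, exported) stores the exported object, and imports(...).from(name) reads it back whole, per property, or through an alias map:

```javascript
var registry = {};

// a module registers itself, by name
function modules(name, exported) {
  registry[name] = exported;
}

// imports('*'), imports('a'), imports('a', 'c'),
// imports(['a', 'c']) and imports({a: 'A'}) are all supported
function imports() {
  var args = [].slice.call(arguments);
  // imports(['a', 'c']) is equivalent to imports('a', 'c')
  if (args.length === 1 && args[0] instanceof Array) args = args[0];
  return {
    from: function (name) {
      var exported = registry[name], out, key;
      if (args[0] === '*') return exported;      // the whole module
      if (typeof args[0] === 'object') {         // alias map
        out = {};
        for (key in args[0]) out[args[0][key]] = exported[key];
        return out;
      }
      if (args.length === 1) return exported[args[0]]; // one property
      return args.map(function (k) { return exported[k]; });
    }
  };
}
```

A real implementation would also memoize per-module work and guard against unknown names, but the shape of the API is the interesting part here.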

So ...

What I've done there is not even in charge of loading anything, synchronously or asynchronously ... I mean, your build process should be in charge of making things work and be instantly available once needed, right? I mean, the Web/DOM part, I get it, modal spinners all over instead of a frozen tab, so annoying for the user, but why has nobody simply come out with a library in charge of this? Why will ES6 modules break browsers, using all those reserved words unusable in older browsers, break node.js logic, being incompatible with the exported module, and probably never be adopted, in fact, by anybody?
I am still dreaming about web improvements where stuff that gets out is what's really needed and most likely already used out there. Improved? Better? Sure! Pointless? No, thank you!

Thursday, February 07, 2013

JavaScript EventTarget

This is about the W3C EventTarget interface, something standard on the DOM side, but still confusing on the JavaScript one, where EventEmitter in node.js, or many other kinds of constructors, simply simulate what has been there for years, standardized across all browser engines.

Now In JavaScript Too

Correct, I have implemented, written, and re-written this shit so many times that I have decided to "unofficialize" the already well described interface.

So, here is the repository, something you can install in node via npm install event-target, checking the examples on how to use it. Cool? I hope so :-)

The only thing that does not make much sense in a non-DOM environment is the capture extra argument, something ignored by 90% of the web, something that can be really useful with the DOM, but I could not think of any concrete utility for it in a pure JS world. The cool thing is: anyone can extend, wrap, or improve this EventTarget library: enjoy!
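For the flavor of it, here is a minimal sketch of what such an interface boils down to in pure JS (my simplification, not the event-target module itself, which follows the W3C behavior more closely):

```javascript
// a stripped-down EventTarget-like constructor: the same listener is
// registered at most once per type, and dispatch invokes each listener
// with the event object, using the target as context
function EventTarget() {
  this._listeners = {};
}
EventTarget.prototype.addEventListener = function (type, listener) {
  var list = this._listeners[type] || (this._listeners[type] = []);
  if (list.indexOf(listener) < 0) list.push(listener);
};
EventTarget.prototype.removeEventListener = function (type, listener) {
  var list = this._listeners[type] || [],
      i = list.indexOf(listener);
  if (-1 < i) list.splice(i, 1);
};
EventTarget.prototype.dispatchEvent = function (event) {
  // copy the list so add/remove during dispatch is safe
  (this._listeners[event.type] || []).slice().forEach(function (listener) {
    listener.call(this, event);
  }, this);
};
```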

Sunday, February 03, 2013

Opera Mobile Is The Best Browser!

I really do not understand why the Web keeps ignoring this browser, which is able to provide the best browsing experience even out of old hardware!
It's not me saying that: there are all the tests you might want to double check, or try by yourself.
  • Opera Mobile VS Chrome Mobile, a browser available only on the latest hardware, and Safari Mobile for iOS 6, and we all know how good the hardware is here. Here is the result, with Opera Mobile scoring more than anything else
  • Hardware Acceleration, something possible on canvas you might want to test in this old prototype of a map, the same I have presented at Front Trends in 2010
  • multi touches, so that an interaction with more modern UI based on gestures could work without problems. Here the tesla experiment

In Android 2.X Too

This is the part I love the most about this browser ... I mean, if you assume you have the best hardware ever under the hood, it's easy to be cool, right?
I am looking at you, Chrome and Safari Mobile, and I am leaving Firefox Mobile out of this challenge since, unfortunately, it never competed against stock browsers in terms of performance. They are getting better, and have to, with Firefox OS, but on an Android 2.3 ... I'm not sure where they are :(
I have a Galaxy Ace, an Android 2.3 phone, that, with Opera Mobile, scores 406 plus 12 bonus points.
Basically, if all web apps out there supported Opera Mobile, people would not need to spend more to update their hardware, because there is a browser that is kicking every other browser's ass in terms of performance!

Symbian Too

Correct, good old NOKIA phones could be up to date without problems simply by using Opera Mobile: no need to spend that much to get a Windows Phone there. If the problem is the browser, you can have touch and multi touch, plus an extreme performance boost, simply by downloading and using Opera Mobile by default: as easy as that!

Definition Of Best Browser

A browser that is able to bring to the user every possible modern feature, without requiring hardware or Operating System updates. This would be, in my opinion, the best browser in the world, the missing piece we all have and somehow keep ignoring in this web scenario.

Why Is That

I'm starting to think the Opera Mobile team has really bad marketing support. I cannot believe my stock browser scores 200 against 406 in Opera Mobile, and yet there's no usage percentage in global stats for this browser, only for the Opera Mini version, a completely different beast.
What the hell is going on? Why aren't we all developing for this browser too? It's also the easiest to test, since it's available on many platforms ... so, as a summary: any HTML5 product that does not support Opera Mobile is kinda lame and, if it does not support Opera Mobile at its best, using all the features available, usually twice those available in the default stock browser, and in a really performant way, we should rethink our priorities. Once again, this browser is available on multiple platforms, so it should be the preferred target, rather than the least considered one. This usually means profit too so ... I am just saying, and thank you for listening!

Friday, February 01, 2013

The Difficult Road To Vine Via Web

One of the coolest and most rumored apps of these days looked so fun, and conceptually simple, that I could not resist challenging myself to reproduce it via HTML5 and all the possible experimental things I know are working these days, for both desktop and mobile.


wine

This is the name I have chosen for this experiment, and this is the very first warning: it does not work as it should, it is not the equivalent, and it cannot substitute the native App: too bad, but probably the reason the Vine team didn't even try to propose such a broken experience.

Well, Something Is Working!

Guess who's this idiot with a Koala hat in a living room:

That's correct, "it's a me", through the wine experimental project and a Chrome browser. But let's talk a little bit about the technologies I have used, OK? If you want to know how to install the environment and play with the project, once again, the repository explains how to :-)

How Does It Work

Well, you launch the server, you connect to that page, you press on the video for up to 6 seconds. If you release your finger, or your pointer, it stops recording. When the top bar is filled up, each frame is rendered as an image and sent to the server, together with the audio, where some magic happens and the result is a video in mp4, ogv, and webm formats, plus a nice fallback as animated gif, so that every single body can see those 6 seconds again. Nice? Now, time to talk about all the problems I had during its development ...

Everything You Know About getUserMedia() Is A Lie

This has been the biggest headache during the creation of this prototype.
All the articles I have read, including this excellent one on HTML5 Rocks, which marks the article as valid for Opera and Firefox too, do not work ... really, as simple as that: that stuff does not work!

The Right Way To Attach A Stream

In my proudly created spaghetti code, I ended up with a Frankenstein monster such as:
function attachStream(media, stream) {
  try {
    // Canary likes it like this
    media.src = window.URL.createObjectURL(stream);
  } catch(_) {
    // FF and Opera prefer this
    // I actually prefer this too
    media.src = stream;
  }
  try {
    // FF prefers this
    // I think it should not be needed if the video is autoplay
    // ... never mind
    media.play();
  } catch(_) {}
}
So, the most advanced browser is apparently behind schedule, because Firefox Nightly and Opera Next just refuse to work through the URL.createObjectURL() approach.
However, I find the Firefox behavior a nonsense, because the required play() goes completely against the logic behind the autoplay video attribute.

Bad News Is ...

my code ain't gonna work for a long time either; things are changing, so keep reading and smile ^_^

AudioContext and AudioStream Is Nowhere!

Correct, another myth of these HTML5 days is the audio stream. Nightly is able to expose it inside the stream, but I could not manage to retrieve it and handle buffers in and out. Nightly also has another really annoying problem, a feedback loop with the microphone recording the audio itself ... a noise you'll spot if you don't manually mute your computer speakers.
I had to video.muted = true in order to avoid such a disturbing noise, something present in Canary too, although if the volume is not at 100% it is much harder to reach that point. Canary seems to be more clever here! Opera does not seem to work with this audio stuff either.
The best one seems to be the Apple Safari browser: nothing works, but they have the best documentation!

Surely It's Me

I might have done something wrong, but if browser vendors keep implementing and changing standards behind the scenes, how can this be the developers' fault?
Here is an example: I asked how come getUserMedia() is so much NOT available?
The answer was this one!
Because Microsoft recently scuttled ongoing standardization efforts with a surprisingly valid counterproposal I'm afraid.


I love all the efforts from Microsoft, Firefox OS, Chrome OS, if any, and all others trying to propose standards ... but I don't understand any vendor that tries to kill a reasonable one until the best one ever proposed is rolled out ... I mean, can I haz that meow?

Until Things Are Usable

I feel like somebody is having fun screwing standards from time to time, and every abandoned proposal that looked good to developers has the same destiny:
  • stuck forever in some library, because in those days, that was the behavior
  • making the new proposal less powerful regardless, since it came out of a hybrid, not perfect one, that everyone probably already adopted, as it is for localStorage and WebSQL, things that just work as developers need, things still there, just randomly there
  • fragmented on the desktop, fragmented on mobile more than ever, where updates basically do not exist. On mobile, we keep changing hardware generations, not software!
    iOS here is a partially lovely exception, able to update longer, but my iPad 1 is stuck behind iOS 5.X, you know what I mean ... right?
I really feel Christian Heilmann when he says that what matters is reachability, and everything else is a futile discussion.
I probably have the same feelings, better summarized, as a web developer, in this personal thought:
It does not matter if it's a touchstart or pointerEventDown event, guys; what's important is that it f*#$!(in works when a person puts a finger on their device screen, a device this person bought thinking it is touch-able ... YES, THE BROWSER TOO!
Also, because people don't, and shouldn't, ever care about software: that's our problem, and it should never be people's limitation with the hardware and the software they like, they use, they need, they want ... but we keep smiling, right? ^_^

The Web Has Never Been This Broken

And this is the beautiful lie behind HTML5: it's a utopia that never worked in reality!
Articles and examples that work in one browser only: the very thing we complained about for years with IE, thinking "dude, if you don't know how to create a site, don't write it works only in IE, you lamer"!
Problem is, we are not going anywhere even on mobile, where, guess what, platform fragmentation is growing much faster than on desktop. On desktop we have 3 OS families: Windows, OSX, and generic Linux distros (I feel you Gentoo, Fedora, RedHat, Ubuntu, Debian, Kubuntu guys ... sorry to group you there).
On mobile, we have newcomers all over: iOS; Android 2.2/2.3/3.0/3.1/4.0/4.0.1/4.1/4.2, and here you have the coolest device ever; Firefox OS; plus that sneaky Opera Mobile, I mean Mobile, not Mini: the best, fastest, most updated browser ever for both Symbian and Android 2.X!

Wasn't This About Wine?

Right ... you are right, I'll stop here, whining about the fact that, indeed, Opera Mobile is the only mobile browser able to make this work, even on Android 2.3: regardless of the low performance, it just looks and feels OK there, so if you want to try this, try it with Opera Mobile and enjoy the project.
Chrome Mobile doesn't do getUserMedia(), and neither does Firefox OS, nor anything else I could try (come on, you are not trying with a Windows Phone, right? They killed the current standard proposing something else ... cooler, but more to wait for!).
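As a practical aside, just discovering whether the method exists already requires a vendor-prefix dance. A minimal sketch, assuming nothing about which prefix, if any, a browser ships; the findPrefixed name is mine:

```javascript
// hedged sketch: look up a possibly vendor-prefixed method
// on a navigator-like object; returns null when unsupported
function findPrefixed(obj, name) {
  var prefixes = ['', 'webkit', 'moz', 'ms', 'o'],
      i, key;
  for (i = 0; i < prefixes.length; i++) {
    // 'webkit' + 'getUserMedia' becomes 'webkitGetUserMedia'
    key = prefixes[i] ?
      prefixes[i] + name.charAt(0).toUpperCase() + name.slice(1) :
      name;
    if (typeof obj[key] === 'function') return obj[key];
  }
  return null;
}
```

In a browser it would be used as `var gum = findPrefixed(navigator, 'getUserMedia');` followed by `gum.call(navigator, {video: true}, onStream, onError);` ... when `gum` is not null, of course.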
So, the end of this story is that I created a project whose aim is to simulate a native app, and I miserably failed. Not because performance was bad: once again, it works via Opera Mobile on my Galaxy Ace, a really simple Android 2.3 smartphone, really functional and usable thanks to a decent battery life due to its low hardware specs. So, again, it's not a performance issue, and you can test it; it's more about mistakes, rush, and wrongly accepted proposals from those who are deciding standards ... for good, sure, but if WebSQL were universally available, cross browser and cross platform, how much more could we web developers have done?
Think about it: that could have been the best thing ever to build No-SQL concepts on top of, and never something like this warning about IndexedDB:
Because this technology's specification has not stabilized, check the compatibility table for the proper prefixes to use in various browsers. Also note that the syntax and behavior of an experimental technology is subject to change in future versions of browsers as the spec changes.
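That warning translates, in practice, into boilerplate like the following: a minimal sketch, where resolveIndexedDB is a name of mine and the prefix list reflects browsers of this era:

```javascript
// hedged sketch: pick the first available IndexedDB entry point
// on a window-like object, so the rest of the code can pretend
// the spec already stabilized; null when nothing is there
function resolveIndexedDB(win) {
  return win.indexedDB ||
         win.webkitIndexedDB ||
         win.mozIndexedDB ||
         win.msIndexedDB ||
         null;
}
```

In a browser: `var indexedDB = resolveIndexedDB(window);`, and every script after that can forget which vendor it is talking to.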
And this after at least 2 years ... now ask yourself honestly: if current getUserMedia() were already available cross browser, how many creative things could have been created already?

At Least These!

  • web based alarm systems: video can be captured into a canvas, a canvas can scan images, and it can detect suspicious movements by comparing diffs between the previous frame and the current one in a place that is supposed to be quiet
  • no need to call the specialist to install the expensive hardware, the cable, the camera, and everything else: if we can program quadcopters via node.js, JavaScript is good and fast enough to monitor the house, the garage, the entrance, and tell you, wherever you are in the world, what's going on; plus, if you need it, it can send you pictures while quadcopting around :D
  • create a Skype-like application without needing Skype at all ... OK, Skype offers an amazing service and we cannot even think of competing on the web, but still ...
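The alarm idea in the list above can be sketched in a few lines: a hypothetical motionDetected helper, with a threshold I made up, comparing two RGBA pixel buffers such as those returned by a canvas context's getImageData(...).data:

```javascript
// hedged sketch: flag motion when the average per-channel difference
// between two RGBA pixel buffers exceeds a threshold (0..255 scale)
function motionDetected(prev, curr, threshold) {
  var total = 0,
      i = 0,
      length = Math.min(prev.length, curr.length);
  for (; i < length; i++) {
    // skip the alpha channel, every 4th byte in RGBA data
    if ((i % 4) !== 3) total += Math.abs(prev[i] - curr[i]);
  }
  // average difference over the compared RGB bytes only
  return (total / (length * 3 / 4)) > threshold;
}
```

In a browser, prev and curr would come from `context.getImageData(0, 0, width, height).data`, grabbed every few hundred milliseconds from a video drawn onto the canvas; when motionDetected answers true, the "alarm" fires.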

End Of The Rant

I am pretty sure John Resig, who has been using Vine since the very beginning, would actually agree about this Web situation ... or maybe not, since the main goal of his rock-star API was to unify all this mess ... but should we keep relying on third-party APIs rather than on the awesome, ultra skilled, exceptional people in charge of the future of the Web?

Anyway, Wine Works With...

So, once you have installed everything, all you have to do is start polpetta in that folder and connect to that address through one of these browsers: Chrome Canary, Firefox Nightly, Opera Next, Opera Mobile, or any other browser you think should support this app, and it will not ... keep smiling!!!! ^_^

Thanks for your understanding; I have been developing for the mobile web since 2009, since Android 1.5, and the thing is: it never got truly better, it just kept changing and fragmenting!
That is probably the reason I love my job, and the constant challenge it offers on a daily basis, but I'd like to do more there ...