all bits considered: data to information to knowledge


Where the wild data are

There were times when all data were wild... and if they were stored at all, they were committed to the memory of an individual, which tended to fade away. To facilitate this transient storage the data was wrapped in protocols of rhymes and vivid symbolic images; pictures were drawn, stories were told. Then, about 5,000 years ago, writing systems began to develop – ideograms, symbols and, finally, letters. The data was tamed. The letters made up words, the words made up sentences, and the sentences, hopefully, made sense. The data was written on clay tablets and animal skins, recorded on papyrus, vellum, paper, magnetic and laser disks… We got quite skillful at butchering the data into neatly organized chunks, and at devising ever more sophisticated structures to hold it – scrolls, books, databases.

And then the Internet happened. They say that we are creating more data in a year than in the previous thousand years, that 90% of the world’s data have been created in the past two years (though, according to one interpretation of the Law of Information Conservation, we only engage in recycling information redistributed from existing sources). We are swamped with information, and the old, tried, trusted and true approach of organizing information into palatable chunks is no longer working. Facing the information deluge, we are forced to go back to basics – raw data – and find ways to make use of it without forcing it into a Procrustean bed of some structure that might have seemed like a bright idea once. Hence the resurrection of the old idea of hierarchical databases – the ones before the advent of SQL – under the guise of the NoSQL movement, and the much-hyped Big Data... In a sense, Big Data is nothing new; it has been around us as long as humanity itself, but, just as with the proverbial iceberg, most of it was hidden from our conscious use – which by no means implies that we haven’t used it! No, it was always there for us, seeping in from traditions, proverbs, legends – something that we use without consciously thinking about it: the gut feeling, the social norms. Modern Big Data merely extends this concept to computers.

And I believe that the data can take care of itself, if only humans stopped telling it what to do – but we do have to arrange the meeting ☺

Instead of thinking about how to accommodate each new data source or data format (e.g., video, mp3 files, text of various degrees of structural and semantic complexity), we humans can let the data figure out how to interpret itself.

The new data format could be analyzed, its structure inferred from background information/metadata, its usage from countless examples of similar (or not) data… This will require enormous computing power, but we are getting close to it with the likes of crowdsourcing, probability scores and machine learning, Hadoop infrastructure, and a variety of NoSQL and RDBMS systems working together to produce insights from data in the wild – data over which we have no control: unreliable, inherently “dirty” data (and the degree of “dirtiness” is itself a valuable piece of information!)
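To make the idea concrete, here is a toy sketch (all names are hypothetical, not part of any real library) of letting raw data describe itself: given untyped delimited text, it guesses the delimiter, infers a type for each column by majority vote, and reports a per-column “dirtiness” score – the fraction of values that do not fit the inferred type.

```python
def sniff_delimiter(lines, candidates=(",", ";", "\t", "|")):
    # Pick the candidate that splits every line into the same number of fields.
    for d in candidates:
        counts = {line.count(d) for line in lines}
        if len(counts) == 1 and counts.pop() > 0:
            return d
    return ","

def classify(value):
    # Crude type guess for a single field.
    try:
        int(value)
        return "int"
    except ValueError:
        pass
    try:
        float(value)
        return "float"
    except ValueError:
        return "str"

def infer_schema(raw_text):
    # Infer delimiter, per-column type, and per-column dirtiness score.
    lines = [l for l in raw_text.strip().splitlines() if l]
    delim = sniff_delimiter(lines)
    columns = list(zip(*(l.split(delim) for l in lines)))
    schema = []
    for col in columns:
        guesses = [classify(v.strip()) for v in col]
        majority = max(set(guesses), key=guesses.count)
        dirtiness = sum(g != majority for g in guesses) / len(guesses)
        schema.append((majority, round(dirtiness, 2)))
    return delim, schema

raw = """1;3.14;apple
2;2.72;banana
oops;1.41;cherry"""
print(infer_schema(raw))
# → (';', [('int', 0.33), ('float', 0.0), ('str', 0.0)])
```

Note that the “oops” in the first column does not break anything – it simply shows up as a 0.33 dirtiness score, turning the dirt itself into usable information.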

It is nice to have a smart ontology all figured out for the information we are using, but it would be a hundred times nicer not to have to pay attention to any given ontology and still be able to make meaningful use of the data!