About three years ago we were considering developing AthensBook as a 'physical' object (a kiosk) that would sit at specific spots around the city (e.g. a hotel lobby, a public building, or a store floor) and would let passers-by, as well as regular users of the application, get hyper-local information even when they didn't have one of the devices we support with them (smartphone penetration in 2011 was considerably more limited than it is today). As part of the initial research I applied to Leap Motion for their, then brand-new, device, the Leap Motion Controller. The idea was simple: replace 'touch' with 'gestures' in the air. That is, to make the 'application' usable without requiring the user to touch a screen (something that is potentially difficult in some cases, e.g. when capacitive touchscreens meet gloves). A while later the application was accepted and we received the Leap Motion SDK and a prerelease unit of the Leap.
Unfortunately, the user experience fell short of expectations. Setting aside the problems on the software side (which is constantly being updated and improved), our experience matched the one recorded by quite a few people, some more famous than others, as well as various media outlets: the Leap is a very promising, perhaps even revolutionary, way of using your computer, but today it is not even remotely ready for the general public. Its problems are numerous and multi-dimensional. For one, you have to keep your hand at a relatively fixed distance from the sensor, and no matter how well-trained your hand, arm and shoulder are, this quickly becomes tiring, if not impossible. For another, there is no easy way to temporarily 'switch off' the device; the 'boundaries' between operation and pause are entirely arbitrary. There are some gestures that tend to become standard, but they are not yet universally accepted, and in the intervals of a few tenths of a second between them, wrist movements may be mistaken for gestures and commands, something that can in some cases be disastrous. The latter is gradually improving with better libraries and better clients built on top of them, but such interface issues are very fundamental and, as it seems, have not been addressed systemically by Leap so far.
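The transition problem can at least be mitigated in client code with a simple dwell-time filter: only accept a gesture once it has been observed continuously for a minimum number of frames. A minimal sketch of the idea in plain Python, with hypothetical gesture labels standing in for whatever a sensor SDK like the Leap's would report per frame (nothing here uses the actual Leap API):

```python
# Dwell-time ("debounce") filter for frame-by-frame gesture reports.
# Hypothetical sketch: `frames` stands in for the per-frame gesture
# label a sensor SDK would produce.

def debounce_gestures(frames, min_frames=5):
    """Yield a gesture only after it has persisted for `min_frames`
    consecutive frames, suppressing transient wrist-movement noise."""
    current, count, emitted = None, 0, []
    for g in frames:
        if g == current:
            count += 1
        else:
            current, count = g, 1
        if count == min_frames:  # fire once per sustained gesture
            emitted.append(g)
    return emitted

# A brief 'circle' blip between two swipes is ignored; only the
# sustained gestures survive.
stream = ["swipe"] * 8 + ["circle"] * 2 + ["swipe"] * 3 + ["point"] * 6
print(debounce_gestures(stream))  # ['swipe', 'point']
```

At a 290 Hz sampling rate, even a dwell time of dozens of frames costs only fractions of a second of latency; the real point is that this kind of hygiene needs to live in the platform, not be reinvented by every client.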
As a result, the Leap is a device where you constantly have to think about where your hand is and what position it occupies in three dimensions, which is quite tiring both physically and mentally. Finally, today's computing devices generally don't match the Leap. Perhaps a future device, possibly a descendant of the 'traditional' workstation (desktop, laptop etc.), with a sensor of greater resolution, not temporal (the Leap samples at 290Hz) but spatial, aimed at people sitting at, say, a 'smart' table (see Microsoft Surface, the original one now known as PixelSense, not the commercially-failed, Frankenstein-to-use tablet), might end up being very usable. And of course it seems to have a wider field of application in specialised domains, such as 'real-time' translation of sign language, something already being attempted by efforts like MotionSavvy, or the diagnosis of Parkinson's disease, etc. Just a few months after receiving the Leap I learned about the Myo, by Canada's Thalmic Labs: a device that also promises a lot, but takes a completely different road to get there.
What is the Myo
This latest leak details how the NSA accessed targets by inserting tiny circuit boards or USB cards into computers and using radio waves to transmit data without the need for the machine to be connected to a wider network.
It is a significant revelation in that it undermines what was seen to be one of the simplest but most effective methods of making a system secure: isolating it from the internet.
In other words: the NSA planted transmitters (or transceivers) and effectively turned air-gapped machines into machines transmitting to (and receiving from) their systems. Somewhat different from actually snooping on 'offline' machines, à la Tempest, which is what many 'news' organisations (the BBC, quoted above from this article, included) hinted at with their inaccurate titles.
Unless all your offices are room-sized Faraday cages, with physical security and extensive background checks of the machine operators, the NSA just invalidated your airgap policy. But then again, your security was probably flawed anyway, especially against an adversary that competent/determined/resourceful.
It has been almost six years since Apple announced and released the iPhone. I still remember Steve Jobs mentioning that his goal for the first year was to get 10M iPhones shipped; at the time, almost 1% of the global mobile telephony market. The goal, amounting to tens of millions of units sold within a few years, seemed totally unrealistic to anyone involved in the industry. The iPhone came out and, despite having significantly inferior technical specifications in some of the most crucial benchmarks (the quality of its camera, the lack of 3G, the extremely slow CPU, the lack of MMS support, a relatively obscure yet somewhat ubiquitous feature of 'feature' phones, especially in Europe, and others), managed to exceed the 1% goal Steve Jobs had set a year earlier. It soon became the reference state-of-the-art device that exemplified everything Apple had to offer in its nascent post-iPod era, where mass market appeal was apparently successfully coupled with premium design and manufacturing quality and extremely high margins.
At the same time Google had already bought Android and was preparing the launch of the platform: a new-generation, open-source smartphone platform based on Linux and a slew of open-source libraries and APIs (including Java running on Google's Dalvik VM), with a large ecosystem of vendors and supporters and Google at its centre. Google originally hoped to assemble OEMs, carriers and application developers all working for it and not against it. I had high hopes for Android in 2007, the same kind of high hopes you'd find developers, engineers and 'geeks' worldwide having about 'desktop Linux' some ten years earlier.
Contrary to desktop Linux, and similarly to Microsoft Windows, Android gradually prevailed in the early smartphone wars, now commanding around 80% of the market. But Android did not turn out to be what I (or Google, for entirely different reasons) hoped it would; instead it evolved into a sprawling, chaotic platform, in some ways brilliant and in others completely backward, combining the best of new technology and geeky, specification-driven computing with the worst of the compromises that have accompanied the technology industry since its early days. Fundamental concepts of mobile computing were butchered: basic navigation, consistency, power management (left to manual control), task handling, well-thought-out, stable APIs. Couple that with mediocre devices, wildly varying user experiences and a generally poor roster of applications, as device manufacturers created their own 'skins' (along with their own set of poorly designed and implemented software to accompany them) in a desperate effort to differentiate their offerings from the stock version of the operating system, and the result was an ever-increasing pool of mediocrity. The irony, of course, was that the stock operating system was practically nowhere to be found except on Google's own Nexus series of devices, a showcase of Google's vision that permeated the developer community and diffused into the wider smartphone-toting populace. Devices cost just a small fraction less than Apple's 'closed' iPhone, but demonstrated horrific deficiencies in performance and quality; the software stack was not optimised, power efficiency was poor even with batteries much larger than those found in iOS devices. The hardware also lagged in some cases, like the responsiveness of the touchscreen, often blamed purely on the sub-par performance of Android but apparently also caused by inferior hardware. Yet Android was improving.
Within a couple of years the number of Android devices sold surpassed that of iPhones. Amid the global financial crisis, the iPhone failed to become a commodity device (at least outside the large metropolises of the West, where salaries did not reach, let alone exceed, tens of thousands of dollars or euros) the way the iPod had succeeded in doing a few years earlier. It was still the leading device, from both the design and the technology perspective, but it was rapidly losing ground in terms of sales as people chose cheaper Android devices. Apple was unfazed: its margins were still high, it still had the mindshare. Above all, it still produced the definitive smartphone, the reference device that everybody else copied in one way or another.
So Google announced Android 4.1 (Jelly Bean), the next version of the operating system, with much-awaited performance improvements and some (marginal) new features, available to Galaxy Nexus users in mid-July and to the remaining 99% of the Android ecosystem sometime between a year from now and never. Along with the new version of Android, Google announced several other products and services, including the Nexus 7″ tablet, which I won't cover in this post. What I am going to focus on is the Nexus Q, the first product designed exclusively by Google: a 'social' media player that is, intriguingly, manufactured in the U.S. and costs almost $300. It is not representative of a new class of devices, nor is it functionally all that impressive: it's a device that connects to speakers and TVs, like countless before it. The difference in the case of the Nexus Q lies in its impressive industrial design and in the fact that 1) it can stream data from Google Play and 2) it allows local Android devices to 'share' the content they hold in their local memory. Google, borrowing from the Apple cue-card of shoving features into products without really caring about what people actually want or need, created a product aimed solely at increasing the impact and revenue of its online media store (Google Play) and added a small cherry on top that makes use of the networked nature, ubiquity and storage capacity of modern smartphones. Even the 'social' aspect of the device is not exactly groundbreaking; the functionality has been available, albeit in more 'technically challenging' forms, for years.
But what's important to note about the Q is that it doesn't really cover what is probably the number one request (or, if you prefer, the number one pet peeve) people have about other locked-down devices of this kind (yes, the Apple TV is a prime example): the ability to stream content stored on existing home network devices, such as other computers or NAS boxes on the local network, using standard or widely-used protocols such as DLNA/UPnP, DAAP/AirPlay or even SMB/NFS. With the Nexus Q, Google demonstrates the same denial it shows over the success, or rather lack thereof, of Google+. And in their attempt to convince people that the botched Nexus Q is worth the huge premium they're asking (other devices of this kind cost two to three times less), with their enthusiasm for silly features like overlay moustaches in Hangouts and their desire to marry profits with openness, they miss the mark. They fail to deliver the extremely seductive walled-garden, locked-down experience that allows Apple to thrive at the cost of openness and empowering products and services and, at the same time, they lose the geek cred that made Google so affable in the first place. Google largely knows this and has been performing a balancing act for years: the 'openness' of Android, that of 'the social graph', its contributions to the public via open-source software. The company is cognisant of its differences from the other players in the market; yet with today's announcement of the Nexus Q there is a paradox: today we have so much technology available to us, often of high quality and free, in the form of open-source software and powerful, affordable hardware. Google, like Apple and everyone in between, realises this and has been struggling to trap the cat inside its shiny new boxes. The Nexus Q could have been a great little device, and maybe it will become one as soon as people start 'tinkering with its internals'.
Yet it would have been much better if Google had shipped it like that, instead of merely 'tolerating' hackability by the community. By blatantly promoting its own cloud-based offerings instead of trying to marry them with locally stored content, Google is crippling its product.
In the end, like Microsoft a few days ago, Google is copying Steve Jobs's style and strategy: provide a platform and tools, but focus on content and lock-down as a monetisation technique to counter technology commoditisation. Gundotra's presentation of Google+ Events reeks of vintage early-2000s Jobsian technique, with the preset graphics and the slideshow features. With everyone and their dog showcasing a complete lack of tech culture in the face of a dominant, resurgent Apple, the world treads in dangerous territory: a technology monoculture in the corporate space that bases its profitability on polished jailhouses instead of innovation and increased freedoms. In this context, the cloud remains a key technology that can help liberate us from the burdens of backups and local storage, but can also imprison us in a world where a few corporations control all of our information.
Which brings us to Glass. The Sergey Brin / Project Glass segment was super fun and very impressive. If anything, it shows that Google is good-humoured and daring and works on interesting technology. Of course the demo and the features shown are not exactly representative of the product, its potential or its usefulness. The segment that followed the breathtaking landing on Moscone and Brin's introduction was stale and unconvincing: I doubt a mom would want to 'make contact' with her baby while looking like the Borg, or that anyone would think twice before relegating their Glass to a drawer and never using it, its novelty quickly overshadowed by its unnatural and intrusive appearance. It'd be a shame if Google really thought that people want to look like futuristic soldiers from a second-tier sci-fi movie. Sure, the concept is nice, and perhaps as technology progresses it will become easier to integrate such functionality into more human-friendly products, like contact lenses or ordinary optical glasses. As it stands, Project Glass is an unappealing curiosity that serves better as a tech demo than as a product right now.
Watching the Microsoft Surface keynote [really, Microsoft? You guys couldn't find a new, unique name?] reinforces my belief that the company has long lost the capacity to create and project a genuine, unique and interesting image, products and services.
When Steve Jobs returned to Apple, he quickly did away with most of the company's less successful product lines. He killed the printers, the clones, the Newton and many other products and services, and focused on creating a few exceptional products. In the early 2000s Apple had started gaining mindshare, both in the computing world with OS X and more broadly with the wildly successful iPod. At the time, given Microsoft's tendency to copy features, ideas and æsthetics from Apple, I thought that Apple, being a much smaller company, was serving as a sort of research facility for Microsoft, which then took the successful ideas and commoditised them. Even though Apple is now much larger than Microsoft, the trend continues: the Surface keynote was a cheap copy of Apple's events, down to the 'How we made it' interlude videos and the speaker rotation and style. The products, while still better designed and refined, oozing with much-needed quality in an ever-cheaper industry, sadly fail to go beyond marginal improvements to existing, commonplace technology, a few technical features most people don't know or care about (MIMO antennæ, optically bonded display, etc.) and lacklustre features and presentation. Sure, the Surface is new, and it introduces what Windows 8 is all about: a tablet form factor with a full-featured OS. But the Surface didn't really create excitement in the audience; people didn't seem enthused, and the presenters tried too hard to convince everyone of "how you'll fall in love with it" when the device didn't seem all that great. No matter how much this company tries, it tries too hard to 'copy' instead of 'creating'; to 'replicate' concepts, features and products, instead of cultivating its own culture, its own vision.
The result shows: by trying so hard to adopt the Apple mantra of style, quality and innovation without really believing in it, they come out with mediocre products, like the Zune player and now, seemingly, the Surface tablet. That doesn't mean, of course, that Microsoft never innovates or that it inherently could not. It is a company that has both the resources and the technology to revolutionise computing, a company that has introduced countless innovations over the years; but on average it settles for releasing second-rate products, caring more about the economics of doing business than about the passion, quality and enjoyment of creating technology and products.
The Surface, a tablet that wants to replace not only devices primarily aimed at consuming content, like the iPad, but eventually your laptop as well, probably tries to do too much at once while being mediocre at both. With its schizophrenic cover/keyboard accessory it's presented as a notebook replacement; with Metro, it competes with the iPad. The Surface Pro will also be able to run classic Windows programs, where the plain model won't (it's ARM-based). And how about battery life? I think Google and Apple got it right when they separated PC and post-PC devices. In terms of productivity nothing, let alone a tablet, beats a workstation, with its massive screen real estate, ergonomic size and positioning and greater power; Microsoft's desperate attempt to differentiate the Surface from the iPad and Android tablets by invoking 'creation' rather than 'consumption' and pointing to the keyboard and higher performance is ludicrous. I'd take a brand-new MacBook Air any time, even running Windows 8, over the just-released Surface. Sure, there will be cases where the full power of a notebook is useful in a tablet form factor, and cases where the presence of a touch-based keyboard beats its absence, but in general this is no one's vision of a productive portable computing experience.
I found this article by the EFF to be a very concise summary of many of the issues I've written (and often talked) about in the past, pertaining to the freedom to use the devices you have paid for and own as you see fit, and the increasingly worrying trend of manufacturer lockdowns that largely define what you can and cannot do with them. While Apple, with its popular iOS, may be the most well-known (and most successful) ambassador of the locked-down platform, the trend has been on the radar since well before Apple managed to escape the threat of extinction in the late 1990s; Microsoft, with Windows RT and the Secure Boot flag in UEFI, is merely implementing the technologies it initially developed, studied and proposed more than ten years ago with Palladium/TCPA.
The cat is still out of the box, but technology ages quickly and the threat is quite real: the combination of a cloud abused by the Valley oligopoly, the loss of ubiquitous local computing and storage, and locked-down devices would be a nightmare scenario, stripping the computer of the fundamental quality that differentiates it from the appliances of yore: its malleability, the power derived from its programmability, its ability to solve countless problems and achieve infinitely many different tasks rather than perform the single function manufacturers would most likely prefer.
Check out this table. A bunch of modern, high-quality, high-performing codecs: AAC+, AAC LC, enhanced AAC+, MP3. All decodable by Android, on all devices. Sadly, Android devices are only guaranteed to encode AMR-NB, at the sad sampling rate of 8 kHz and the miserable bitrate of 4.75 to 12.2 kbps; qualities unheard of since the early days of the telegraph (ok, I'm kidding: AMR-NB is the voice codec most GSM and UMTS phone calls are carried over).
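To put those numbers in perspective, here's a back-of-the-envelope sketch in plain Python (the 128 kbps AAC figure is just an assumed, typical music bitrate for comparison): an 8 kHz sampling rate caps the recoverable audio bandwidth at 4 kHz (Nyquist), and at AMR-NB's top 12.2 kbps mode a minute of speech is a tenth of the size of the same minute in AAC.

```python
# Back-of-the-envelope comparison of audio codec data rates.
# Illustrative figures: AMR-NB's top mode is 12.2 kbps at 8 kHz sampling;
# 128 kbps is an assumed, typical AAC music bitrate for comparison.

def audio_size_kb(bitrate_kbps: float, seconds: float) -> float:
    """Size in kilobytes of `seconds` of audio at a constant bitrate."""
    return bitrate_kbps * seconds / 8  # kbit -> kByte

def nyquist_bandwidth_hz(sampling_rate_hz: float) -> float:
    """Highest audio frequency representable at a given sampling rate."""
    return sampling_rate_hz / 2

amr_nb = audio_size_kb(12.2, 60)  # one minute, AMR-NB top mode
aac = audio_size_kb(128, 60)      # one minute, assumed AAC bitrate

print(f"AMR-NB, 1 min:   {amr_nb:.1f} kB")
print(f"AAC 128k, 1 min: {aac:.1f} kB")
print(f"AMR-NB audio bandwidth tops out at {nyquist_bandwidth_hz(8000):.0f} Hz")
```

So every recording is capped at telephone-grade 4 kHz bandwidth, no matter how capable the hardware underneath is.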
Now, you may be asking: Couldn’t the manufacturer add encoding support for more audio codecs? Sure, and some do. Others, like HTC for example, don’t. Even on high-end devices like the Desire. Devices with Qualcomm Snapdragon CPUs clocked at 1GHz. With hardware support for stereo AAC encoding. No, really, what on earth is wrong with these people.
At the same time, HTC went to the trouble of adding encoding support for H.264 and for 720p video (using MPEG-4). And it makes me wonder: the fact that they added H.264 encoding support means they are at least clued up with respect to paying royalties, adding the codec to the system and making use of it. That they introduced 720p using MPEG-4, on the other hand, makes no sense: how useful is 720p video recording (recently introduced with HTC's Froyo build for the Desire), or indeed audio recording as a whole come to think of it, when the recorded audio on this phone sounds like a wax record from the 1880s, not least because of the totally backwards codec they use throughout, while one of the most powerful mobile CPUs on the market just sits there idling? Idiots.