So Google announced Android 4.1 (Jelly Bean), the next version of the operating system, with much-awaited performance improvements and some new (marginal) features, available to Galaxy Nexus users in mid-July and to the remaining 99% of the Android ecosystem sometime between a year from now and never. Along with the new version of Android, Google announced several other products and services, including the Nexus 7″ tablet, which I won’t cover in this post. What I am going to focus on is the Nexus Q, the first product designed exclusively by Google: a ‘social’ media player that is, intriguingly, manufactured in the U.S. and costs almost $300. It is not representative of a new class of devices, nor is it functionally all that impressive: it’s a device that connects to speakers and TVs, like countless before it. The difference in the case of the Nexus Q lies in its impressive industrial design and the fact that 1) it can stream data from Google Play and 2) it allows nearby Android devices to ‘share’ the content they hold in their local memory. Google, borrowing from the Apple playbook of shoving features into products without really caring about what people want or need, created a product aimed solely at increasing the impact and revenue of their online media store (Google Play), with a small cherry on top that makes use of the networked nature, ubiquity and storage capacity of modern smartphones. Of course, even the ‘social’ aspect of the device is not exactly groundbreaking; the functionality has been available, one way or another, albeit in more ‘technically challenging’ forms, for years.
But what’s important to note about the Q is that it doesn’t cover what is probably the number one request (or, if you prefer, the number one pet peeve) people have about other locked-down devices of this kind (yes, Apple TV is a prime example): the ability to stream content stored on existing devices in the home network, such as other computers or NAS boxes, using standard or widely-used protocols such as DLNA/UPnP, DAAP/AirPlay or even SMB/NFS. With the Nexus Q, Google demonstrates the same denial it has shown over the success, or rather the lack thereof, of Google+. And in their attempt to convince people that the botched Nexus Q is worth the huge premium they’re asking (other devices of the same kind cost 2-3 times less), in their enthusiasm for silly features like overlay mustaches in Hangouts, in their desire to marry profits with openness, they miss the mark. They fail to deliver the extremely seductive walled-garden, locked-down experience that allows Apple to thrive at the cost of openness and empowering products and services, and, at the same time, they lose the geek cred that made Google so affable in the first place. Google largely knows this and has been performing a balancing act for years: the ‘openness’ of Android, that of ‘the social graph’, their contribution to the public via open source software. The company is cognizant of its differences from the other players in the market; yet with today’s announcement of the Nexus Q there is a paradox: today we have so much technology available to us, often of high quality and free, in the form of open source software and powerful, affordable hardware. Google, like Apple and everyone else in between, realises this and has been struggling to trap the cat inside their shiny new boxes. The Nexus Q could have been a great little device, and maybe it will become one as soon as people start ‘tinkering with its internals’.
Yet it would have been much better if Google had shipped it like that, instead of merely ‘tolerating’ hackability by the community. By blatantly promoting its own cloud-based offerings instead of trying to marry them with locally stored content, Google is crippling its product.
In the end, like Microsoft a few days ago, Google is copying Steve Jobs’s style and strategy: provide a platform and tools, but focus on content and lock-down as a monetisation technique to counter technology commoditisation. Gundotra’s presentation of Google+ Events reeks of vintage early-2000s Jobsian technique, with the preset graphics and the slideshow features. With everyone and their dog showcasing a complete lack of tech culture in the face of a dominant, resurgent Apple, the world is treading into dangerous territory: a technology monoculture in the corporate space that bases its profitability on polished jailhouses instead of innovation and increased freedoms. In this context, the cloud remains a key technology that can help liberate us from the burdens of backups and local storage, but can also imprison us in a world where a few corporations control all of our information.
Which brings us to Glass. The Sergey Brin Project Glass segment was super fun and very impressive. If anything, it shows that Google is good-humoured and daring and works on interesting technology. Of course, the demo and the features shown are not exactly representative of the product, its potential or its usefulness. The segment that followed the breathtaking landing on Moscone and Brin’s introduction was stale and unconvincing: I doubt a mom would want to ‘make contact’ with her baby while looking like the Borg, or that anyone would think twice before relegating their ‘Glass’ to a drawer and never using it, its novelty quickly overshadowed by its unnatural and intrusive appearance. It’d be a shame if Google really thought that people want to look like futuristic soldiers from a second-tier sci-fi movie. Sure, the concept is nice, and perhaps as technology progresses it will become easier to integrate such functionality into more human-friendly products, like contact lenses or ordinary optical glasses. As it stands, Project Glass is an unappealing curiosity that serves better as a tech demo than as a product right now.
Seeing the Microsoft Surface [really, Microsoft? You guys couldn’t find a new, unique name?] keynote reinforces my belief that the company has long lost the capacity to create and project a genuine, unique and interesting image, products and services.
When Steve Jobs returned to Apple, he quickly did away with most of the company’s less successful product lines. He killed the printers, the clones, the Newton and many other products and services, and focused on creating a few exceptional products. By the early 2000s Apple had started gaining mindshare, both in the computing world with OS X and more broadly with the wildly successful iPod. At the time, given Microsoft’s tendency to copy features, ideas and æsthetics from Apple, I thought that Apple, being a much smaller company, was serving as a sort of research facility for Microsoft, which then took the successful ideas and commoditised them. Even though Apple is now much larger than Microsoft, the trend continues; the Surface keynote was a cheap copy of Apple’s events, down to the ‘How we made it’ interlude videos and the speaker rotation and style, while the products, still better designed and refined, oozing with much-needed quality in an ever-cheaper industry, sadly fail to go beyond marginal improvements to existing, commonplace technology, a few technical features most people don’t know or care about (MIMO antennæ, optically bonded display, etc.) and lacklustre features and presentation. Sure, Surface is new, and it introduces what Windows 8 is all about: a tablet form-factor with a full-featured OS. But Surface didn’t really create excitement in the audience; people didn’t seem enthused, and the presenters tried too hard to convince everyone of “how you’ll fall in love with it” when the device didn’t seem all that great. No matter what, this company tries too hard to ‘copy’ instead of ‘creating’, to ‘replicate’ concepts, features and products instead of cultivating its own culture, its own vision.
The result shows: by trying so hard to adopt the Apple mantra of style, quality and innovation without really believing in it, they come out with mediocre products, like the Zune player and now, seemingly, the Surface tablet. That doesn’t mean, of course, that Microsoft never innovates or that it inherently could not. It is a company with both the resources and the technology to revolutionise computing, a company that has introduced countless innovations over the years; but on average it settles for releasing second-rate products, caring more about the economics of doing business than the passion, quality and enjoyment of creating technology and products.
Surface, a tablet that wants to replace not only devices primarily aimed at consuming content, like the iPad, but eventually your laptop, probably tries to do too much at once, ending up mediocre at both. With its schizophrenic cover/keyboard accessory, it’s presented as a notebook replacement; with Metro, it competes with the iPad. Surface Pro will also be able to run classic Windows programs, where the plain model won’t (it’s ARM-based). And how about battery life? I think Google and Apple got it right when they separated PC and post-PC devices. In terms of productivity nothing, let alone a tablet, beats a workstation, with its massive screen real-estate, ergonomic size and positioning and greater power; Microsoft’s desperate attempt to differentiate the Surface from the iPad and Android tablets by invoking ‘creation’ rather than ‘consumption’ and pointing to the keyboard and higher performance is ludicrous. I’d take a brand new MacBook Air any time, even running Windows 8, over the just-released Surface. Sure, there’ll be cases where the full power of a notebook might be useful in a tablet form-factor, and cases where the presence of a touch-based keyboard will be better than its absence, but in general this is no one’s vision of a productive portable experience.
I’ve written about SimCity in the past; in my opinion it is one of the most intriguing games ever to grace a personal computer. The following videos showcase some of the fundamental changes in the upcoming game, SimCity, a reboot of the franchise that features a brand new engine called GlassBox. The engine introduces agent-like behaviour in the objects that inhabit the SimCity universe, creating an extremely consistent visual representation of the internal state of the game (something that, in turn, maximises realism). One of Will Wright’s previous concerns (and perhaps those of the rest of the team at Maxis), especially for SimCity 4, was that the game was becoming too complex to be commercially successful. In my opinion, this is completely wrong. SimCity draws its appeal from the fact that it endeavours to be a realistic yet fun city-building simulator. Complexity is not the problem; it’s a benefit, and I’m sure the new agent-based engine will allow much more complex, yet easily graspable and consistent, game concepts to be playable and fun. To my mind this has always been more than just a game, and the concepts behind SimCity (as well as, to my knowledge, the engine) have been used for real city-management needs.
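The appeal of the agent-based approach described above is that there is no gap between what is simulated and what is shown: discrete agents carry resources between objects, so the visible world *is* the internal state. A minimal sketch of the idea, in Python; this is purely illustrative and not GlassBox itself, and all names here are invented:

```python
# Toy agent-based simulation: instead of updating abstract statistics,
# an agent physically moves one unit of a resource per tick between
# buildings, so inspecting the objects always reflects the true state.

class Building:
    def __init__(self, name, water=0):
        self.name = name
        self.water = water

class Agent:
    """Carries one unit of water from a source building to a sink building."""
    def __init__(self, source, sink):
        self.source, self.sink = source, sink

    def step(self):
        if self.source.water > 0:
            self.source.water -= 1
            self.sink.water += 1

def run(steps, agents):
    for _ in range(steps):
        for agent in agents:
            agent.step()

tower = Building("water tower", water=10)
house = Building("house")
run(4, [Agent(tower, house)])
print(tower.water, house.water)  # after 4 ticks: 6 4
```

Because every change is an agent doing something concrete, complexity scales while remaining graspable: the player can watch the unit of water travel rather than decode a statistic.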
I found this article on the EFF to be a very concise summary of many of the issues I’ve written (and often talked) about in the past, pertaining to the freedom to use the devices you have paid for and own as you see fit, and the increasingly worrying trend of manufacturer lockdowns that largely define what you can and cannot do with them. While Apple, with its popular iOS, may be the most well-known (and most successful) ambassador of the locked-down platform, the trend has been on the radar since well before Apple managed to escape the threat of extinction in the late 1990s; Microsoft, with Windows RT and the Secure Boot flag in UEFI, is only now actually implementing the technologies it initially developed, studied and proposed more than ten years ago with Palladium/TCPA.
The cat is still out of the box, but technology ages quickly and the threat is quite real: a combination of a cloud abused by the Valley oligopoly, the loss of ubiquitous local computing and storage, and locked-down devices would be a nightmare scenario, one that would strip the computer of the fundamental quality differentiating it from the appliances of yore: its malleability, the power derived from its programmability, its ability to solve countless problems and achieve infinite different tasks rather than perform the single function manufacturers would most likely prefer.
Mere hours after I pressed ‘Publish’ on the previous mini-article concerning walled gardens, an article on TechCrunch this morning clarified the situation we have more or less suspected for a while now: that Apple, having deprecated UDIDs (one of the things they truly did well in iOS from the beginning), will start rejecting apps that use them, after the backlash caused by lawsuits, noise and a few rogue developers who seemed keen to take advantage of their users and use their private information in ways they didn’t agree to (and which are illegal in more ways than one).
The situation with unique device identifiers is an important one. On one hand, user privacy should be the number one concern of platform owners/builders like Apple, Google and Microsoft. It isn’t, for their software can do pretty much whatever it wants with users’ private information, as we have seen several times these past few years. On the other, developers have many legitimate uses for an immutable, unique device identifier: gathering metrics for their own use, understanding the patterns of use of their applications, improving ad targeting, and enforcing proper use of their applications and communities, among others. Of course, it can also be a tool for unsolicited tracking and profiling of users, and for a range of personal-information violations.
When Google came out with Android, they failed to provide their developer community with any unique device identifier of significance. They did provide several ways for developers to get some seemingly unique identifier, but those were easily modifiable, sometimes not set at all or set to the same value across all devices sold by an OEM, and they would get reset after a factory wipe. Developers resorted to DIY identifiers, composed from the various unique component identifiers the system made available to them, such as the IMEI on phone devices or the MAC address of the WiFi network interface on others. Then Google released Android 2.3, which included a unique identifier that, while better than the previous options, was still not 100% robust.
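The DIY approach usually amounted to hashing together whatever hardware identifiers the platform exposed to derive a ‘stable’ per-device ID. A sketch of the pattern, in Python; the identifier values below are fabricated examples (on a real device they would come from platform APIs), and, as noted above, the very fact that apps could read this raw hardware data is the privacy problem:

```python
import hashlib

def composite_device_id(*hardware_ids):
    """Derive a fixed-length pseudo-identifier by hashing raw hardware IDs.

    Stable only as long as every input is stable; if any component is
    unavailable or resets (factory wipe, swapped radio), the ID changes.
    """
    material = "|".join(hardware_ids).encode("utf-8")
    return hashlib.sha256(material).hexdigest()

device_id = composite_device_id(
    "356938035643809",    # IMEI (made-up example value)
    "00:1a:2b:3c:4d:5e",  # WiFi MAC (made-up example value)
)
print(device_id)  # 64 hex characters, same inputs -> same ID
```

Hashing at least avoids shipping the raw IMEI or MAC to a server, but it is no substitute for a purpose-built, immutable, pseudo-random identifier provided by the platform from day one.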
Microsoft has belatedly joined the new-walled-garden era, first with Windows Phone 7 and now with Windows 8. The ‘new’ API and model for applications, Metro, goes one step further by not providing developers with any single unique device identification capability (there are some exceptions, but they are truly exceptional and, as of right now, undocumented). The only thing close to user/device authentication is ‘Microsoft Account’ (formerly Windows Live, Passport, etc.) integration, which is probably useless for 99% of the cross-platform applications out there that need some sort of unique identification of their users/devices.
It’s the permissions, stupid.
The whole situation boils down to botched permission-control design, abuse by advertising and analytics companies and by developers, and an extremely late regulatory and social reaction to the above, perhaps combined with a pretty simple way to raise barriers to entry for the competition while ‘solving’ the issue of privacy. All platforms have some sort of privacy/permission control, but none have a good one. Android has a pretty comprehensive permission system that assumes that, before installing an application, each user bothers to read a silly list of permissions (many of which they will probably not understand), and that once they accept, they will perpetually want to grant all those permissions to said application. There is no fine-grained permission control post-installation, no possibility to grant or revoke individual permissions before an application is launched (something like “I would like to allow App X to use my network connection, but not my location or my address book data”). iOS is similarly badly designed: no explicit permission is asked or required for using the network connection, a slew of personal data, several APIs, storage, etc. The exception is location, where iOS does a much better job than Android, probably because of the high-profile exposure their data-collection ‘functionality’ received a few years ago. At the same time, both platforms actively transmit information gathered by your device, be it nearby BSSIDs (the identifiers of WiFi networks, akin to Ethernet MAC addresses) or Cell IDs (the unique identifiers of nearby cellular transmitters/antennae), to improve their ‘network-based’ geolocation services. Google fares better in this respect, as they allow you to disable this; Apple doesn’t, as far as I know.
Then comes Microsoft, the ailing software behemoth that only recently decided that Ballmer’s rhetoric about the iPhone’s failings, the iPad not gaining any significant traction, etc. was totally wrong after all, and that they should jump on the tablet bandwagon, not in the way they’ve been trying to for about a decade, but the way Apple did with their own version of a walled garden: doing away with the desktop paradigm, providing a dumbed-down, simpler interface that does away with compatibility, file systems, etc., and using a locked-down, app store/marketplace-based model to ensure software legitimacy and boost profits. So Windows Phone 7 and Windows 8 provide new sets of APIs and a new ‘application environment’ called Metro. In the Windows 8 version, the æsthetics borrow much more than the name from Windows Phone 7, the company’s revamped operating system for mobile phones that, while a decent effort, doesn’t seem to be doing that great on the market. Metro on Windows 8, however, is not a finished product by any means, and probably won’t be ‘finished’ (that is, of sufficiently high quality) until Windows 9 is released a few years from now. Metro on Windows 8 also has permissions, like Android, but does away with unique device identifiers and any sort of meaningful API to obtain a replacement for one. It also allows the user to revoke a permission (say, for location), but only after the application has been executed, which kind of defeats the purpose.
My experience with the ‘next-generation’ platforms I have programmed on until now strongly suggests that the companies and people designing them have no idea about the implications of their work. They are experimenting, releasing APIs, platforms and products without thinking through the impact their software has on users, on the developers building applications with it, or the overall social effect of their design decisions. In the case of Android, many more developers have access to IMEIs, MAC addresses and other, arguably much more sensitive, information about devices and their users than they would have had Google paid some attention and provided a unique, immutable, pseudo-random device identifier from day one. It is also surprising how bad their permission system is, given that they at least went through the trouble of designing one in the first place. In the case of Microsoft, the complete lack of such a mechanism may eventually play its part in hurting the company’s efforts to enter the game (they are already extremely late). And finally Apple, the market leader that did so many things right in the first place, risks pissing off everybody, from the small independent companies that helped build the platform to its greatest non-platform-owning competitors, who can see through the excuse of legal heat from regulators and the government and the hypocrisy of ‘protecting the users’ privacy’, and who may use this move as an excuse to block Apple out of their own platforms. At the end of the day, the three big players in this market still get all your information, and their expansion into advertising, mobile payments, e-commerce and every possible part of the software ecosystem means they have the greatest incentive to (ab)use it.
In the end, all the privacy problems that location, unique device identification and access to other personal information may give rise to are easily solvable by a modern, smart permission system that gives the user the power to deny, revoke or grant permissions to individual applications post-installation, including system software/applications, thus creating a level playing field where the user decides what kind of access to provide to whom. That would be a clear demonstration, on the platform owners’ part, that they truly care about users’ privacy, and not just about raising barriers to entry for the competition and padding their bottom line.
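The model argued for above is conceptually tiny: a per-application set of currently granted permissions, changeable at any time, consulted on every access. A toy sketch in Python, with all class and permission names invented for illustration:

```python
# Minimal model of fine-grained, post-installation permission control:
# each permission can be granted or revoked per application at any time,
# and every access is checked against the *current* state, not against
# a one-time install-screen agreement.

class PermissionRegistry:
    def __init__(self):
        self._grants = {}  # app name -> set of currently granted permissions

    def grant(self, app, permission):
        self._grants.setdefault(app, set()).add(permission)

    def revoke(self, app, permission):
        self._grants.get(app, set()).discard(permission)

    def check(self, app, permission):
        """Called by the OS at the moment of access."""
        return permission in self._grants.get(app, set())

registry = PermissionRegistry()
registry.grant("App X", "network")
registry.grant("App X", "location")
registry.revoke("App X", "location")   # revocable after installation

print(registry.check("App X", "network"))   # True
print(registry.check("App X", "location"))  # False: revoked
print(registry.check("App X", "contacts"))  # False: never granted
```

The design point is that the check happens at access time against mutable state, which is exactly what the install-time, all-or-nothing model of 2012-era Android lacks.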
WSJ: Before Steve Jobs of Apple Inc. died, he approached you with a buyout offer. Why did you turn it away?
Mr. Ferdowsi: The problem that we’re trying to solve is a problem that only an independent company can solve. We want to let you use a Mac, or Windows PC, or iPad, or Android, without having to think about any of the technical details. It isn’t a problem any of those larger companies is going to be as inclined to solve in the same way we are.
A very, very pertinent point, seeing that we’re experiencing a renaissance of massive, vertical closed systems, walled gardens and a childish desire to lock people into proprietary platforms that try to offer everything. Look at how Google, Facebook, Apple and now Microsoft are heavily promoting their respective ‘authentication’ platforms, playing the game of ignoring_the_competition. Facebook would certainly like you to use their APIs to authenticate your users, but they don’t have to try much because they have the most powerful database right now. Microsoft heavily promotes their ‘Microsoft Account’ (previously known by half a dozen names) and will do so even more in Windows 8, while Apple makes ever-increasing use of the Apple ID across their products and services. Google, in light of their recent privacy-terms update, needs no introduction, I think, with Google+ and every other service tied to a single Google account. The fact that Dropbox fully supports practically every platform I can think of using is reason enough for me to prefer it to competing services (Ubuntu One, Microsoft SkyDrive, iCloud, etc.), and a refreshingly sane choice on their part, contrasting heavily with the established market leaders’ fear of inadvertently promoting their competition.
According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”
The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”
It’s ironic how ‘ease’ becomes the noose that chokes innovation and development. AOL, Facebook, iTunes: they all offer closed, proprietary solutions to ‘problems’ that, in more ways than one, are not so hard to solve. Solutions that seem to ‘work’, that ‘succeed’ because the ‘trend’ is to embrace ‘easy’ as opposed to ‘moderately challenging’, because the ‘smart money’ is behind them, and because of network effects.
In the last few years, that is, after the wave of ‘Web 2.0’ (ironically, yet another ‘trend’ exploited by ‘experts’ who abused it for profit) subsided, Facebook started making serious money. Its real success as an advertising platform is arguably minimal, and quite controversial. It took a long time for the advertising industry and the hordes of marketing monkeys to embrace Facebook’s walled-garden approach and do what they do best: counting. Only this time it wasn’t ‘impressions’ or ‘clicks’ or ‘conversions’ they were counting, but ‘likes’, another frivolous metric that doesn’t really mean anything in the real world. Facebook apps, once touted as the next big thing and a threat to the web, were stillborn, largely because Facebook itself took significant steps to expand beyond the confines of its site, creating interfaces, programmatic and user-facing, for other platform owners to embed in or integrate with their platforms. So we got a slew of ‘social plugins’, more ‘APIs’, etc. But there were some exceptions, like Zynga, a gaming company living inside Facebook.
Now, Zynga just launched Zynga.com. And it’s a big deal, because this is the first Facebook-dependent business of significant scale that expands beyond the confines of this walled garden du jour.
The whole ‘frenzy’ with Facebook in the ad world is now in its third year. As with AOL’s endeavours fifteen years ago, the Facebook frenzy may be past its prime; to me, a teenager in the early-to-mid 1990s, AOL ‘keywords’ seemed like a pointless exercise, yet another ‘top-down’, force-fed business model that people never cared about.
Clearly people care about Facebook; they care about the platform that connects them to the people they love: their friends and their relationships, news from their social circles, people they’d like to know better or simply keep in touch with. They could hardly care less about Facebook pages, Facebook ads, the Facebook business. Sadly, marketers and advertisers, typically the last group to perceive change, and perhaps the most dependent on ‘convention’ (make no mistake, Facebook is convention, as is Google), will take a bit longer to ‘wake up’. That Zynga chose to move beyond Facebook is undoubtedly a wake-up call and a sign of maturity in an industry that more often than not adopts the strategies of others instead of coming up with its own.