Navigating the World of Digital Mapping

To the casual observer, online map services like Google Maps or Bing Maps may seem like simple tools that merely overlay a searchable compilation of points of interest on a scrolling set of map images. In reality, mapping is a very complex business with immense potential going forward, with demand coming from transport electrification, autonomous cars, the consumerization of ground logistics (UPS -> Uber/Lyft), and broader use cases for unmanned aerial vehicles, among other areas.

Mapping entails digitizing the physical world, so every map service at its root needs access to mapping data. This consists of the actual imagery – satellite images, aerial photos, and street-level photos, for instance – mapped to a digital overlay of roads containing all manner of metadata (e.g. street name, type, traffic direction, speed limit, toll status). Collecting these data is an immensely labor-intensive, on-the-ground task that is never complete (roads and buildings keep changing), so there are really only a few global players from which almost all map-based services ultimately source their map data – namely HERE (originally Navteq; recently sold by Nokia to a Daimler/BMW/VW consortium), Google, and TomTom.

There are a few hybrid players. Microsoft, for example, sources map data from HERE and others but also had a hundred-odd employees building out its own map data via street vans, aerial imagery, and such (a division recently sold to Uber). Apple, which recently entered the space with Apple Maps, gets its data from TomTom but is also building out a fleet of its own mapping vans.

On top of map data, you need routing algorithms, address and point-of-interest data, search, and lots more.

Below I will start with an anecdote about my introduction to the world of mapping and then discuss some opportunities in the space today.

TA Maps and Google

After college, I shipped out to India to work at Mahindra, India’s largest automaker (and also the world’s largest farm equipment manufacturer, among other things). After moving into my apartment in Mumbai, I quickly realized that the Google Maps app – which back home in the U.S. I had used extensively on my phones at the time, an iPhone and an HTC HD2 (running Windows Mobile 6.5) – had incomplete data in some parts of the city, so I often found myself switching between Google and other map apps. Then I upgraded to an HTC HD7, running Microsoft’s rebooted-from-scratch Windows Phone 7 OS (whose story I’ve written about), and there was no Google Maps app in the store at all.

Windows Mobile had earlier conquered the pre-iPhone high-end PDA/smartphone market, crushing Palm OS with a remarkably feature-packed and open OS. So if Google wanted its mapping service in high-end mobile users’ hands, it had to be on Windows Mobile (just as it had to be on iOS later). Yet, as large tech companies often do (e.g. Microsoft ceasing development on Internet Explorer after IE6, having beaten Netscape, only to be woken up later by the upstart Firefox project), Microsoft was busy running a victory lap when the iPhone launched and took a while to respond – eventually jettisoning Windows Mobile completely in favor of the ground-up Windows Phone 7. Meanwhile Google’s acquisition, Android, launched as a very Windows Mobile 6-like response to the iPhone. By the time Windows Phone launched, Google felt it could forgo its biggest rival’s platform entirely and thereby perhaps gain a competitive advantage for Android.

So, with an incredibly smooth Windows Phone 7 device that I wanted to use daily, and no Google Maps in front of me, I sought to fix the problem by writing my own mapping app – TA Maps – that would initially serve as a Google Maps client and then expand to include multiple map sources, thereby solving the constant switching problem I had with Google Maps on iOS and Windows Mobile 6.x. To do this, I sourced map tiles from Google (and later Bing, OpenStreetMap, and others), plugged into their point-of-interest search and directions APIs, and then handled a bunch of curiously complicated tasks like reverse-engineering Google’s compression algorithm for map polylines (e.g. route lines on a map for directions).
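For the curious: the polyline compression mentioned above is Google’s (now publicly documented) encoded-polyline format. Each latitude/longitude delta is scaled by 1e5, zig-zag encoded so the sign lives in the low bit, split into 5-bit chunks, and offset into printable ASCII. A minimal decoder of the documented format – a sketch, not my original app’s code – might look like this:

```python
# Decoder for Google's encoded-polyline format: each lat/lng delta is
# scaled by 1e5, zig-zag encoded, split into 5-bit chunks (low bits
# first, continuation bit 0x20), and offset by 63 into printable ASCII.
def decode_polyline(encoded: str) -> list:
    coords, index = [], 0
    lat = lng = 0
    while index < len(encoded):
        deltas = []
        for _ in range(2):  # one varint each for latitude and longitude
            result = shift = 0
            while True:
                b = ord(encoded[index]) - 63
                index += 1
                result |= (b & 0x1F) << shift
                shift += 5
                if b < 0x20:  # continuation bit clear -> last chunk
                    break
            # Undo zig-zag encoding (sign stored in the low bit).
            deltas.append(~(result >> 1) if result & 1 else result >> 1)
        lat += deltas[0]
        lng += deltas[1]
        coords.append((lat / 1e5, lng / 1e5))
    return coords

if __name__ == "__main__":
    # Example string from Google's own documentation:
    print(decode_polyline("_p~iF~ps|U_ulLnnqC_mqNvxq`@"))
    # -> [(38.5, -120.2), (40.7, -120.95), (43.252, -126.453)]
```

Reverse-engineering this before it was documented mostly meant staring at encoded strings until the 5-bit chunking and sign trick fell out.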

With multiple data sources, I solved my own navigation problem and others’ too (e.g. by building in OpenCycleMap for bicyclists). I put the app up on the app store and gained thousands of free and paying customers across the world, learning a ton about mapping along the way. For example, when customers in China all reported the map as being off by a certain distance, I discovered that the Chinese government mandates a coordinate offset from the (US military-run) GPS system – a rudimentary security measure ensuring that any non-China-specific map would be off unless it specifically compensated for the offset.

Then Google began to restrict access to its map data, deprecating old versions of its API and forcing users onto a new one, which required 1) authenticated tokens that identified the particular client requesting map data, and 2) agreement to ever-narrower usage terms. When the API was updated to essentially ban native third-party navigation clients from using Google Maps, I received a not-so-friendly email from the Google Maps team – not quite a takedown notice, but clearly on the way to one. At that point, I decided to just take down the app (it still had standalone value sans Google, but I was too busy with my actual job to maintain it). Around the same time, another app emerged as a pure-play Google Maps client; it was even (egregiously) called “gMaps” and used a modified version of Google’s own Maps icon as its own. The difference? Those developers were in Russia and had no qualms agreeing to terms they would then explicitly violate (and then fighting a technical cat-and-mouse war around Google’s API access blocking).

Google clearly saw map services as a tool to gain a competitive advantage in other areas of its business. For instance, when Motorola – then one of the top Android phone manufacturers – decided to use the services of the startup Skyhook Wireless to provide its users better location sensing than Google could provide, Google’s top executives responded with fury to the threat of losing consumer location data, forcing Motorola to switch course on Google’s supposedly “open” Android platform.

A couple years later, in January 2013, I and some others online discovered that Google had begun to specifically block Windows phones from accessing its own Google Maps website—presumably trying to get users to switch to Android. Google somewhat absurdly claimed that this was because Google Maps only worked well on browsers built on Webkit (i.e. Chrome, Safari) – strange, as the site worked fine on desktop Internet Explorer, Firefox, etc.

As I wrote here, if you changed the user agent (UA – a piece of identifying text by which the browser tells websites about itself and the device it’s running on) of Google’s own desktop Chrome browser to pretend that it was running on Windows Phone, it would no longer load Google Maps; conversely, when a different UA was used on a Windows phone, the site loaded perfectly fine. Eventually the mainstream tech media picked up the story, and having been caught red-handed, Google was forced to re-allow access to its site. (Incidentally, so much for “Don’t be evil.”)
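The point of the experiment is that a user agent is just a header the client chooses to send, so any client can claim to be any browser. A small Python sketch (the URL and UA string here are illustrative, not the exact ones involved):

```python
import urllib.request

# Build (but don't send) a request with a chosen user agent. The UA is
# simply a header the client sets, which is why any browser can be made
# to impersonate any other.
req = urllib.request.Request(
    "https://www.google.com/maps",
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36"},
)
# urllib normalizes stored header keys to "Capitalized" form.
print(req.get_header("User-agent"))
```

A server doing UA-based blocking sees only this self-reported string, which is exactly why the blocking was so easy to expose.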

HERE, Uber, and Waze

Last year, Nokia put its market-leading maps service up for sale, by then rebranded from Navteq / Nokia Maps to HERE. This was part of its exit from consumer-facing businesses, which included selling its best-known mobile phone unit to Microsoft. (Microsoft’s then-CEO Steve Ballmer apparently also wanted to buy HERE but was turned down by a board so skeptical of any Nokia deal that Ballmer essentially sacrificed his job for it, agreeing to a timetable for stepping down as CEO in exchange for board approval of the Nokia phone deal.)

A bidding war ensued for HERE, in which Uber battled a consortium of the German car manufacturers – Daimler, BMW, and Volkswagen. Why would either of these parties be interested in what might seem like off-core-competency offerings for either? The answer is simple – the future of transportation will depend on distributed data collection.

An Israeli startup, Waze, was an early entrant on the consumer side of this space, with the basic premise that if you collected position and speed data via a smartphone app running inside consumers’ cars, and had enough users, you could get a good idea of real-time traffic flows (better than existing sources of traffic data, such as government-installed highway car counters that at best can estimate traffic at particular locations) and use this to provide better traffic-adaptive routing. Waze executed exceedingly well and was acquired by Google for $1 billion.

Waze is dependent on a smartphone running inside a car, though. What if one thought of the car itself as the device – an increasingly sensor-laden rolling connected device? Every car on the road could then provide all of what Waze sees and much more (e.g. road grades, potholes, lane markers, more precise positioning). Herein lies the problem for carmakers: platform companies like Google (Android Auto), Apple (CarPlay), Microsoft (Windows Embedded Auto), and BlackBerry (QNX) have designs on moving beyond where they currently play – in-dash infotainment systems – and into the car as a data platform.

Carmakers hate the thought of being reduced to commodity device builders like the no-profit world of Android smartphone/tablet manufacturers. Hence the German automakers’ interest in HERE, to preemptively build out the car as a digital platform and avoid getting marginalized by Google (which is the second largest mapping player and now, with Waze, also the leader in crowdsourced road data). HERE has its own infotainment platform, but more importantly, soon every Mercedes, BMW, and VW (meaning VW, Audi, Porsche, etc.) will provide Waze-like data to HERE, building up a strong, Google-free Waze alternative. HERE’s ambition is to power both tomorrow’s cars and location-based applications of all sorts.

Meanwhile, car dispatch apps like Uber, Lyft, Didi Kuaidi, and Ola are essentially in the logistics business. The better they can route cars, the faster customers and drivers meet, the more transactions the companies process, and the more they profit. The business of route optimization, previously the domain of delivery companies like UPS (whose in-house routing famously avoids left turns at almost all costs, reducing wait time in turning lanes and avoiding accidents), is now squarely within the sights of Uber and its ilk. Uber’s driver app on Android (but not iOS) currently bounces drivers out to Waze by default for optimized routing. But that’s a ton of useful data that Uber is feeding to Google instead of itself, and at the same time, Google is looking to encroach directly on Uber’s terrain (with its own car-sharing service). So for Uber, becoming Google-free as quickly as it can is a priority.

One route was for Uber to buy HERE outright and have a full-fledged mapping business on its hands. With its huge valuation, Uber could probably have afforded to outbid the German automakers too (which is itself worth reflecting on). Yet Uber lost that bid and opted for another strategy: a deal with Microsoft. Under CEO Satya Nadella, Microsoft is focusing heavily on cloud-enabled services and treating everything below that in the stack as a commodity (its own offerings there will eventually just be demand drivers). Part of that is a new strategy for its map services (such as Bing Maps) in which, rather than driving imaging vans around the world, Microsoft will strike strategic deals with map vendors like HERE to source imagery while focusing on higher-end services (such as 3D mapping and integrating mapping into other services). So Uber and Microsoft struck a deal by which Microsoft is transferring its surface imaging unit (and the associated technology) to Uber, and Uber will integrate deeply into Microsoft services like Office and Cortana. With this, Uber can eventually turn its global network of drivers and riders into a huge source of map data that will be of value for its own routing but potentially also to others.

Looking Forward

At Mahindra, I eventually headed strategy and tech planning for the electric car venture, Mahindra Reva (a startup in Bangalore that we had acquired). One of my focus areas was building out a vision for the connected car, and as part of that, I looked at areas where we could build EV-specific experiences. One idea was in mapping: electric powertrains are drastically more efficient than internal combustion engines (ICEs), so when looking to improve efficiency and maximize range, one starts to consider things like aerodynamic drag and road grade much earlier than with ICEs (where these factors really only matter for racing cars).

Could we create map routing that would optimize energy consumption by, say, sticking to flat or downhill roads? I met with map vendors and realized the idea would be challenging to implement, because most navigation apps calculate distance as if the terrain were flat (a two-dimensional, top-down view), not a 3D-mapped, altitude-sensitive true distance. Further, in some regions, grade data were not available at all. We would have had to develop grade-sensitive routing in-house, which was beyond our core competence, but the opportunity here remains significant.
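To illustrate why grade matters for energy-optimal routing, here is a toy per-segment cost model in Python. Every number in it (vehicle mass, rolling-resistance coefficient, regeneration efficiency) is an assumption for illustration – this is not data or code from Mahindra Reva, and it ignores aerodynamic drag, speed, and drivetrain losses:

```python
import math

def segment_cost(horizontal_m: float, elevation_gain_m: float,
                 mass_kg: float = 1500.0, rolling_coeff: float = 0.01,
                 regen_efficiency: float = 0.6) -> float:
    """Toy per-segment energy cost (joules) for an electric car.

    Rolling resistance acts over the true 3D segment length, climbs
    cost full potential energy, and descents recover part of it via
    regenerative braking. Purely illustrative assumptions.
    """
    g = 9.81
    # True 3D length, not the flat "top-down" distance most apps use.
    true_length = math.hypot(horizontal_m, elevation_gain_m)
    rolling = rolling_coeff * mass_kg * g * true_length
    if elevation_gain_m >= 0:
        grade = mass_kg * g * elevation_gain_m  # energy to climb
    else:
        # Descent: recover only a fraction through regen braking.
        grade = regen_efficiency * mass_kg * g * elevation_gain_m
    return rolling + grade

# A 1 km flat segment vs. the same segment with a 5% climb:
flat = segment_cost(1000.0, 0.0)
climb = segment_cost(1000.0, 50.0)
```

Even this crude model shows a climb dwarfing the flat-road cost, and because regeneration is lossy, a hill climbed and descended still costs more than flat ground. Note that regen can make steep-descent segments negative-cost, which plain Dijkstra-style shortest-path routing doesn’t tolerate – one of several reasons grade-aware routing is genuinely harder than it looks.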

There are lots of potential applications in robotic navigation as well – how would an Amazon delivery drone best navigate an urban environment (FAA rules permitting), for instance?

Clearly, much remains to be done in mapping, and it’s quite an exciting field today.

By: Ashish Bakshi



“Mood” is defined as “a pervasive and sustained emotion that colors the perception of the world. Common examples of mood include depression, elation, anger, and anxiety. In contrast to affect, which refers to more fluctuating changes in emotional ‘weather,’ mood refers to a more pervasive and sustained emotional ‘climate.’”[1] While “affect” is an external and observable expression of emotion, “mood” is internal.

Assessing another person’s mood, even when physically present with them, can be very difficult. In person, one can resort to verbal and non-verbal communication to make this kind of assessment. One might simply ask questions such as “how are you?” or “is something wrong?”, but we all know the verbal answer rarely matches the person’s actual mood. As for non-verbal communication, one might try to draw conclusions about another person’s mood from observations of his or her affect.

However, even when one can assess someone’s affect – through tone, gestures, and general demeanor – it may be incongruent with their mood. To complicate matters further, what’s considered an appropriate level of affect to display to the outside world varies across cultures, situations, and personalities. If it is already so challenging to evaluate another person’s mood correctly in person, with direct access to the physical cues often said to make up the bulk of communication, consider the challenge of evaluating someone’s mood online.

Well, this is precisely what Apple claims it wants to do – assess your mood for purposes of ‘mood-based advertising’ – in the “inferring user mood based on user and group characteristic data” patent application (No. 13/556,023), published this past January. Online advertisers already use a host of contextual factors – location, age, time of day, types of searches and purchases, and general browsing history – to target individuals, and knowing someone’s mood would add yet another powerful dimension to their arsenal. No one doubts how influential mood is in the way a person processes an ad, or how mood can impact purchasing behavior.

What might this form of advertising look like? Imagine that Apple could correctly assess in real time whether you’re happy or sad. A brand such as Coca-Cola, which wants to reinforce its psychological association with happiness, might choose to show you ads only when you’re happy, while a shoe brand might target a certain ad at a lonely, sad young woman of a certain income category, who may be more susceptible at that moment and engage in some impulsive retail therapy.

How does Apple intend to accurately measure something as intangible as mood, though?

According to its patent application, Apple’s system would collect and analyze a combination of physical, behavioral, and spatial/temporal data over a period of time to build a “baseline mood.” Ongoing data would then be compared against this baseline profile, using “mood rules” defined along these dimensions, to infer your mood in real time. Behavioral data might include engagement with social media (what you are posting, what you are looking at, and how often), online browsing, and engagement with apps (which ones, and in what sequence), paired with age, gender, and spatial/temporal data such as location and time.
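As a rough illustration of the “baseline plus mood rules” idea – and this is entirely my speculative sketch, not anything from the patent’s actual claims – one could imagine comparing a single current signal against a user’s historical baseline:

```python
from statistics import mean, stdev

def infer_mood(baseline_samples, current_value, threshold=2.0):
    """Speculative sketch of a 'baseline plus mood rules' inference.

    Compares one current physical signal (say, resting heart rate)
    against a user's historical baseline and applies a trivial rule on
    the z-score. Purely illustrative: a real system of the kind the
    patent describes would fuse many behavioral, physical, and
    spatial/temporal signals, not a single number.
    """
    mu = mean(baseline_samples)
    sigma = stdev(baseline_samples)  # needs at least two samples
    z = (current_value - mu) / sigma if sigma else 0.0
    if z > threshold:
        return "elevated"
    if z < -threshold:
        return "depressed"
    return "baseline"
```

The hard part, of course, is everything this toy omits: choosing signals, weighting them per user, and mapping statistical deviation to anything resembling an actual mood.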

Physical data – and this is where it gets quite interesting – could include the likes of your heart rate, blood pressure, body temperature, or vocal expressions. Apple’s patent application includes diagrams giving a high-level overview of this data process.

What does it mean for advertisers?

In traditional media, many companies held the belief that people in a good mood would respond better to advertising. Some psychological research suggests that, in fact, people who are feeling low may be most vulnerable to advertising. With online advertising that can incorporate ‘mood’ as a signal, advertisers would have more runway to test this hypothesis, given the lower testing costs of online advertising versus, say, TV advertising. Regardless of such testing, however, advertisers will jump on this type of data to further refine their targeting strategies – whether that proves effective or not, harmful or not, remains to be seen.

What does it mean for consumers?

With privacy already being invaded in countless ways, who exactly will welcome this with open arms?

What data should be available to advertisers? How nervous will people be about Apple keeping this kind of data safe? What if it gets into the wrong hands? What happens if Big Brother knows my mood, location, age, physical characteristics, and interests – all in real time?

The challenge of recognizing ‘mood’ is something medical professionals have not yet untangled to their satisfaction. This raises the question: would something so potentially innovative not serve the world better in medicine than in advertising? Maybe I’m in the “mood not to see ads” – what then? What is known of the potential effect such advertising could have on my mood? What will be done to protect against misuse, or even to understand the scope of what misuse means?

The use of biometrics, were mood-based advertising to become a reality, also makes me think differently about the iWatch. Initially, I wondered what was so new about the product – how will it serve customers, who may already have an iPhone, in a truly new way? Is there enough to make someone buy the watch in addition to their phone? Now I see it as a first step in Apple’s building out its capability to collect biometric data. It is indeed going to serve Apple quite differently from any of its other products.

Final thoughts

How will Google respond to this? Some say it is venturing into incorporating data on our behavior in the physical world through acquisitions such as Nest, but has it done anything in the way of detecting and incorporating mood? Which company makes people more nervous, and which is taking matters ‘one step further’? Other players have filed patents for similar ‘mood-based advertising’ systems as well – Microsoft’s, for instance, would rely on data collected online as well as data collected through the Kinect sensor.

While there is no ‘real-time mood based advertising’ currently in use, and its use may be quite far off in time, the tale of The Apple and the Moodreader provides some important food for thought.

Sources:

USPTO – http://www.google.com/patents/US20140025620

Business Insider – http://www.businessinsider.com/apples-mood-based-ad-targeting-patent-2014-1

The American Psychiatric Association – http://bit.ly/1zoDMfk

Apple Insider – http://appleinsider.com/articles/14/01/23/apple-investigating-mood-based-ad-delivery-system

GeekWire – http://www.geekwire.com/2012/happy-sad-microsoft-system-target-ads-based-emotional-state/


[1] Source: The American Psychiatric Association



The Death of the Tech Giant: how the rise of open, flexible, heterogeneous architectures has reshaped the technology stack and distorted winner-take-all dynamics in IT

Traditionally, tech has been characterized by discontinuous innovation – that is, innovation not built on top of existing standards or infrastructure – giving rise to entirely new markets, each supported by a unique value chain that standardizes and then coalesces around one dominant player. Semiconductors, PCs, relational databases, and local area networks (LANs) are all examples of such innovations that spawned what many refer to as the modern-day tech giant – in those cases, Intel, Microsoft, Oracle, and Cisco, respectively. These companies, whose products became the standards around which entire new supply chains formed, fundamentally built and shaped the technology stack and as a result were able to establish protective moats around their businesses. They erected seemingly insurmountable barriers to entry and enforced punitively high switching costs on the entire technology value chain.

Today, however, these companies face challenges that are curtailing growth and compressing margins. Indeed, each of the four companies cited above is trading at or near historic lows (on a price/earnings basis). There are myriad secular reasons that help explain why, arguably, the four most dominant tech companies of the last two to three decades are struggling. However, I want to put forth a broader, macro-rooted explanation: as we continue to move toward an IT model characterized by flexible, open, highly heterogeneous architectures, the tech paradigm shifts away from winner-take-all dynamics toward a state in which no single player exerts a disproportionate amount of force on a particular market.

Before examining what’s different today, it’s helpful to understand how these tech giants historically came to dominance. A discontinuous innovation would spur a period of hyper-growth that coincided with mass-market adoption of the new technology. During this time, several companies would come to market with competing offerings, yet in an effort to scale rapidly, market stakeholders generally standardized around a product from a single vendor, building compatible systems and getting a whole new set of product and service providers up to speed to build a new value chain. This act of standardization catapulted a single company into a position of overwhelming competitive advantage, as seen with Intel’s x86 chip architecture, Microsoft’s Windows operating system, the Oracle Database, and Cisco’s TCP/IP network routers.

So, what’s changed recently? I argue that there isn’t one principal catalyst for this shift, but rather many small evolutions in the way technology is developed, procured, and deployed that together have distorted winner-take-all dynamics. Here are several important factors.

  • Increased complexity: As the tech stack has evolved, new layers have emerged (hypervisor, management, etc.) and new models have been created for developing and deploying different sets of applications – all of which is to say the datacenter has grown increasingly complex. There is no “one-size-fits-all” approach to IT.
  • Prevalence of open source: Open source software, which is highly flexible and customizable (and free!), has proliferated within the datacenter in recent years, lowering reliance on proprietary commercial offerings.
  • Rise of IT-as-a-Service: More and more IT professionals have espoused a service-based, on-demand approach to deploying and consuming IT resources – cloud computing. This, in turn, has necessitated infrastructure that is modular and highly automated. In this approach, many of the underlying IT building blocks (compute, storage and soon network) become commoditized with management and/or infrastructure software becoming the value-additive differentiator.
  • Increased tech fluency: In general, there are more skilled IT professionals and engineers capable of creating complex systems out of disparate IT building blocks. There is less reliance than ever on fully-baked, out-of-the-box solutions from a single vendor.

All of this implies that barriers to entry for competitors and switching costs for customers are falling rapidly, and the disproportionate weight once-dominant tech players could exert on suppliers is being eroded by new entrants, open-source solutions, and even individual engineers working out of their parents’ garage. In many ways, the tech giants of yesterday are victims of their own dominance, as customers today are wary of closed, complex proprietary architectures and incredibly sensitive to vendor lock-in. Certainly Intel’s, Microsoft’s, Oracle’s, and Cisco’s statuses as market leaders will not disintegrate overnight, but previous levels of growth and margin expansion are not sustainable in this new IT paradigm. Moreover, winning status as a “tech giant” is harder than ever, if not impossible – just look at VMware, the company that, in my mind, came closest to near-giant status on the back of its virtualization platform but is now in the midst of a difficult product transition. It makes me wonder whether we’ll ever see the likes of Microsoft or Intel dominating IT in future generations, or whether winner-take-all dynamics have simply shifted entirely into the application layer.



How HTML5 will end platform wars

“We support two platforms at Apple. Two. The first is HTML5 […] and the second is the AppStore”

–Steve Jobs, WWDC 2010

War of the platforms 1.0

A lot has been written about the “platform wars” between Apple and Microsoft. The quick summary is as follows:

Apple established a dominant position in mass-market personal computing in the 1980s by integrating its hardware and operating system into a single package. Popular applications such as VisiCalc, the first spreadsheet software, were created for Apple’s platform, thus forcing consumers seeking those applications to choose Apple over its competitors.

Microsoft chose a different strategy. By licensing its Windows operating system to many hardware manufacturers, Microsoft established a wide hardware footprint, which translated into a larger install base, which in turn made independent software vendors prioritize development of Windows applications over Mac ones. Furthermore, this strategy commoditized the hardware market, enabling Microsoft to extract tremendous profits from the ecosystem it nurtured.

In the mid-1990s Apple tried to adopt the Microsoft strategy by licensing its operating system to other hardware manufacturers – the so-called clones. However, by that point Microsoft’s market domination was too entrenched, and Apple’s market share continued to slide.

Upon his return to Apple, Steve Jobs quickly killed the clone program and started to gradually rebuild the company, focusing on trendy, aesthetically appealing computers as well as Mac OS X, a new operating system. Jobs also took care to ensure that Microsoft would continue developing its Office suite for the Mac.

War of the platforms 2.0

With the introduction of the iPhone, which came with the new iOS operating system, Apple kicked off the post-PC era. Google quickly followed with the Android operating system, starting the current platform war for dominance in mobile devices.

Seemingly taking a page out of the Microsoft playbook, Google opted to license Android to hardware manufacturers and succeeded in getting most major phone and tablet makers to use it. Today Android’s market share exceeds that of iOS. The two companies are competing for developers and independent software vendors; while first-mover advantage allowed Apple to establish a sizable ecosystem, Google is gaining, and the majority of popular mobile applications today are available on both iOS and Android.

Adobe Flash

Adobe carved out a niche for its Flash technology. Primarily used for creating rich web-based applications, especially those with video, Flash is basically a platform within a platform. An application written in Flash will work on Mac, Windows, Solaris, and other systems as long as the user has installed the Flash player. By choosing Flash, developers don’t have to pick between Windows, Mac, or another platform, diminishing the importance of the operating system as far as Flash-based applications are concerned. Adobe, in turn, is able to sell expensive developer tools, which developers must buy in order to reach the Flash install base.

Apple refused to support Flash on its iOS devices. Adobe accused Apple of stifling cross-platform development, while Apple justified its lack of support for Flash on purely technological grounds (see Steve Jobs’ “Thoughts on Flash”).

HTML 5 and its long-term impact

HTML 5 will allow the creation of web-based applications by enabling the browser to run more complex processes – video rendering, complex data operations, and more – in effect making the browser the operating system. Just as with Flash, an application developed for HTML 5 will work on any device that has a browser with HTML 5 support. Unlike Flash, however, HTML 5 will not be controlled by any one company. It will be a completely open standard, meaning that anybody will be able to create an HTML 5 browser and anybody will be able to create an HTML 5 application.

If we assume that the majority of software applications will become web-based, and further that HTML 5 will become the dominant platform, then the majority of software created in the future will be completely operating-system agnostic. This has two important implications:

1. Because developers won’t be choosing between competing platforms, no company will be able to muscle its way to dominance by aggressively signing on developers.

2. Because most applications will run on most devices, consumers will not be locked into any specific platform.

This will fundamentally shift competitive dynamics between technology platforms. Instead of competing to establish platform dominance and then protecting it the way Microsoft did in the late nineties, companies will increasingly compete on hardware. The role of the operating system will be to optimize fundamental hardware characteristics such as battery life and usability. Hardware will thus play a more important role in the competition between technology platforms and will cease to be a commodity.

Steve Jobs’ quote at the beginning of this post, as well as the fact that Apple is one of the major contributors to the HTML 5 standard, suggests that Apple believes in the above turn of events and prefers competing on hardware to competing on third-party software ecosystems.

Google’s recent acquisition of Motorola Mobility suggests that Google, too, believes this is where we are headed. Google understands that it needs to create hardware that is tightly integrated with software and that can stand on its own against Apple and other hardware manufacturers.

Microsoft is late to the race (again). The company recently introduced its touch operating system, Windows Phone. In its marketing materials, Microsoft touts compatibility with popular Microsoft products such as Office and Xbox Live. It is also reaching into its developer community to ensure broader software availability, and partnering with major hardware manufacturers to produce devices that run Windows Phone. At the same time, Microsoft is investing heavily in Silverlight, a proprietary platform that competes with Flash.

What does all this mean for consumers?

I think consumers will benefit from the new competitive dynamics. By not having to worry about whether a particular device can run our favorite applications, we will be much freer in choosing which devices to buy. I also believe that smaller device manufacturers will proliferate. Who knows, maybe there’s another Apple in the making.



What’s the future of computer applications? Will it be based on so-called “web apps,” accessed through web browsers that run on terminal-like computers and mobile devices, or is there still space for the classic model of applications that are actually installed on the device (perhaps in a more transparent way)?

A computer application was, for a long time, synonymous with something that (i) either came pre-installed on whatever device you acquired or (ii) you had to buy physically (on the old diskettes/CDs) and install yourself (or, as most users did, ask someone else to do). The whole process of buying and installing an application was expensive and too complex for the average user, who ended up sticking with the pre-installed package that came with his or her computer.

With the development of the internet and the growing number of people with broadband access, some companies started to question and evolve this classic model of buying and using computer applications, though in slightly different ways.

One of the first to challenge this concept was Google, which predicted that in the future all applications would run on a server and users would access only the front end (the interface) of those applications directly through their web browsers. As part of this strategy, Google set about developing web applications that mimicked the functionality of classic applications but ran entirely on the web (Gmail, Google Docs, etc.). The big advantage was that users would no longer need to install those applications; all the data would live in the “cloud,” and in the future users might pay for them as a “service.” Google went even further and developed a web browser (Google Chrome) that made those applications look even more like classic ones (better layout, the ability to run offline, and some on-device storage). However, it seemed strange that a person would need a fully powered computer, with many layers of software, just to run a web browser (since all the web-based applications would run inside the browser anyway).

The missing piece of Google’s strategy appeared with the launch of the Chromebook (http://www.google.com/chromebook/). On these computers, the Google Chrome browser assumes the role of the operating system (replacing Windows, for instance), making the system cheaper and faster by eliminating unnecessary hardware and software. The concept is elegant, but the Chromebook project is still far from a success. Most initial reviews classified it as “expensive for what it does” and “a good complement to your laptop.” The fact is that web applications are still far from being as good as their offline counterparts (just compare Google Docs with Microsoft Word), so users still need a conventional computer for their daily activities.

Apple took a different approach. With the launch of the App Store for its mobile devices, Apple found a new way to take advantage of broadband to evolve the classic model of selling and delivering applications. On an iPhone, anyone can go to the App Store, buy an application (usually cheaper than classic apps), and install it instantly, without any pain or external help. It seems like a small difference, but think about how many apps the average user buys for an iPhone versus how many they buy for a PC. The difference is huge. In fact, the idea worked so well that Apple incorporated the same concept into its new OS X Lion for full-size computers (http://www.apple.com/macosx/whats-new/app-store.html). But did Apple, by not investing in web applications, forget the cloud computing vision dreamed up by Google? Apple’s answer is iCloud (http://www.apple.com/icloud/), a platform that, according to Apple, seamlessly integrates the data used by all your apps across all your devices. Apple users can therefore enjoy higher-quality applications (not confined to a browser) alongside the main benefits of web-based applications.

Other companies are moving in the same direction. Microsoft has just released the beta version of its new Windows 8 OS (http://www.youtube.com/watch?v=p92QfWOw88I), which looks a lot like the Windows Phone OS (its operating system for phones), and we can expect it to give users an easier way to install and use applications in the future. Even Google is exploring the same model through its Android OS for smartphones.

Is Apple headed in the right direction, or will classic apps last only until web apps become more advanced?

