Uber in London: The Streisand Effect keeps on giving


With the same overall theme as yesterday: if you’re looking at your future, step one is to work out what your customers would value, and then to work back to the service components needed to deliver it.

I’ve followed Uber since I first discovered them in San Francisco, and it looks like a simple model – to the user. You want to go from where you are to another local destination. You typically see where the closest driver is to you on your smartphone. You ask your handset for a price to a specific destination. It tells you. If you accept, the car is ordered and comes to pick you up. When you get dropped off, your credit card is charged, and both you and the taxi driver get the opportunity to rate each other. Job done.

Behind that facade is a model of supply and demand. Taxi drivers can clock on and off at will. At times of high demand and dwindling ride capacity, prices are ramped up (to “surge” pricing) to encourage more drivers onto the road. Drivers and customers who rack up consistently bad ratings are removed. Drivers are paid well enough to make more money than those in most taxi firms ($80-90,000/year in New York), or can take the freedom to work part time – even down to the level where the reward for a few hours of work per week is that it pays for your car, with free use of it at other times.
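
To make the surge mechanic concrete, here is a toy sketch in Swift – my own illustration, not Uber’s actual algorithm – in which a price multiplier rises as open ride requests outstrip available drivers:

```swift
// A toy illustration (not Uber's real pricing) of the surge mechanic:
// as available drivers dwindle relative to open requests, a price
// multiplier climbs to pull more drivers onto the road.
func surgeMultiplier(openRequests: Int, availableDrivers: Int) -> Double {
    guard availableDrivers > 0 else { return 3.0 }  // cap when no supply at all
    let pressure = Double(openRequests) / Double(availableDrivers)
    return min(max(1.0, pressure), 3.0)  // 1x at balance, capped at 3x here
}

// e.g. 40 requests chasing 10 drivers -> 3x surge pricing
print(surgeMultiplier(openRequests: 40, availableDrivers: 10))
```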

The service is simple and compelling enough that I’d have thought taxi firms would have cottoned on to how it works, and replicated it, before Uber ever appeared on these shores. But, five years having been wasted, Uber has now arrived – and taxi drivers all over Europe decided to run the most effective advertising campaign for an upstart competitor in their history. One-day subscriber growth of 850%; that really takes some doing, even if you were on the same side.

I’m just surprised that whoever called the go-slows all over Europe didn’t take the time to study what we in the tech industry know as “The Streisand Effect” – Wikipedia reference here. BBC Radio 2 even ran a segment on Uber at lunchtime today, followed by every TV news bulletin I’ve heard since. I downloaded the app as a result of hearing that lunchtime slot, as I guess many others did too (albeit with no coverage in my area 50 miles west of London – yet). Given the five years of missed prep time, I think the incumbents have now lost – or at best find themselves in fast-follower mode, scrambling to incorporate similar technology into their service before they suffer a mass exodus to Uber (of customers, then drivers).

London cabbies do know all the practical rat runs that SatNav systems are still learning, but even that advantage is now only a matter of time. I suspect appealing for regulation will, at best, only delay the inevitable.

The safest option – given users love the simplicity and lack of surprises in the service – is to get busy quickly. There is plenty of mobile phone app prototyping help available on the very patch that London black cab drivers serve.

CloudKit – now that’s how to do a secure database for users


One of the big controversies here relates to the appetite of the current UK government to release personal data with only the most basic understanding of what constitutes personally identifiable information. The lessons are there in history, but I fear that, without knowing the context of the infamous AOL data leak, we are destined to repeat them. With such releases goes personal information that we typically hold close to our chests, and which may otherwise cause personal, social or (in the final analysis) financial prejudice.

When plans were first announced to release NHS records to third parties, in the absence of what I considered appropriate controls, I sought (with a heavy heart) to opt out of sharing my medical history with any third party – and instructed my GP accordingly. I’d gladly share everything with satisfactory controls in place (medical research is really important and should be encouraged), but I felt that insufficient care was being exercised. That said, we’re more than happy for my wife’s genome to be stored in the USA by 23andMe – a company that demonstrably satisfied our privacy concerns.

It therefore came as quite a shock to find that a report highlighting which third parties had already been granted access to health data, with government-mandated approval, ran to a total of 459 data releases to 160 organisations (last time I looked, that was 47 pages of PDF). See this and the associated PDFs on that page. Given the level of controls in place, I felt this was outrageous. Likewise the plans to release HMRC-related personal financial data, again with soothing words from ministers who, given the NHS data implications, appear to have no empathy for the gross injustices likely to result from their actions.

The simple fact is that what constitutes individually identifiable information needs to be framed not only by which data fields are shared with a third party, but also by the resulting application of that data by the processing party – not least if there is any suggestion that the data is to be combined with other sources, which could in turn triangulate seemingly “anonymous” records back to a specific individual. Which is precisely what happened in the AOL data leak example cited.

With that, and on a somewhat unrelated technical/programmer-orientated journey, I set out to learn how Apple had architected its new CloudKit API, announced this last week. This gives applications running on your iPhone, iPad or Mac a trusted way of accessing personal data stored (and synchronised between all of a user’s Apple devices) “in the Cloud”.

The central identifier that Apple associates with you, as a customer, is your Apple ID – typically an email address. In the Cloud, they give you access to two databases on their infrastructure: one public, the other private. However, the second you try to create or access a table in either, the API accepts your iCloud identity and spits back a hash unique to the combination of your identity and the application asking to process that data. Different application, different hash. So although everyone’s data sits in the same infrastructure, the design immediately prevents any triangulation of disparate data that could trace back to uniquely identify a single user.
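
To make that concrete, here’s a minimal Swift sketch of the pattern as I understand it: the app opens its container’s public and private databases, and the “user” it gets back is an opaque, per-application record ID rather than the Apple ID itself.

```swift
import CloudKit

// A minimal sketch (not production code) of the pattern described above.
let container = CKContainer.default()
let publicDB = container.publicCloudDatabase    // shared, world-readable data
let privateDB = container.privateCloudDatabase  // this user's data only

// The identity handed back is an opaque record ID scoped to this app's
// container: the same iCloud account yields a *different* ID in another
// app, so records can never be joined across applications.
container.fetchUserRecordID { recordID, error in
    if let recordID = recordID {
        print("Opaque per-app user ID: \(recordID.recordName)")
    } else if let error = error {
        print("Lookup failed: \(error)")
    }
}
```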

Apple take this one stage further: any application that asks for any personally identifiable data (like an email address, age, postcode, etc.) from any table has to have access to that information specifically approved by the handset’s end user. No explicit permission (on a per-application basis), no data.
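
In code, that consent gate looks something like the following sketch, which uses CloudKit’s user-discoverability permission – my reading of the model, not Apple’s sample code:

```swift
import CloudKit

// A sketch of the consent gate: identifying information only flows once
// the device owner has said yes to *this* app; every other app has to
// ask for itself, separately.
CKContainer.default().requestApplicationPermission(.userDiscoverability) { status, error in
    switch status {
    case .granted:
        print("User opted in: this app may look up their identity")
    default:
        print("No permission: the app sees no personal data at all")
    }
}
```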

The data maintained by Apple – personal information, health data (with HealthKit), details of home automation kit in your house (with HomeKit), and not least the credit card details stored to buy music, books and apps – makes full use of this security model. And they’ve dogfooded it, so that third-party application providers use exactly the same model and the same back-end infrastructure. Which is also very, very inexpensive (data volumes go into petabytes before you spend much money).

There are still some nuances I need to work out. I’m used to SQL databases and to some NoSQL database structures (I’m MongoDB certified), but it’s not clear, from looking at the way the database behaves, which engine is being used behind the scenes. It appears to be a key:value store with some garbage-collection mechanics that look like a hybrid file system. It can also store “subscriptions”, so that if specific criteria appear in the data store, messages can be dispatched to the user’s devices over the network automatically. Hence things like new diary appointments can be synced across a user’s iPhone, iPad and Mac transparently, without each device wasting battery power polling a large server-side database for events that are likely to arrive infrequently.
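
A sketch of that subscription mechanic, assuming a made-up “Appointment” record type invented purely for illustration:

```swift
import CloudKit

// A sketch of a CloudKit subscription: the server watches for matching
// records and pushes a notification, so devices never poll for changes.
// "Appointment" is a hypothetical record type, not a real schema.
let subscription = CKQuerySubscription(
    recordType: "Appointment",
    predicate: NSPredicate(value: true),   // match every new record
    subscriptionID: "new-appointments",
    options: .firesOnRecordCreation
)

let info = CKSubscription.NotificationInfo()
info.shouldSendContentAvailable = true     // silent push that wakes the app
subscription.notificationInfo = info

CKContainer.default().privateCloudDatabase.save(subscription) { _, error in
    if error == nil {
        print("Server will push whenever a new Appointment record appears")
    }
}
```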

The final piece of the puzzle I’ve not worked out yet is, if you have a large database already (say of the calories, carbs, protein, fat and weights of thousands of foods in a nutrition database), how you’d get that loaded into an instance of the public database in Apple’s Cloud. Other than writing custom loading code, of course!
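
If I had to write that custom loading code, I’d guess it would look something like the sketch below: map each row to a CKRecord and push batches with CKModifyRecordsOperation. The “Food” record type and its field names are my own invention for illustration, not a real schema.

```swift
import CloudKit

// A hypothetical bulk loader for the nutrition example above.
struct Food {
    let name: String
    let calories, carbs, protein, fat: Double
}

func upload(_ foods: [Food], to database: CKDatabase) {
    let records = foods.map { food -> CKRecord in
        let record = CKRecord(recordType: "Food")
        record["name"] = food.name as NSString
        record["calories"] = food.calories as NSNumber
        record["carbs"] = food.carbs as NSNumber
        record["protein"] = food.protein as NSNumber
        record["fat"] = food.fat as NSNumber
        return record
    }
    // CloudKit caps batch sizes (roughly 400 records per operation),
    // so a real loader would chunk `records` before saving.
    let op = CKModifyRecordsOperation(recordsToSave: records, recordIDsToDelete: nil)
    op.modifyRecordsCompletionBlock = { saved, _, error in
        print(error == nil ? "Loaded \(saved?.count ?? 0) records"
                           : "Failed: \(String(describing: error))")
    }
    database.add(op)
}

// Usage: upload(parsedFoods, to: CKContainer.default().publicCloudDatabase)
```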

That apart, I’m really impressed with how Apple have designed the datastore to ensure the security of users’ personal data, and to prevent triangulation between information stored by different applications. If any personally identifiable data is requested by an application, the handset’s user has to specifically authorise its disclosure, for that application only. And the app can’t even sense whether the data is present at all ahead of that permission: if a health app wants access to your blood sampling data, it can’t tell whether any such data exists before permission is given – so it can’t infer that you probably have diabetes, which would be possible if it could deduce that you were recording glucose readings at all.
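
HealthKit expresses the same principle in its API. Here’s a sketch of what I mean, using the standard blood-glucose sample type (the query and its handling are illustrative assumptions on my part):

```swift
import HealthKit

// A sketch of the consent model described above: the app asks to *read*
// blood glucose, but it can never learn whether the user said no or
// simply has no glucose samples - a denied read and an empty store look
// identical, so no inference of a likely diabetes diagnosis is possible.
let store = HKHealthStore()
let glucose = HKQuantityType.quantityType(forIdentifier: .bloodGlucose)!

store.requestAuthorization(toShare: nil, read: [glucose]) { _, _ in
    let query = HKSampleQuery(sampleType: glucose, predicate: nil,
                              limit: 10, sortDescriptors: nil) { _, samples, _ in
        // Empty result: permission denied OR no readings recorded.
        print("Samples visible to this app: \(samples?.count ?? 0)")
    }
    store.execute(query)
}
```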

In summary: an impressive design, and a model that deserves our total respect. The more difficult job will be to instil the same mindset in the folks looking to release the most personal data that we shared privately with our public sector servants. They owe us nothing less.

Am I the only one shaking my head at US Net Neutrality?


I’ve always had the view that:

  1. ISPs receive a monthly payment for the speed of connection I have to the Internet
  2. Economics are such that I expect this to be effectively uncapped for almost all “normal” use, though the few edge cases of excessive use may be subject to a speed reduction to ration resources for the good of the ISP’s user base as a whole (to avoid a tragedy of the commons)
  3. That a proportion of my monthly costs would track the investments needed to ensure peering equipment and the ISP’s own infrastructure deliver service to me at the capacity needed for (1) and (2), without any discrimination based on the traffic’s source or its content.

Living in Europe, I’ve been listening to lots of commentary in the USA about the proposed merger between Comcast and Time Warner Cable on one hand, and the various ebbs and flows surrounding “Net Neutrality” and the FCC on the other. It’s probably surprising to learn that broadband speeds in the USA are at best mid-table on the world stage, and that Comcast and Time Warner have some of the worst customer satisfaction scores in their respective service areas. There is also the spectacle of widespread industry funding of politicians, and of a far-from-independent chairman of the FCC (the regulator) whose likely next move is back through the revolving door into the very industry he is currently charged with regulating – and from whence he came.

I’ve read “Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age” by Susan Crawford, which logs what happened as the Bell telephone monopoly was deregulated, and the result the US consumer was left with. Mindful of this, an excellent blog post amply demonstrates what happens when the FCC lets go of the steering wheel and refuses to classify Internet provision as subject to “common carrier” status. Dancing around this serves no true political purpose, other than to encourage the receipt of economic rent well in excess of the cost of service provision in areas of mandated exclusivity.

It appears that 5 of the 6 major “last mile” ISPs in the USA (while unnamed, folks on various forums suspect Verizon is the only one not cited) are not investing in equipment at their peering points, leading to the inference that they are double dipping – i.e. asking the source of traffic (like Netflix, YouTube, etc.) to pay transit costs to reach their own “last mile” customers. The equipment costs to correct this are reckoned to be marginal (fractions of a cent per customer served). There is also one European ISP implicated, though comments I’ve seen around the USA suggest this is most likely to be in Germany.

The blog post is by Mark Taylor, an executive of Level 3 (who provide a lot of the long distance bandwidth in the USA). Entitled “Observations of an Internet Middleman”, it is well worth a read here.

I just thank God we’re in Europe, where we have politicians like Neelie Kroes, who works relentlessly, and effectively, to look after the interests of her constituents above all else. With that come a commitment to Net Neutrality, the dropping of roaming charges for mobile telcos, no software patents, and investments consistent with the long-term interests of the EC’s population.

We do have our own challenges in the UK. Some organisations still profit handsomely from scientific research we pay for. We fund efforts by organisations to deliver hammer blows to frustrated consumers rather than encouraging producers to make their content accessible in a timely and cost-effective fashion. And we have one of the worst cases of misdirected campaigning – with no factual basis, playing on media-fanned fear – to promote government-mandated censorship (there are fascinating parallels in US history in “The Men Who Open Your Mail” here; it’ll take around 7 minutes to read), conveniently avoiding the fact that wholesale games of whack-a-mole have demonstrably never worked.

That all said, our problems will probably tend to disappear, be it with the passing of the current government or with longer-term trends in media readership (the Internet-native young rarely read newspapers – largely a preserve of the nett expiring old).

While we have our own problems, I still don’t envy the scale of the task ahead of consumers in the USA in unpicking their current challenges with Internet access. I sincerely hope the right result wins out in the end.