The Next Explosion – the Eyes have it

Crossing the Chasm Diagram

Crossing the Chasm – on one sheet of A4

One of the early lessons you pick up looking at product lifecycles is that some people hold out on buying any new technology product or service longer than anyone else. You make it past the techies, the visionaries, the early majority, the late majority and finally meet the laggards at the very right of the diagram (PDF version here). The normal way of selling at that end of the bell curve is to embed your product in something else; the person who swore they’d never buy a microprocessor unknowingly has one inside the controls on their microwave, or 50-100 ticking away in their car.

In 2016, Google started releasing access to its Vision API. They had already been routinely using their own neural networks for several years; one typical application was taking the video footage from their Google Maps Streetview cars, and correlating house numbers from that footage onto GPS locations within each street. They even started to train their own models to pick out objects in photographs, and to annotate a picture with a description of its contents – without any human interaction. They have also begun an effort to do likewise for the stories contained in hundreds of thousands of YouTube videos.

One example was to ask it to differentiate muffins and dogs:

This it does with aplomb, usually with much better than human performance. So, what’s next?
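By way of illustration, here is a minimal sketch of the kind of call involved – posting an image to the Vision API’s label detection endpoint over REST. The API key and the filename “muffin_or_dog.jpg” are placeholders rather than real values.

```python
# A minimal sketch: ask Google's Vision API to label an image (eg: muffin vs dog).
# "YOUR_API_KEY" and "muffin_or_dog.jpg" are placeholders, not real values.
import base64
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

with open("muffin_or_dog.jpg", "rb") as f:
    image_content = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "requests": [{
        "image": {"content": image_content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

response = requests.post(ENDPOINT, json=payload)
for label in response.json()["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 2))
```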

One notable time in Natural History was the explosion in the number of species on earth that occurred in the Cambrian period, some 534 million years ago. This was the time when life forms appear to have first developed useful eyes, which led to an arms race between predators and prey. Eyes everywhere, and brains very sensitive to signals that come that way; if something or someone looks like they’re staring at you, sirens in your consciousness will be at full volume.

Once a neural network is taught (you show it thousands of images, and tell it which contain what; it then works out a model to fit), the resulting learning can be loaded onto a small device. It usually then needs no further training, nor any connection to a bigger computer or cloud service. It can just sit there, and report back what it sees, when it sees it; the target of the message can be a person or a computer program anywhere else.

While Google have been doing the heavy lifting on building the learning models in the cloud, Apple have slipped in with their own Core ML model format, a sort of PDF for the resulting machine learning models. They then use the Graphics Processing Units on their iPhone and iPad devices to run those models on the user’s device. They also have their ARKit libraries (as in “Augmented Reality”) to sense surfaces and boundaries live through the embedded camera – and to superimpose objects in the field of view.
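To make the “PDF for machine learning models” idea concrete, here is a hedged sketch of how a model trained elsewhere might be converted for on-device use with Apple’s coremltools Python package; the tiny scikit-learn classifier, feature names and file name are purely illustrative assumptions, and a real image model would be far larger.

```python
# A hedged sketch: convert a trained model to Core ML (.mlmodel) so it can run
# locally on the device GPU with no cloud connection. The toy training data,
# feature names and output file name are illustrative assumptions.
from sklearn.linear_model import LogisticRegression
import coremltools

X = [[0.1, 0.2], [0.8, 0.9], [0.2, 0.1], [0.9, 0.7]]   # stand-in feature vectors
y = [0, 1, 0, 1]                                        # stand-in labels
model = LogisticRegression().fit(X, y)

# Convert with coremltools' scikit-learn converter and save the bundle-able file.
coreml_model = coremltools.converters.sklearn.convert(
    model, ["feature_1", "feature_2"], "label"
)
coreml_model.save("TinyClassifier.mlmodel")
```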

With iOS 11 coming in the autumn, any handwritten notes get automatically OCR’d and indexed – and added to local search. When a document on your desk is photographed from an angle, it can automatically flatten it to look like a hi-res scan of the original – which you can then annotate. There are probably many similar features which will be in place by the time the new iPhone models arrive in September/October.

However, this is just the tip of the iceberg. When I drive out of the car park in the local shopping centre here, the barrier raises automatically because the person holding the ticket issued against my car’s number plate has already paid. And I guess we’re going to see a Cambrian explosion as inexpensive “eyes” get embedded in everything around us, at our service.

With that, here’s one example of what Amazon are experimenting with in their “Amazon Go” shop in Seattle. Every visitor a shoplifter: https://youtu.be/NrmMk1Myrxc

Lots more to follow.

PS: as a footnote, an example of drawing a ruler on a real object, just 3 weeks after ARKit was released. Next: personalised shoe and clothes measurements, and mail order supply to size: http://www.madewitharkit.com/post/162250399073/another-ar-measurement-app-demo-this-time

IT Trends into 2017 – or the delusions of Ian Waring

Bowling Ball and Pins

My perception is as follows. I’m also happy to be told I’m mad, or delusional, or both – but here goes. Most of these reflect changes that go well beyond the industry move from CapEx-led investments to OpEx subscriptions of several years past, and indeed the wholesale growth in use of Open Source Software across the industry over the last 10 years. Your own Mileage, or that of your Organisation, May Vary:

  1. if anyone says the words “private cloud”, run for the hills. Or make them watch https://youtu.be/URvWSsAgtJE. There is also an equivalent showing how to build a toaster for $15,000. Being in the business of building your own datacentre infrastructure is now an economic fallacy. My last month’s Amazon AWS bill (where I’ve been developing code – and have a one page site saying what the result will look like) was for 3p. My Digital Ocean server instance (that runs a network of WordPress sites) with 30GB flash storage and more bandwidth than I can shake a stick at, plus backups, is $24/month. Apart from that, all I have is subscriptions to Microsoft, Github and Google for various point services.
  2. Most large IT vendors have approached cloud vendors as “sell to”, and sacrificed their own future by not mapping customer landscapes properly. That’s why OpenStack is painting itself into a small corner of the future market – aimed at enterprises that run their own data centres and pay support costs on a per software instance basis. That’s Banking, Finance and Telco land. Everyone else is on (or headed to) the public cloud, for both economic reasons and because that is “where the experts to manage infrastructure and its security live” at scale.
  3. The War stage of Infrastructure cloud is over. Network effects are consolidating around a small number of large players (AWS, Google Cloud Platform, Microsoft Azure) and more niche players with scale (Digital Ocean among SME developers, SoftLayer among IBM customers of old, Heroku with Salesforce, probably a few hosting providers).
  4. The industry has moved to scale-out open source, NoSQL (key:value, document orientated) databases, and components folks can wire together. Having been brought up on MySQL, it was surprisingly easy to set up a MongoDB cluster with shards (to spread the read load, scaled out based on index key ranges) and to have slave replicas backing data up on the fly across a wide area network (a minimal sketch of enabling sharding follows this list). For wiring up discrete cloud services, the ground is still rough in places (I spent a couple of months trying to get an authentication/login workflow working between a single page JavaScript web app, Amazon Cognito and IAM). As is the case across the cloud industry, the documentation struggles to keep up with the speed of change; developers have to be happy to routinely dip into Github to see how to make things work.
  5. There is a lot of focus on using Containers as a delivery mechanism for scale-out infrastructure, and on management tools to orchestrate their environment: Go, Chef, Jenkins, Kubernetes, none of which I have operational experience with (as I’m building new apps that have fewer dependencies on legacy code and data than most). Continuous Integration and DevOps are often cited in environments where custom code needs to be deployed, with Slack as the ultimate communications tool to warn of regular incoming updates. Having been at one startup for a while, it often reminded me of the sort of military infantry call of “incoming!” from the DevOps team.
  6. There are some laudable efforts to abstract code to be able to run on multiple cloud providers: FOG in the Ruby ecosystem, for one. CloudFoundry (termed BlueMix in IBM) is executing particularly well in large Enterprises with investments in Java code. Amazon are trying pretty hard to make their partners use functionality only available on AWS, in a traditional lock-in strategy (to avoid their services becoming a price-led commodity).
  7. The bleeding edge is currently “Function as a Service”, “Backend as a Service” or “Serverless apps”, typified by Amazon Lambda. There are actually two different entities in the mix; one where you provide code and pay per invocation against external events, the other being able to scale (or contract) a service in real time as demand flexes. You abstract all knowledge of the environment away.
  8. Google, Azure and to a lesser extent AWS are packaging up API calls for various core services and machine learning facilities. Eg: I can call Google’s Vision API with a JPEG image file, and it can give me the location of every face (top of nose) on the picture, face bounds, and whether each is smiling or not. Another call can describe what’s in the picture. There’s also a link into machine learning training to say “does this picture show a cookie” or “extract the invoice number off this image of a picture of an invoice”. There is an excellent 35 minute discussion on the evolving API landscape (including the 8 stages of the API lifecycle, the need for honeypots to offset an emergent security threat and an insight into one impressive Uber API) on a recent edition of the Google Cloud Platform Podcast: see http://feedproxy.google.com/~r/GcpPodcast/~3/LiXCEub0LFo/
  9. Microsoft and Google (with PowerApps and App Maker respectively) are trying to remove the queue of IT requests for small custom business apps based on company data – though so far, only on internal intranet type apps, not ones exposed outside the organisation. This is also the antithesis of the desire for “big data”, which is really the domain of folks with massive data sets and the emergent “Internet of Things” sensor networks – where cloud vendor efforts on machine learning APIs can provide real business value. But for a lot of commercial organisations, getting data consolidated into a “single version of the truth” and accessible to the folks who need it day to day is where PowerApps and App Maker can really help.
  10. Mobile apps are currently dogged by “winner take all” app stores, with a typical user using 5 apps for almost all of their mobile activity. With new enhancements added by all the major browser manufacturers, web components will finally come to the fore for mobile app delivery (not least as they have all the benefits of the web and all of those of mobile apps – off a single code base). Look to hear a lot more about Polymer in the coming months (which I’m using for my own app in conjunction with Google Firebase – to develop a compelling Progressive Web app). For an introduction, see: https://www.youtube.com/watch?v=VBbejeKHrjg
  11. Overall, the thing most large vendors and SIs have missed is to map their customer needs against available project components: to map user needs against axes of product life cycle and value chains – and to suss the likely movement of components (which also tells you where to apply Six Sigma and where agile techniques within the same organisation). This is more eloquently explained by Simon Wardley: https://youtu.be/Ty6pOVEc3bA
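As flagged in point 4, here is a minimal sketch of the sharding setup described there, using pymongo against a mongos router; the host, database name (“appdata”), collection (“events”) and shard key (“user_id”) are all illustrative assumptions.

```python
# A minimal sketch (hosts and names are assumptions): enable sharding so reads
# and writes spread across shards by ranges of the chosen index key.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # connect to the mongos router

# Shard the database and collection on "user_id"; documents are then split into
# chunks by ranges of that key and spread across the shards (each of which can
# be a replica set keeping slave copies in step across a wide area network).
client.admin.command("enableSharding", "appdata")
client.admin.command("shardCollection", "appdata.events", key={"user_id": 1})

db = client.appdata
db.events.insert_one({"user_id": 42, "action": "login"})
print(db.events.find_one({"user_id": 42}))
```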

There are quite a range of “end of 2016” surveys I’ve seen that reflect quite a few of these trends, albeit from different perspectives (even one that mentioned the end of Java as a legacy language). You can also add overlays with security challenges and trends. But – what have I missed, or what have I got wrong? I’d love to know your views.

Future Health: DNA is one thing, but 90% of you is not you


One of my pet hates is seeing my wife visit the doctor, getting hunches of what may be afflicting her health, which leads to a succession of “oh, that didn’t work – try this instead” visits for several weeks. I just wonder how much cost could be squeezed out of the process – and how many secondary conditions avoided – if the root causes were much easier to identify reliably. I then wonder if there is a process to achieve that, especially in the context of new sensors coming to market and their connectivity to databases via mobile phone handsets – or indeed WiFi enabled, low end Bluetooth sensor hubs aka the Apple Watch.

I’ve personally kept a record of what I’ve eaten, down to fat, protein and carb content (plus my Monday 7am weight and daily calorie intake) every day since June 2002 – a precursor to the future where devices can keep track of a wide variety of health signals, feeding a trend (in conjunction with “big data” and “machine learning” analyses) toward self service health. My Apple Watch has a year’s worth of heart rate data. But what other signals, if available, would be far more compelling for identifying the root causes of a much wider variety of (lack of) health conditions?

There is currently a lot of focus on Genetics, where the Human Genome can betray many characteristics or pre-dispositions to some health conditions that are inherited. My wife Jane got a complete 23andMe statistical assessment several years ago, and has also been tested for the BRCA2 (pronounced ‘bracca-2’) gene – a marker for inherited pre-disposition to risk of Breast Cancer – which she fortunately did not inherit from her afflicted father.

A lot of effort is underway to collect and sequence the complete Genome sequences from the DNA of hundreds of thousands of people, building them into a significant “Open Data” asset for ongoing research. One gotcha is that such data is being collected by numerous organisations around the world, and the size of each individual’s DNA (assuming one byte for each nucleotide component – A/T or C/G combinations) runs to 3GB of base pairs. You can’t do research by throwing an SQL query (let alone thousands of machine learning attempts) over that data when samples are stored in many different organisations’ databases, hence the existence of an API (courtesy of the GA4GH Data Working Group) to permit distributed queries between co-operating research organisations. It is notable that there are Amazon Web Services and Google employees participating in this effort.
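To illustrate what a distributed query might look like, here is a heavily hedged sketch in the spirit of the GA4GH search APIs; the endpoints, paths and field names are assumptions for illustration rather than the exact schema, and the chromosome 13 co-ordinates are only an approximation of the BRCA2 region.

```python
# A hedged sketch of fanning one variant search out to several co-operating
# organisations' GA4GH-style endpoints, rather than centralising everyone's 3GB
# of DNA in one database. URLs, paths and field names are illustrative assumptions.
import requests

ORG_ENDPOINTS = [
    "https://genomics.example-university.edu/ga4gh",   # hypothetical participant A
    "https://genomics.example-institute.org/ga4gh",    # hypothetical participant B
]

query = {
    "referenceName": "13",      # chromosome 13 (home of BRCA2)
    "start": 32315000,          # approximate region boundaries only
    "end": 32400000,
    "pageSize": 100,
}

for base in ORG_ENDPOINTS:
    resp = requests.post(base + "/variants/search", json=query)
    variants = resp.json().get("variants", [])
    print(base, "returned", len(variants), "variants in the BRCA2 region")
```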

However, I wonder if we’re missing a big and potentially just as important data asset; that of the profile of bacteria that everyone is dependent on. We are each home to approx. 10 trillion human cells among the 100 trillion microbial cells in and on our own bodies; you are 90% not you.

While our human DNA is 99.9% identical to any person next to us, the profile of our MicroBiome is typically only 10% similar; our age, diet, genetics, physiology and use of antibiotics are all heavy influencing factors. Our DNA is our blueprint; the profile of the bacteria we carry is an ever changing set of weather conditions that either influence our health – or are leading indicators of something being wrong – or both. Far from being inert passengers, these little organisms play essential roles in the most fundamental processes of our lives, including digestion, immune responses and even behaviour.

Different MicroBiome ecosystems are present in different areas of our body – our skin, mouth, stomach, intestines and genitals; most promise is currently derived from the analysis of stool samples. Further, our gut is second only to our brain in the number of nerve endings present, many of them able to enact activity independently from decisions upstairs. In other areas, there are very active hotlines between the two nerve cities.

Research is emerging that suggests previously unknown links between our microbes and numerous diseases, including obesity, arthritis, autism, depression and a litany of auto-immune conditions. Everyone knows someone who eats like a horse but stays thin; the composition of microbes in their gut is a significant factor.

Meanwhile, the costs of DNA sequencing and compute power have dropped to a level where analysis of our microbe ecosystems has fallen from some $100M a decade ago to around $100 today. It should continue on that downward path to a level where regular personal sampling could become available to all – if access to the needed sequencing equipment plus compute resources were more accessible and had much shorter total turnaround times. Not least to provide a rich Open Data corpus of samples that we can use for research purposes (and to feed back discoveries to the folks providing samples). So, what’s stopping us?

Data Corpus for Research Projects

To date, significant resources are being expended on Human DNA Genetics and comparatively little on MicroBiome ecosystems; the largest research projects are custom built and have sampling populations of less than 4000 individuals. This results in insufficient population sizes and sample frequency on which to easily and quickly conduct wholesale analyses – the analyses needed to understand the components of health afflictions, changes to the mix over time, and to isolate root causes.

There are open data efforts underway with the American Gut Project (based out of the Knight Lab at the University of California San Diego) plus a feeder “British Gut Project” (involving Tim Spector and staff at King’s College London). The main gotcha is that the service is one-shot and takes several months to turn around. My own sample, submitted in January, may take up to 6 months to work through their sequencing then compute batch process.

In parallel, the VC funded company uBiome provide the sampling with a 6-8 week turnaround (at least for the gut samples; slower for the other 4 area samples we’ve submitted), though they are currently not sharing the captured data to the best of my knowledge. That said, the analysis gives an indication of the names, types and quantities of bacteria present (with a league table of those over and under represented compared to all samples they’ve received to date), but they do not currently communicate any health related findings.

My own uBiome measures suggest my gut ecosystem is more diverse than 83% of folks they’ve sampled to date, which is a proxy for being healthier than most; those bacteria that are over represented – one up to 67x more than is usual – are of the type that orally administered probiotics attempt to get to your gut. So a life of avoiding antibiotics whenever possible appears to have helped me.

However, the gut ecosystem can flex quite dramatically. As an example, see what happened when one person contracted Salmonella over a three day period (the green in the top of this picture; the x-axis is days); you can see an aggressive killing spree where 30% of the gut bacteria population are displaced, followed by a gradual fight back to normality:

Salmonella affecting MicroBiome Population

Under usual circumstances, the US/UK Gut Projects and indeed uBiome take a single measure and report back many weeks later. The only extra feature that may be deduced is the delta between counts of genome start and end sequences, as this will give an indication of the relative species population growth rates from otherwise static data.

I am not aware of anyone offering a faster turnaround service, nor one that can map several successive, time gapped samples, let alone one that can convey the health afflictions that can be deduced from the mix – or indeed from progressive weather patterns – based on the profile of bacteria populations found.

My questions include:

  1. Is there demand for a fast turnaround, wholesale profile of a bacterial population to assist medical professionals isolating the indicators – or the root cause – of ill health with impressive accuracy?
  2. How useful would a large corpus of bacterial “open data” be to research teams, to support their own analysis hunches and indeed to provide enough data to make use of machine learning inferences? Could we routinely take samples donated by patients or hospitals to incorporate into this research corpus? Do we need the extensive questionnaires that the various Gut Projects and uBiome issue to be completed alongside every sample?
  3. What are the steps in the analysis pipeline that are slowing the end to end process? Does increased sample size (beyond a small stain on a cotton bud) remove the need to enhance/copy the sample, with its associated need for nitrogen-based lab environments (many types of bacteria are happy as Larry in the nitrogen of the gut, but perish with exposure to oxygen)?
  4. Is there any work active to make the QIIME (pronounced “Chime”) pattern matching code take advantage of cloud spot instances, including Hadoop or Spark, to speed the turnaround time from sequencing reads to the resulting species type:volume value pairs?
  5. What’s the most effective delivery mechanism for providing “Open Data” exposure to researchers, while retaining the privacy (protection from financial or reputational prejudice) for those providing samples?
  6. How do we feed research discoveries back (in English) to the folks who’ve provided samples and their associated medical professionals?

New Generation Sequencing works by splitting DNA/RNA strands into relatively short read lengths, which then need to be reassembled against known patterns. Taking a poop sample which contains thousands of different bacteria is akin to throwing the pieces of many thousands of puzzles into one pile and then having to reconstruct them – and count the number of each. As an illustration, a single HiSeq run may generate up to 6 x 10^9 sequences; these then need reassembling and the count of 16S rDNA type:quantity value pairs deduced. I’ve seen estimates of six thousand CPU hours to do the associated analysis to end up with statistically valid type and count pairs. This is a possible use case for otherwise unused spot instance capacity at large cloud vendors, if the data volumes could be ingested and processed cost effectively.
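As a sketch of what “otherwise unused spot instance capacity” might look like in practice, here is a hedged example of bidding for EC2 spot instances with boto3 to fan the reassembly and counting batch across many nodes; the AMI id, instance type, bid price and key pair name are all illustrative assumptions, and the actual pipeline (QIIME or similar) would have to be baked into the image or pulled down at boot.

```python
# A hedged sketch: bid for spare EC2 capacity to run the 16S reassembly/counting
# batch. AMI id, instance type, bid and key name are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.50",            # maximum hourly bid in USD (assumption)
    InstanceCount=20,            # fan the ~6,000 CPU-hour job across many nodes
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI with the pipeline installed
        "InstanceType": "c4.8xlarge",
        "KeyName": "sequencing-batch",        # hypothetical key pair
    },
)

for req in response["SpotInstanceRequests"]:
    print("Spot request", req["SpotInstanceRequestId"], "state:", req["State"])
```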

Nanopore sequencing is another route, which has much longer read lengths but is much more error prone (1% for NGS, typically up to 30% for portable Nanopore devices), which probably limits its utility for analysing bacteria samples in our use case. It is much more useful if you’re testing for particular types of RNA or DNA, rather than the wholesale profiling exercise we need. Hence for the time being, we’re reliant on trying to make an industrial scale, lab based batch process turn around data as fast as we are able – but having a network accessible data corpus and research findings feedback process in place if and when sampling technology gets to be low cost and distributed to the point of use.

The elephant in the room is working out how to fund the build of the service, to map its likely cost profile as technology/process improvements feed through, and to know to what extent its diagnosis of health root causes will improve its commercial attractiveness as a paid service over time. That is what I’m trying to assess while on the bench between work contracts.

Other approaches

Nature has its way of providing short cuts. Dogs have been trained to be amazingly prescient at assessing whether someone has Parkinson’s just by smelling their skin. There are other techniques where a pocket sized spectrometer can assess the existence of 23 specific health disorders. There may well be other techniques that come to market that don’t require a thorough picture of a bacterial population profile to give medical professionals the identity of the root causes of someone’s ill health. That said, a thorough analysis may at least be of utility to the research community, even if we get to only eliminate ever rarer edge cases as we go.

Coming full circle

One thing that’s become eerily apparent to date is some of the common terminology between MicroBiome conditions and terms I once heard used in Chinese Herbal Medicine (my wife’s psoriasis was cured after seeing a practitioner in Newbury for several weeks nearly 20 years ago): the concept of “balance” and the existence of “heat” (betraying the inflammation as your bacterial population of different species ebbs and flows in reaction to different conditions), then the consumption or application of specific plant matter that puts the body’s bacterial population back to operating norms.

Lingzhi Mushroom

Wild mushroom “Lingzhi” in China: cultivated in the far east, found to reduce Obesity

We’ve started to discover that some of the plants and herbs used in Chinese Medicine do have symbiotic effects on your bacterial population for the conditions they are reckoned to help cure. With that, we are starting to see some statistically valid evidence that Chinese and Western medicine may well meet in the future, and be part of the same process in our future health management.

Until then, still work to do on the business plan.

Mobile Phone User Interfaces and Chinese Genius

Most of my interactions with the online world use my iPhone 6S Plus, Apple Watch, iPad Pro or MacBook – but with one eye on the next big things from the US West Coast. The current Venture Capital fads are Conversational Bots, Virtual Reality and Augmented Reality. I bought a Google Cardboard kit for my grandson to have a first glimpse of VR on his iPhone 5C, though spent most of the time trying to work out why his handset was too full to install any of the Cardboard demo apps; 8GB, 2 apps, 20 songs and a storage list that only added up to 5GB of use. Hence having to borrow his Dad’s iPhone 6 while we tried to sort out what was eating up 3GB. Very impressive nonetheless.


The one device I’m waiting to buy is an Amazon Echo (currently USA only). It’s a speaker with six directional microphones, an Internet connection and some voice control smarts; these are extendable by use of an application programming interface and database residing in their US East Datacentre. Out of the box, you can ask its nom de plume “Alexa” to play a music single, album or wish list. To read back an audio book from where you last left off. To add an item to a shopping or to-do list. To ask about the local outside weather over the next 24 hours. And so on.

Its real beauty is that you can define your own voice keywords into what Amazon term a “Skill”, and provide your own plumbing to your own applications using what Amazon term their “Alexa Skill Kit”, aka “ASK”. There is already one UK Bank that has prototyped a Skill for the device to let their users enquire about their bank balance, primarily as an assist to the visually impaired. There are more in the USA to control home lighting and heating by voice (and I guess it is very simple to give commands to change TV channels or to record for later viewing). The only missing bit is that of identity; the person speaking can be anyone in proximity to the device, or indeed any device emitting sound in the room; a radio presenter saying “Alexa – turn the heating up to full power” would not be appreciated by most listeners.
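To make the Skill idea concrete, here is a minimal sketch of the plumbing behind one – an AWS Lambda handler that answers a hypothetical “GetBalanceIntent” with a canned reply. The intent name and response text are illustrative assumptions, while the JSON envelope follows the Alexa Skills Kit response format.

```python
# A minimal sketch of an Alexa Skill backend as an AWS Lambda function.
# "GetBalanceIntent" and the canned reply are illustrative assumptions.
def lambda_handler(event, context):
    request = event.get("request", {})

    if request.get("type") == "IntentRequest" and \
       request.get("intent", {}).get("name") == "GetBalanceIntent":
        speech = "Your current balance is one hundred and twenty three pounds."
    else:
        speech = "Sorry, I didn't understand that."

    # Standard ASK response envelope: spoken text, then end the session.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```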

For further details on Amazon Echo and Alexa, see this post.

However, the mind wanders over to my mobile phone, and the disjointed experience it exposes to me when I’m trying to accomplish various tasks end to end. Data is stored in application silos. Enterprise apps quite often stop at a Citrix client turning your pocket supercomputer into a dumb (but secured) Windows terminal, where the UI turns into the usual Enterprise app silo soup to navigate.

Some simple client-side workflows can be managed by software like IFTTT – aka “IF This, Then That” – so I can get a new photo automatically posted to Facebook or Instagram, or notifications issued to me when an external event occurs. But there is nothing that integrates a complete buying experience. The current fad for conversational bots still falls well short; imagine the workflow of asking Alexa to order some flowers, as there are no visual cues to help that discussion and buying experience along.

For that, we’d really need to follow one of the Jeff Bezos edicts – wiping the slate clean, imagining the best experience from a user perspective and working back. But the lessons have already been learnt in China, where desktop apps weren’t a step on the evolutionary path to mobile deployments in society. An article that runs deep on this – and what folks can achieve within WeChat in China – is impressive. See: http://dangrover.com/blog/2016/04/20/bots-wont-replace-apps.html

I wonder if Android or iOS – with the appropriate enterprise APIs – could move our experience on mobile handsets to a similar next level of compelling personal servant. I hope the Advanced Development teams at both Apple and Google – or a startup – are already prototyping such a revolutionary, notifications baked in, mobile user interface.

New Mobile Phone or Tablet? Do this now:

Find My iPhone - Real Map

If you have an iPhone or iPad, install “Find My iPhone”. If you have an Android phone or tablet, install “Android Device Manager”. Both are free of charge, and will prevent you looking like a dunce on social media if your device gets lost or stolen. Instead, you can get your phone’s (or tablet’s) current location like that above – from any Internet connection.

If your device does go missing, just log in to iCloud or Android Device Manager on the web, and voila – it will draw its location on a map – and allow various options (like putting a message on the screen, turning it into a remote speaker that the volume control can’t mute, or wiping the device).

Phone lost in undergrowth and the battery about to die? Android phones will routinely bleat their location to the cloud before all power is lost, so ADM can still remember where you should look.

So, how does a modern smartphone work out where you are? For the engineering marvel that is the Apple iPhone, it sort of works like this:

  1. If you’re in the middle of an open field with the horizon visible in all directions, your handset will be able to pick up signals from up to 14 Global Positioning System (GPS) satellites. If it sees at least 3 of them (with the remainder obscured by buildings, structures or your car roof, etc), it can work out your x and y co-ordinates to within 3 meters – worldwide. If it can see at least 4 of the 14 satellites, then it can work out your elevation above sea level too.
  2. Your phone will typically be communicating its presence to a local cell tower. Your handset knows the approximate location of these, albeit in distances measured in kilometers or miles. Its primary use is to suss which worldwide time zone you are in; that’s why your iPhone sets itself to the correct local time when you switch on your handset at an airport after your flight lands.
  3. Your phone will sense the presence of WiFi routers and reference a database that associates each router’s unique Ethernet address with the location where it is consistently found (by other handsets, or by previous data collection when building online street view maps). Such signals normally reach within a 100-200 meter range. This range is constrained because WiFi usually uses the 2.4GHz band, which is the frequency at which a microwave oven agitates and heats water; the fact the signal suffers badly in rain is why it was primarily intended for use inside buildings.

A combination of the above is sensed and combined to drill down to your phone’s timezone, to its location within a mobile phone cell area (which can be a few hundred yards across in densely populated areas, or miles in large rural areas or open countryside), to being close to a specific WiFi router, or (all else being well) your exact GPS location to within 10 feet or so.
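As a toy illustration of that drill-down (and emphatically not Apple’s or Google’s actual sensor-fusion logic), the sketch below simply falls back from the most precise source available to the least.

```python
# A toy sketch of the fallback described above: use the most precise location
# source currently available. Accuracy figures are rough, illustrative values.
from typing import Optional, Tuple

Fix = Optional[Tuple[float, float]]   # (latitude, longitude) or None

def best_fix(gps: Fix, wifi: Fix, cell: Fix):
    """Return (lat, lon, approx_accuracy_in_meters) from the best source seen."""
    if gps:
        return (*gps, 3)        # good GPS fix: a few meters
    if wifi:
        return (*wifi, 150)     # known WiFi router: ~100-200 meters
    if cell:
        return (*cell, 5000)    # cell tower only: kilometers / timezone level
    return None                 # no fix at all

# Example: indoors with no GPS, but a recognised WiFi router nearby.
print(best_fix(gps=None, wifi=(51.454, -1.278), cell=(51.455, -1.280)))
```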

A couple of extra capabilities feature on the latest iPhone and Android handsets to extend location coverage to areas in large buildings and shopping centres, where the ability for a handset to see any GPS satellites is severely constrained or absent altogether.

  • One is Low Energy Bluetooth Beacons. Your phone can sense the presence of nearby beacons (or, at your option, be one itself); each broadcasts a numeric identifier, one half normally associated with a particular retail organisation and the other half unique to each beacon unit (it is up to the organisation to map each beacon’s location and associated attributes – like “this is the Perfume Department Retail Sale Counter on Floor 2 of the Reading Department Store”). An application can tell whether it can sense the signal at all, whether you’re within 10′ of the beacon, or whether the handset is immediately adjacent to the beacon (eg: a handset being held against a till).

You’ll notice that there is no central database of bluetooth beacon locations and associated positions and attributes. The handset manufacturers are relatively paranoid that they don’t want a handset user being spammed incessantly as they walk past a street of retail outlets; hence, you must typically opt into the app of specific retailers to get notifications at all, and to be able to switch them off if they abuse your trust.

  • Another feature of most modern smartphone handsets is the presence of miniature gyroscopes, accelerometers and magnetic sensors in every device. Hence the ability to know how the phone is positioned in both magnetic compass direction and its orientation in 3D space at all times. It can also sense your speed from the force and direction of your movements. Hence even in an area or building with no GPS signal, your handset can fairly accurately suss your position from the last moment it had a quality location fix on you, augmented by the directions and speeds you’ve followed since. An example of history recorded around a typical shopping centre can look like this:

Typically, apps don’t lock onto your positioning full time; users know how their phone batteries tend to drain much faster when their handsets are used with all sensors running full time in a typical app like Google Maps (in Navigation mode) or Waze. Instead, they tend to fill a location history, so a user can retrieve their own historical movement history or places they’ve recently visited. I don’t know of any app that uses this data, but I know that in Apple’s case, you’d have to give specific permission to an app to use such data with your blessing (or it would get no access to it at all). So, mainly for future potential use.

As for other location apps – Apple Passbook is already throwing my Starbucks card onto my iPhone’s lock screen when I’m close to a Starbucks location, and likewise my boarding card at a Virgin Atlantic Check-in Desk. I also have another app (Glympse) that messages my current map location, speed and eta (continuously updated) to any person I choose to share that journey with – normally my wife when on the train home, or my boss if affected by travel delays. But I am sure there is more to come.

In the meantime, I hope people just install “Find my iPhone” or “Android Device Manager” on any phone handset they buy or use. They both make life less complicated if your phone or tablet ever goes missing. And you don’t get to look like a dunce for not taking the precautions up front that any rational thinking person should take.

Another lucid flurry of Apple thinking it through – unlike everyone else

Apple Watch Home Screen

This happens every time Apple announce a new product category. Audience reaction, and the press, rush off to praise or condemn the new product without standing back and joining the dots. The Kevin Lynch presentation at the Keynote also didn’t have a short video on-ramp as a precursor to help people understand the full impact of what they were being told. With that, the full impact is a little hidden. It’s a lot more than having Facebook, Twitter, Email and notifications on your wrist when you have your phone handset in your pocket.

There were a lot of folks focussing on its looks and comparisons to the likely future of the Swiss watch industry. For me, the most balanced summary of the luxury aesthetics from someone who’s immersed in that industry can be found at: http://www.hodinkee.com/blog/hodinkee-apple-watch-review

Having re-watched the keynote, and seen all the lame Android Wear, Samsung, LG and Moto 360 comparisons, there are three examples that explode almost all of the “meh” reactions in my view. The story is hidden by what’s on that S1 circuit board inside the watch, and the limited number of admissions of what it can already do. Three scenarios:

1. Returning home at the end of a working day (a lot of people do this).

The first thing I do after I come indoors is to place my mobile phone on top of the cookery books in our kitchen. Then for the next few hours I’m usually elsewhere in the house or in the garden. Asking around, that behaviour is typical. Not least as it happens in the office too, where if I’m in a meeting, I’d normally leave my handset on silent on my desk.

With every Android or Tizen Smart Watch I know, the watch loses the connection as soon as I go out of Bluetooth range – around 6-10 meters away from the handset. That smart watch is a timepiece from that point on.

Now, who forgot to notice that the Apple Watch has got b/g WiFi integrated on its S1 module? Or that it can not only tell me of an incoming call, but allow me to answer it, listen and talk – and indeed hand control back to my phone handset when I return to its proximity?

2. Sensors

There are a plethora of Low Energy Bluetooth sensors around – and being introduced with great regularity – for virtually every bodily function you can think of. Besides putting your own fitness tracking sensors on at home, there are probably many more that can be used in a hospital setting. With that, a person could be quite a walking network of sensors and wander to different wards or labs during their day, or indeed even be released to recuperate at home.

Apple already has some sensors (heart rate, and probably some more capabilities to be announced in time, using the infrared related ones on the skin side of the Apple Watch), but the watch can act as a hub to any collection of external bluetooth sensors at the same time – or even to smart pills you can swallow. Low Energy Bluetooth is already there on the Apple Watch. That, in combination with the processing power, storage and b/g WiFi, makes the watch a complete devices hub, virtually out of the box.

If your iPhone is on the same WiFi, everything syncs up with the Health app there and the iCloud based database already – which you can (at your option) permit an external third party to have access to. Now, tell me about the equivalent on any other device or service you can think of.

3. Paying for things.

The iPhone 5S, 6 and 6 Plus all have integrated finger print scanners. Apple have put some functionality into iOS 8 where, if you’re within Bluetooth range (6-10 meters of your handset), you can authenticate (with your fingerprint) the fact your watch is already on your wrist. If the sensors on the back have any suspicion that the watch leaves your wrist, it immediately invalidates the authentication.

So, walk up to a contactless till, see the payment amount appear on the watch display, one press of the watch pays the bill. Done. Now try to do that with any other device you know.

Developers, developers, developers.

There are probably a million other applications that developers will think of, once folks realise there is a full UNIX computer on that SoC (System on a Chip). With WiFi. With Bluetooth. With a Taptic feedback mechanism that feels like someone is tapping your wrist (not loudly vibrating across the table, or flashing LED lights at you). With a GPU driving a high quality, touch sensitive display. Able to not only act as a remote control for your iTunes music collection on another device, but to play it locally when untethered too (you can always add bluetooth earbuds to keep your listening private). I suspect some of the capabilities Apple have shown (like the ability to stream your heartbeat to another Apple Watch user) will evolve into potential remote health visit applications that can work Internet wide.

Meanwhile, the tech press and the discussion boards are full of people lamenting the fact that there is no GPS sensor in the watch itself (like every other Smart Watch I should add – GPS location sensing is something that eats battery power for breakfast; better to rely on what’s in the phone handset, or to wear a dedicated bluetooth GPS band on the other wrist if you really need it).

Don’t be distracted; with the electronics already in the device, the Apple Watch is truly only the beginning. We’re now waiting for the full details of the WatchKit APIs to unleash that ecosystem with full force.

Yo! Minimalist Notifications, API and the Internet of Things

Yo Logo

I thought it was a joke, but 4 hours of code resulting in $1m of VC funding, at an estimated $10M company valuation, raised quite a few eyebrows. The Yo! project team have now released their API, and with it some possibilities – over and above the initial ability to just say “Yo!” to a friend. At the time he provided some of the funds, John Borthwick of Betaworks said that there is a future of delivering binary status updates, or even commands to objects to throw an on/off switch remotely (blog post here). The first green shoots are now appearing.

The main enhancement is the ability to carry a payload with the Yo!, such as a URL. Hence your Yo!, when received, can be used to invoke an application or web page with a bookmark already put in place. That facilitates a notification, which is effectively guaranteed to have arrived, to say “look at this”. Probably extensible to all sorts of other tasks.

The other big change is the provision of an API, which allows anyone to create a Yo! list of people to notify against a defined name. So, in theory, I could create a virtual user called “IANWARING-SIMPLICITY-SELLS”, and publicise that to my blog audience. If any user wants to subscribe, they just send a “Yo!” to that user, and bingo, they are subscribed and it is listed (as another contact) on their phone handset. If I then release a new blog post, I can use a couple of lines of Javascript or PHP to send the notification to the whole subscriber base, carrying the URL of the new post; one key press to view. If anyone wants to unsubscribe, they just drop the username on their handset, and the subscriber list updates.
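As a sketch of those “couple of lines” (shown here in Python rather than JavaScript or PHP), the call below would Yo every subscriber of the account with a link to the new post; the endpoint and field names follow my reading of Yo’s developer API at the time and should be treated as assumptions, as should the token and URL placeholders.

```python
# A hedged sketch: Yo all subscribers of an account, carrying the new post's URL.
# Endpoint and parameter names are assumptions based on Yo's developer docs of
# the time; the token and URL are placeholders.
import requests

YO_API_TOKEN = "YOUR_YO_API_TOKEN"             # token issued by Yo for the account
NEW_POST_URL = "https://example.com/new-post"  # hypothetical new blog post URL

resp = requests.post(
    "https://api.justyo.co/yoall/",
    data={"api_token": YO_API_TOKEN, "link": NEW_POST_URL},
)
print(resp.status_code, resp.text)   # each subscriber gets one Yo carrying the link
```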

Other applications described include:

  • Getting a Yo! when a FedEx package is on its way
  • Getting a Yo! when your favourite sports team scores – “Yo us at ASTONVILLA and we’ll Yo when we score a goal!”
  • Getting a Yo! when someone famous you follow tweets or posts to Instagram
  • Breaking News from a trusted source
  • Tell me when this product comes into stock at my local retailer
  • To see if there are rental bicycles available near to you (it can Yo! you back)
  • You receive a payment on PayPal
  • To be told when it starts raining in a specific town
  • Your stock positions go up or down by a specific percentage
  • Tell me when my wife arrives safely at work, or our kids at their travel destination

but I guess there are other “Internet of Things” applications to switch on home lights, open garage doors, switch on (or turn off) the oven. Or to Yo! you if your front door has opened unexpectedly (carrying a link to the picture of who’s there?). Simple one click subscriptions. So, an extra way to operate Apple HomeKit (which today controls home appliance networks only through Siri voice control).

Early users are showing simple RESTful URLs and HTTP GET/POSTs to trigger events via the Yo! API. I’ve also seen someone say that it will work with CoAP (the Constrained Application Protocol), a lightweight protocol stack suitable for use within simple electronic devices.

Hence, notifications that are implemented easily and over which you have total control. Something Apple appear to be anal about, particularly in a future world where you’ll be walking past low energy bluetooth beacons in retail settings every few yards. Your appetite to be handed notifications will degrade quickly with volumes if there are virtual attention beggars every few paces. Apple have been locking down access to their iBeacon licensees to limit the chance of this happening.

With the Yo! API, we have the first of many notification services (alongside Google Now, and Apple’s own notification services), and a simple one at that. One that can be mixed with IFTTT (if this, then that), a simple web based logic and task action system also produced by Betaworks. And one which may well be accessible directly from embedded electronics around us.

The one remaining puzzle is how the authors will be able to monetise their work (their main asset is an idea of the type and frequency of notifications you welcome receiving, and that you seek). Still a bit short of Google’s core business (which historically was to monetise purchase intentions) at this stage in Yo!’s development. So, suggestions in the case of Yo! most welcome.

 

Apple iWatch: Watch, Fashion, Sensors or all three?

iWatch Concept Guess

Late last year there was an excellent 60 minute episode of the Cubed.fm Podcast by Benedict Evans and Ben Bajarin, with guest Bill Geiser, CEO of Metawatch. Bill had been working on Smart watches for over 20 years, starting with wearables to measure his swimming activity, spending over 8 years running Fossil‘s Watch Technology Division, before buying out that division to start Metawatch. He has also consulted for Sony in the design and manufacture of their Smart watches, for Microsoft SPOT technology and for Palm on their watch efforts. The Podcast is a really fascinating background on the history and likely future directions of this (widely believed to be) nascent industry: listen here.

Following that podcast, I’ve always listened carefully to the ebbs and flows of likely smart watch releases from Google, and from Apple (largely to see how they’d build further on the great work by Pebble). Apple duly started registering the iWatch trademark in several countries (nominally in classes 9 and 14, representative of jewelry, precious metal and watch devices). There was a flurry of patent applications from Apple in January 2014 covering Liquid Metal and Sapphire materials, which included references to potential wrist-based devices.

There have also been a steady stream of rumours that an Apple watch product would likely include sensors that could pair with health related applications (over low energy bluetooth) to the users iPhone.

Apple duly recruited Angela Ahrendts, previously CEO of Burberry, to head up Apple’s Retail Operations. That was shortly followed by Nike Fuelband Consultant Jay Blahnik and several Medical technology hires. Nike (where Apple CEO Tim Cook is a Director) laid off its Fuelband hardware team, citing a future focus on software only. And just this weekend, it was announced that Apple had recruited the TAG Heuer Watches VP of Sales (here).

That article on the Verge had a video of an interview from CNBC with Jean-Claude Biver, who is Head of Watch brands for LVMH – including Louis Vuitton, Hennessy and TAG Heuer. The bizarre thing (to me) he mentioned was that his employee who’d just left for a contract at Apple was not going to a Direct Competitor, and that he wished him well. He also cited a “Made in Switzerland” marketing asset as being something Apple could then leverage. I sincerely think he’s not naive, as Apple may well impact his market quite significantly if there was a significant product overlap. I sort of suspect that his reaction was that of someone partnering Apple in the near future, not of someone waiting for an inbound tidal wave from a foreign competitor.

Google, at their I/O Developers Conference last week, duly announced Android Wear, among which was support for Smart Watches from Samsung, LG and Motorola. Besides normal time and date use, these include the ability to receive the excellent “Google Now” notifications from the user’s phone handset, plus process email. The core hope is that application developers will start to write their own applications to use this new set of hardware devices.

Two thoughts come to mind.

A couple of weeks back, my wife needed a new battery in one of her Swatch watches. With that, we visited the Swatch Shop outside the Arndale Centre in Manchester. While her battery was being replaced, I looked at all the displays, and indeed at least three range catalogues. Beautiful fashionable devices that convey status and personal expression. Jane duly decided to buy another Swatch that matched an evening outfit likely to be worn to an upcoming family Wedding Anniversary. A watch battery replacement turned into an £85 new sale!

Thought #1 is that the Samsung and LG watches are, not to put too fine a point on it, far from fashion items (I nearly said “ugly”). Samsung’s is available in around 5 variations, which map to the same base unit shape and different colour wrist bands; LG likewise. The Moto 360 is better looking (bulky and circular). That said, it’s typically Fashion/Status industry suicide with an offer like this. Bill Geiser related that “one size fits all” is a dangerous strategy; suppliers typically build a common “watch movement” platform, but wrap this in an assortment of enclosures to appeal to a broad audience.

My brain sort of locks on to a possibility, given the complete absence of conventional watch manufacturers involved with Google’s work: to wonder if Apple are OEM’ing (or licensing) a “watch guts” platform for watch manufacturers to use in their own enclosures.

Thought #2 relates to sensors. There are often cited assumptions that Apple’s iWatch will provide a series of sensors to feed user activity and vital signs into their iPhone based Health application. On that assumption, I’ve been noting the sort of sensors required to feed the measures maintained “out of the box” by their iPhone Health app, and agonising over whether these would fit on a single wrist based device.

The main one that has been bugging me – and which would solve a need for millions of users – is that of measuring glucose levels in the bloodstream of people with Diabetes. This is usually collected today with invasive blood sampling; I suspect there is little demand for a watch that vampire bites the user’s wrist. I found today that there are devices that can measure blood glucose levels by shining infrared light at a skin surface, using near-infrared absorption spectroscopy. One such article is here.

The main gotcha is that the primary areas where such readings are best taken are on the ear drum or on the inside of an arm’s elbow joint. Neither is the ideal position for a watch, but both are well within the reach of earbuds or a separate sensor. Both could communicate with the Health App directly wired to an iPhone or over a low energy bluetooth connection.

Blood pressure may also need such an external sensor. There are, of course, plenty of sensors that may find their way into a watch style form factor, and indeed there are Apple patents that discuss some typical ones they can sense from a wrist-attached device. That said, you’re working against limited real estate for the device’s electronics, display and indeed the size of battery needed to power its operation.

In summary, I wonder aloud if Apple are providing an OEM watch movement for use by conventional Watch suppliers, and whether the Health sensor characteristics are better served by a raft of third party, low energy bluetooth devices rather than an iWatch itself.

About the only sure thing is that when Apple do finally announce their iWatch, that my wife will expect me to be early in the queue to buy hers. And that I won’t disappoint her. Until then, iWatch rumours updated here.

European Courts have been great; just one fumble to correct

Delete Spoof Logo

We have an outstanding parliament that works in the Public Interest. Where mobile roaming charges are being eroded into oblivion, where there is tacit support in law for the principles of Net Neutrality, and where the Minister is fully supportive of a forward looking (for consumers) Digital future. That is the European Parliament, and the excellent work of Neelie Kroes and her staff.

The one blight on the EC’s otherwise excellent work has been the decision to enact – then outsource – a “Right to be Forgotten” process to a commercial third party. The car started skidding off the road of sensibility very early in the process, albeit underpinned by one valid core assumption.

Fundamentally, there are protections in place, where a personal financial misfortune or a criminal offence in a person’s formative years has occurred, to have a public disclosure time limit enshrined in law. This is to prevent undue prejudice after an agreed time, and to allow the afflicted to carry on their affairs without penalty or undue suffering after lessons have been both internalised and not repeated.

There are public data maintenance and reporting limits in some cases for data on a criminal reference database, or on financial conduct databases, that are mandated to be erased from the public record a specific number of years after first being placed there. This was the case with the Spanish gentleman who believed his privacy was being violated by the publication of a bankruptcy asset sale well past this statutory public financial reporting boundary, in a newspaper that attributed that sale to him personally.

In my humble opinion, the resolution of the court should have been to (quietly) order the Newspaper to remove (or obfuscate) his name from that article at source. Job done; this then formally disassociated his name from the event, and all downstream (searchable) references to it likewise, so achieving the alignment of his privacy with the usual public record financial reporting acts in law.

Leaving the source in place, and merely telling search engine providers to enact processes to allow individuals to request removal of unwanted facts from the search indexes only, opens the door to a litany of undesirable consequences – and indeed leaves the original article on a newspaper web site untouched and in direct violation of the subject’s right to privacy over 7 years after his bankruptcy; this association should now have no place on the public record.

Besides the timescales coded into law for how long certain classes of personal data can remain on the public record, there are also ample remedies at law in place for enforcing removal of (and seeking compensation for) the publication of libellous or slanderous material. Or indeed the refusal to take down such material in a timely manner with, or without, a corresponding written apology where this is judged appropriate. No new laws are needed; it is then clear that factual content has its status reinforced in history.

In the event, we’re now subject to a morass of take-down requests that have no legal basis for support. Of the initial volume (tens of thousands of removal requests):

  • 31 percent of requests from the UK and Ireland related to frauds or scams
  • 20 percent to arrests or convictions for violent or serious crimes
  • 12 percent to child pornography arrests
  • 5 percent to the government and police
  • 2 percent related to celebrities

That is demonstrably not serving the public interest.

I do sincerely hope the European Justices that enacted the current process will reflect on the monster they have created, and instead change the focus to enact privacy of individuals in line with the financial and criminal record keeping edicts of publicly accessible data coded in law already. In that way, justice will be served, and we will no longer be subjected to a process outsourced to a third party who should never be put in a position of judge and jury.

That is what the courts are for, where the laws are very specific, and in which the public has full confidence.

The Moving Target that is Enterprise IT infrastructures

Docker Logo

There has been a flurry of recent Open Source Enterprise announcements, one relating to Docker – allowing Linux containers, containing all their needed components, to be built, distributed and then run atop Linux based servers. With this came the inference that Virtualisation was likely to get relegated to legacy application loads. Docker appears to have support right across the board – at least for Linux workloads – covering all the major public cloud vendors. I’m still unsure where that leaves the other niche that is Windows apps.

The next announcement was that of Apache Mesos, which is the software originally built by ex-Google Twitter engineers – largely to replicate the Google Borg software used to fire up multi-server workloads across Google’s internal infrastructure. This was used to good effect to manage Twitter’s internal infrastructure and to consign their “Fail Whale” to much rarer appearances. At the same time, Google open sourced a version of their software – I’ve not yet made out if it’s derived from the 10+ year old Borg or the more recent Omega project – to do likewise, albeit at smaller scale than Google achieve in-house. The one thing that bugs me is that I can never remember its name (I’m off trying to find reference to it again – and now I return 15 minutes later!).

“Google announced Kubernetes, a lean yet powerful open-source container manager that deploys containers into a fleet of machines, provides health management and replication capabilities, and makes it easy for containers to connect to one another and the outside world. (For the curious, Kubernetes (koo-ber-nay’-tace) is Greek for “helmsman” of a ship)”.

That took some finding. Koo-ber-nay-tace. Not exactly memorable.

However, it looks like it’ll be a while before these packaging, deployment and associated management technologies get ingrained in Enterprise IT workloads. A lot of legacy systems out there are simply not architected to run on scale-out infrastructures yet, and it’s a source of wonder what the major Enterprise software vendors are running in their own labs. If indeed they have an appetite to disrupt themselves before others attempt to.

I still cringe with how one ERP system I used to use had the cost collection mechanisms running as a background batch process, and the margins of the running business went all over the place like a skidding car as orders were loaded. Particularly at end of quarter customer spend spikes, where the complexity of relational table joins had a replicated mirror copy of the transaction system consistently running 20-25 minutes behind the live system. I should probably cringe even more given there’s no obvious attempt by startups to fundamentally redesign an ERP system from the ground up using modern techniques. At least yet.

Startups appear to be much more heavily focussed on much lighter mobile based applications – of which there are a million different bets chasing VC money. Moving Enterprise IT workloads into much more cost effective (but loosely coupled) public cloud based infrastructure – in a way that takes full advantage of its economics – is likely to take a little longer. I sometimes agonise over what change(s) would precipitate that transition – and whether that’s a monolith app, or a network of simple ones daisy chained together.

I think we need a 2014 networked version of Silicon Office or Hypercard to trigger some progress. Certainly their abject simplicity is no more, and we’re consigned to the lower level, piecemeal building bricks – like JavaScript – which is what life was like in assembler before high level languages liberated us. Some way to go.