Ian's Brain goes all Economics on him

A couple of unconnected events caught my eye in the last week. One was an article by Scott Adams of Dilbert fame, with some observations about how Silicon Valley is really one big Psychological Experiment (see his blog post: http://dilbert.com/blog/entry/the_pivot/).

It’s a further extension of a comment I once read by Max Schireson, CEO of MongoDB, reflecting on how salespeople’s compensation works – very much like paying in lottery tickets: http://maxschireson.com/2013/02/02/sales-compensation-and-lottery-tickets/.

The main connection being that salespeople tend to get paid in lottery tickets in Max’s case, whereas Scott thinks the same is an industry-wide phenomenon – for hundreds of startup companies in one part of California just south of San Francisco. Both hence dispute a central ethos of the American Dream – that he who works hard gets the (financial) spoils.

Today, there was a piece on BBC Radio 2 about books that people never get to finish reading. This was based on some analysis of progress of many people reading Kindle books; this being useful because researchers can see where people stop reading as they progress through each book. By far the worst case example turned out to be “Capital in the Twenty-First Century” by Thomas Piketty, where people tended to stop around Page 26 of a 700-page book.

The executive summary of this book was in fact quite pithy; it predicts that the (asset) rich will continue to get richer, at the expense of the rest of the population, whose survival depends on receiving an income flow. Full review here. And that it didn’t happen last century due to two world wars and the 1930s depression, something we’ve not experienced this century. So far. The book just went into great detail, chapter by chapter, to demonstrate the connections leading to the author’s thesis, and people abandoned the book early en masse.

However, it sounds plausible to me; assets tend to hold their relative “value”, whereas money typically loses value over time (through inflation of monetary values, and through devaluation by printing money no longer anchored to a specific value of gold assets). Even the UK Government factors that devaluation in when calculating its future debt repayment commitments. Just hoping this doesn’t send us too far down the road to repeating what happened to Rome a couple of thousand years ago or so (as cited in one of my previous blog posts here).

Stand back – intellectual deep thought follows:

The place where my brain shorted out was the thought that, if that trend continued, at some point our tax regime would need to switch from being based on monetary income flows to being based on assets owned instead. The implications of this would be very far-reaching.

That’ll be a tough sell – at least until everyone thinks we’ve returned to a feudal system and the crowds with pitchforks appear on the scene.

European Courts have been great; just one fumble to correct

Delete Spoof Logo

We have an outstanding parliament that works in the public interest: one where mobile roaming charges are being eroded into oblivion, where there is tacit support in law for the principles of Net Neutrality, and where the Commissioner is fully supportive of a forward-looking (for consumers) Digital future. That is the European Parliament, and the excellent work of Neelie Kroes and her staff.

The one blight on the EC’s otherwise excellent work has been the decision to enact – then outsource – a “Right to be Forgotten” process to a commercial third party. The car started skidding off the road of sensibility very early in the process, albeit underpinned by one valid core assumption.

Fundamentally, there are protections in place, where a personal financial misfortune or a criminal offence in a person’s formative years has occurred, to have a public disclosure time limit enshrined in law. This is to prevent undue prejudice after an agreed time, and to allow the afflicted to carry on their affairs without penalty or undue suffering once lessons have been internalised and not repeated.

There are public data maintenance and reporting limits in some cases – data on a criminal reference database, or on financial conduct databases – that are mandated to be erased from the public record a specific number of years after first being placed there. This was the case with the Spanish gentleman who believed his privacy was being violated by the publication of a bankruptcy asset sale, well past this statutory public financial reporting boundary, in a newspaper that attributed the sale to him personally.

In my humble opinion, the resolution of the court should have been to (quietly) order the newspaper to remove (or obfuscate) his name in that article at source. Job done: this would formally disassociate his name from the event, and all downstream (searchable) references to it likewise, so aligning his privacy with the usual public record financial reporting acts in law.

Leaving the source in place, and merely telling search engine providers to enact processes that allow individuals to request removal of unwanted facts from the search indexes only, opens the door to a litany of undesirable consequences – and indeed leaves the original article on the newspaper’s web site untouched, in direct violation of the subject’s right to privacy over 7 years after his bankruptcy; that association should now have no place on the public record.

Besides limits coded into law on the specific timescales for which certain classes of personal data can remain on the public record, there are also ample remedies at law for enforcing removal of (and seeking compensation for) the publication of libellous or slanderous material – or indeed for the refusal to take down such material in a timely manner, with or without a corresponding written apology where this is judged appropriate. No new laws needed; factual content then has its status in history reinforced.

In the event, we’re now subject to a morass of take-down requests that have no legal basis for support. Of the initial volume (tens of thousands of removal requests):

  • 31 percent of requests from the UK and Ireland related to frauds or scams
  • 20 percent to arrests or convictions for violent or serious crimes
  • 12 percent to child pornography arrests
  • 5 percent to the government and police
  • 2 percent related to celebrities

That is demonstrably not serving the public interest.

I do sincerely hope the European Justices that enacted the current process will reflect on the monster they have created, and instead change the focus to enacting privacy of individuals in line with the financial and criminal record-keeping edicts of publicly accessible data already coded in law. In that way, justice will be served, and we will no longer be subjected to a process outsourced to a third party who should never have been put in the position of judge and jury.

That is what the courts are for, where the laws are very specific, and in which the public has full confidence.

Facebook Mood Research: who’s really not thinking this through?

Facebook Logo

Must admit, I’ve been totally bemused by the reaction of many folks and media outlets I usually respect to this “incident”. As you may recall from other news sources, Facebook did some research to see if posts they deemed “happier” (or the opposite) had a corresponding effect on the mood of friends seeing those status posts. From what I can make out, Facebook didn’t inject any changes into any text; they merely prioritised the feed of specific posts based on a sentiment analysis of the words in them. With that came cries of outrage that Facebook should not be meddling with the moods of its users.

The piece folks miss is that, due to the volume of status updates – and the limited propensity of your friends to consume that flow of information – an average of 16% of your status posts get seen by folks in your network (the spread, depending on various other factors, is from 2% to 47%, but the mean is 16% – 1 in 6). This has been progressively stepping down; two years ago, the same average was 25% or so. Facebook’s algorithms make a judgement on how pertinent any status is to each of your friends, and selectively place (or ignore) it in their feed at the time they read their wall.

As an advertiser with Facebook, you can pay to add weight to a post’s exposure, showing it in the wall of people with specific demographics or declared interests (aka “likes”). This can be a specific advert, or an invite to “like” a specific interest area or brand – and hence to be more likely to see that content in your wall alongside other posts from friends.

So, Facebook changed their algorithm, based on text sentiment analysis, to slightly prioritise updates with a seemingly positive (or negative) disposition – and to see if that disposition found its way downstream into friends’ own status updates. And in something like 1 in 1,000 cases, it did have an influence.
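As an illustration of the mechanism being described – score the wording of each post, then nudge its feed ranking accordingly – here’s a minimal sketch in Python. The word lists and scoring are invented purely for the example; Facebook’s actual sentiment analysis is obviously far more sophisticated.

```python
import re

# Invented example lexicons - real sentiment models are far richer.
POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def sentiment(post):
    """Crude score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", post.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def prioritise(posts, mood="positive"):
    """Re-rank a feed so the chosen disposition floats to the top."""
    sign = 1 if mood == "positive" else -1
    return sorted(posts, key=lambda p: sign * sentiment(p), reverse=True)

feed = ["hate this awful weather", "off to the shops", "what a wonderful, happy day"]
print(prioritise(feed)[0])  # the happiest post now leads the feed
```

No text is changed anywhere – only the ordering – which is the crux of the point: every editor, human or algorithmic, does exactly this.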

Bang! Reports everywhere of “How dare Facebook cross the line and start to meddle with the mood swings of their audience”. My initial reaction, and one I still hold, is surprise at the naivety of that point of view, totally out of its depth with:

  1. the physics of how many people see your Facebook updates
  2. the fact that Facebook did not inject anything into the text – they just prioritised based on an automated sentiment analysis of what was written; and, above all:
  3. have people been living under a rock, that they don’t know how editorial decisions get prioritised by *every* media outlet known to man?

There are six newspaper proprietors in the UK that control virtually all the national newsprint output, albeit a business that will continue to erode with an ever-ageing readership demographic. Are people so naive that they think tabloid headlines, articles and limited rights of reply do not follow the carefully orchestrated interests of their owners and associated funding sources? Likewise the television and radio networks.

The full horror is seeing output from a newspaper relaying stories about foreign benefit cheats: hiring a Russian model to act as a Latvian immigrant, injecting alleged comments from her to incite a “how dare you” reaction, adding the text of a government minister’s condemnation, and then heavily moderating the resulting forum posts to keep a sense of “nationalistic” outrage at the manufactured fiction. That I find appalling and beneath any sense of moral decency. That is the land of the tabloid press: never let facts get in the way of a good story. That is a part of society actively fiddling with the mood swings of its customers. By any measure, Facebook don’t even get on the same playing field.

In that context, folks getting their knickers in a twist about this Facebook research are, I fear, losing all sense of perspective. Time to engage brain, and think things through, before imitating Mr Angry. They should know better.

What if Quality Journalism isn’t?

Read all about it

Carrying on with the same theme as yesterday’s post – the fact that content is becoming disaggregated from a web site’s home page – I read an excellent blog post today: What if Quality Journalism isn’t? In it, the author looks at the seemingly divergent claims from the New York Times, who claim:

  • They are “winning” at Journalism
  • Readership is falling, both on web and mobile platforms
  • therefore they need to pursue strategies to grow their audience

The author asks: “If its product is ‘the world’s best journalism’, why does it have a problem growing its audience?” You can’t be the world’s best and fail at the same time. Indeed. He then goes into a deeper analysis.

I like the analogy of the supermarket of intent (Amazon) versus the supermarket of interest (social) versus the niche. The central issue is how to curate articles of interest to a specific subscriber without filling their delivery with superfluous (to the reader) content. This is where newspapers (in the author’s case) typically contain 70% or more wasted content for any specific reader.

One comment under the article suggests one approach: the existence of an open-source aggregation model for the municipal bond market on Twitter via #muniland – journos from 20+ publications, think tanks, governments, law firms and market commentators hash their stories and all share.

Deep linking to useful, pertinent and interesting content is probably a big potential area, if alternative approaches can crack it. Until then, I’m having to rely on RSS feeds of known authors I respect, or on common watering holes, or on the occasional flash of brilliance that crosses my Twitter stream at times I’m watching it.

Just need to update Aaron Swartz’s code to spot water-cooler conversations on Twitter among specific people or sources I respect. That would probably do most of the legwork to enlighten me more productively, without subjecting myself to pages of search engine discovery.

Death of the Web Home Page. What replaces it?

Go Back You Are Going Wrong Way Sign

One of the gold nuggets on the “This week in Google” podcast this week was that some US News sites historically had 20% of their web traffic coming in through their front door home page. 80% of their traffic arrived from links elsewhere that landed on individual articles deep inside their site. More recently, that has dropped to 10%.

If they’re anything like my site, only a small proportion of these “deep links” will come from search engine traffic (for me, search sources account for around 20% of traffic most days). Of those that do, many arrive searching for something more basic than what I have for them here. By far my most popular “accident” is my post about “Google: where did I park my car?”. This is a feature of Google Now on my Nexus 5 handset, but I guess many folks are just tapping that query into Google’s search box absolutely raw (and raw Google will be clueless – you need a handset reporting your GPS location and the fact it sensed your transition from driving to walking for this to work). My second common one is people trying to see if Tesco sell the Google Chromecast, which invariably lands on me giving a demo of Chromecast working with a Tesco Hudl tablet.

My major boosts in traffic come when someone famous spots a suitably tagged Twitter or LinkedIn article that appears topical. My biggest surge ever was when Geoffrey Moore, author of “Crossing the Chasm”, mentioned on LinkedIn my one-page PDF that summarised his whole book. The second largest was when my post congratulating Apple for the security depth in their CloudKit API – a fresh change from the sort of shenanigans several UK public sector data releases have committed – appeared on the O’Reilly Radar blog. Outside of those two, I bump along at between 50-200 reads per day, driven primarily by my (in)ability to tag posts on social networks well enough to get flashes of attention.

10% coming through home pages, though; that haunts me a bit. Is that indicative of a sea change to single, simple task completion by a mobile app? Or is content being littered around in small, single-article chunks, much like the music industry’s transition from album compilations to singles? I guess one example is this week’s purchase of Songza by Google – and indeed Beats by Apple – giving both companies access to curated playlists. Medium is one literary equivalent, as is Longreads. However, I can’t imagine their existence explains the delta between searches and targeted landings directly onto your web site.

So, if a home page is no longer a valid thing to have, what takes its place? Ideas or answers on a postcard (or comment here), please!

Email: is 20% getting through really a success?

Baseball Throw

Over the weekend, I sent an email out to a lot of my contacts on LinkedIn. Because of the number of folks I’m connected to, I elected to subscribe to Mailchimp, the email distribution service recommended by most of the experts I engage in the WordPress community. I might be sad, but it’s been fascinating to watch the stats roll in after sending that email.

In terms of proportion of all my emails successfully delivered, that looks fine:

Emails Delivered to LinkedIn Contacts

However, 2 days after the email was sent, readership of my email message – with the subject line including the recipient’s Christian name, to avoid one of the main traps that spam gets caught in – is:

Emails Seen and UnOpened

Eh, pardon? Only 47.4% of the emails I sent out were read at all? At first blush, that sounds really low to an amateur like me. I would have expected it of folks on annual leave, but still not less than half of all messages sent out. In terms of device types used to read the email:

Desktop vs Mobile Email Receipt

which I guess isn’t surprising, given the big volume of readers that looked at the email in the first hour of when it was sent (which was at around 9:00pm on Saturday night). There was another smaller peak between 7am-8am on Sunday morning, and then fairly level tides with small surges around working day arrival, lunch and departure times. In terms of devices used:

Devices used to read Email

However, Mailchimp insert a health warning, saying that iOS devices handshake the email comms reliably, whereas other clients are a lot more fickle – so the number of Apple devices may tend to be over-reported. That said, it reinforces the point I made in a post a few days ago about the importance of keeping your email subject line down to 35 characters – to ensure it’s fully displayed on an iPhone.

All in, I was still shocked by the apparent number of emails successfully delivered but not opened at all. Thinking it was bad, I checked and found that Mailchimp reckon the average open rate for emails aimed at folks aligned to Computers and Electronics (which is my main industry), voluntarily opted in, is 17.8%, with click-throughs to provided content around the 1.9% mark. My email click-through rate is running at 2.9%. So, my email was roughly 2.7x the industry norm for readership and around 50% above normal click-through rates – though these are predominantly people I’ve enjoyed working with in the past, and who voluntarily connected to me down the years.
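For what it’s worth, the arithmetic behind that comparison is straightforward to check; a quick sketch using the Mailchimp figures quoted above:

```python
# Mailchimp industry averages vs this mailing's figures (percentages).
industry_open, my_open = 17.8, 47.4
industry_click, my_click = 1.9, 2.9

open_ratio = my_open / industry_open          # multiple of the industry open rate
click_uplift = my_click / industry_click - 1  # fractional uplift in click-through

print(f"Open rate: {open_ratio:.1f}x the industry average")
print(f"Click-through: {click_uplift:.0%} above the industry average")
```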

So, sending an email looks to be as bad at getting through as expecting to see tweets from a specific person in your Twitter stream. I know some of my SMS traffic to my wife goes awry occasionally, and I’m told Snapchat is one of the few messaging services that routinely gives you an indication that your message did get through and was viewed.

Getting guaranteed attention for a communication is hence a much longer journey than I expected, probably (like newspaper ads of old) relying on repeat “opportunities to see”. But don’t panic – I’m not sending the email again to that list; it was a one-time exercise.

This is probably a dose of the obvious to most people, but the proportion of emails lost in action – when I always thought it a reliable distribution mechanism – remains a big learning for me.

Am I the only one shaking my head at US Net Neutrality?

Internet Open Sign

I’ve always had the view that:

  1. ISPs receive a monthly payment for the speed of connection I have to the Internet
  2. The economics are such that I expect this to be effectively uncapped for almost all “normal” use, though the few edge cases of excessive use would be subject to a speed reduction, to ration the resources for the good of the ISP’s user base as a whole (and avoid a tragedy of the commons)
  3. That a proportion of my monthly costs would track the investments needed to ensure peering equipment and the ISP’s own infrastructure deliver service to me at the capacity needed for (1) and (2), without any discrimination based on traffic or its content.

Living in Europe, I’ve been listening to lots of commentary in the USA about the proposed merger between Comcast and Time Warner Cable on one hand, and the various ebbs and flows surrounding “Net Neutrality” and the FCC on the other. It’s probably really surprising to learn that broadband speeds in the USA are at best mid-table on the world stage, and that Comcast and Time Warner have some of the worst customer satisfaction scores in their respective service areas. There is also the spectacle of the widespread funding of politicians there by industry, and of a far from independent chairman of the FCC (the regulator), whose next move is likely to be back through the revolving door to the very industry he is currently charged to regulate – and from whence he came.

I’ve read “Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age” by Susan Crawford, which logged what happened as the Bell Telephone monopoly was deregulated, and the result the US consumer was left with. Mindful of this, there was an excellent blog post that amply demonstrates what happens when the FCC lets go of the steering wheel and refuses to classify Internet provision as subject to “common carrier” status. Dancing around this serves no true political purpose, other than to encourage the receipt of economic rent in ample excess of the cost of service provision, in areas of mandated exclusivity of provision.

It appears that 5 of the 6 major “last mile” ISPs in the USA (while unnamed, folks on various forums suspect that Verizon are the only ones not cited) are not investing in equipment at their peering points, leading to an inference that they are double dipping – i.e. asking the sources of traffic (like Netflix, YouTube, etc.) to pay transit costs to reach their “last mile” customers. Equipment costs reckoned to be marginal (fractions of a cent per customer served) to correct. There is one European ISP implicated too, though comments I’ve seen around the USA suggest it is most likely to be in Germany.

The blog post is by Mark Taylor, an executive of Level 3 (who provide a lot of the long distance bandwidth in the USA). Entitled “Observations of an Internet Middleman”, it is well worth a read here.

I just thank god we’re in Europe, where we have politicians like Neelie Kroes who work relentlessly, and effectively, to look after the interests of their constituents above all else. With that comes a commitment to Net Neutrality, dropping roaming charges for mobile telcos, no software patents, and pushing investments consistent with the long-term interests of the population in the EC.

We do have our own challenges in the UK. Some organisations still profit handsomely from scientific research we pay for. We fund efforts by organisations to deliver hammer blows to frustrated consumers, rather than encouraging producers to make their content accessible in a timely and cost-effective fashion. And we have one of the worst cases of misdirected campaigns – with no factual basis, and playing on media-fanned fear – to promote government-mandated censorship (fascinating parallels to US history in “The Men who open your mail” here – it’ll take around 7 minutes to read). Horrific parallels indeed, conveniently avoiding the fact that wholesale games of whack-a-mole have demonstrably never worked.

That all said, our problems will probably tend to disappear, be it with the passing of the current government or longer-term trends in media readership (the Internet-native young rarely read newspapers – largely a preserve of the nett expiring old).

While we have our own problems, I still don’t envy the scale of task ahead of consumers in the USA to unpick their current challenges with Internet access. I sincerely hope the right result makes it in the end.

Simple words often work better than neat adverts

Love at First Website Advert

An example advert from the time I led the Marketing Services Team at Demon Internet. It was a dumb sounding advert, but it pulled response like crazy. Some of the responses we received back in the mail (asking for trial CDs) contained nice poems, so it appeared to strike a healthy connection.

When we first entertained bids for a new agency, we had super-looking, consistent, nicely branded advert samples from one company, and these tongue-in-cheek worded ones from another. Cliff Stanford (owner of Demon Internet) liked the worded ones, while I thought he was nuts – but he agreed to do some tests to see who was correct. He was absolutely right; the worded ads pulled much more effectively. Lesson learnt!

The Valentines Day Advert was done in a rush a week before, and Les Hewitt (media buyer extraordinaire) got it in most target newspapers near the back. Once in, he phoned them hourly to twist their arm relentlessly, getting it shifted page by page towards the front. The advert made it to the dating page on Valentines Day in the Times I believe, where we got fantastic response levels.

We ran quite a few variations of the theme in over 40 different publications:

Thick as two short planks advert

piece at cake advert

We also tried cross-track and 40-sheet poster treatments of the piece@cake advert, but had a bit of a mishap on the approach to Wembley Stadium the evening the Spice Girls were giving a concert. Hence thousands of young fans, being driven in by their parents to see the concert, were greeted with:

Piece @ Cake Advert, dropped E

We had them paste the ‘e’ panel back on the next day.

The average cost to land a £10/month paying customer was £30, around 1/6 that of competitive ISPs at the time (this was 1998-9). We tested everything, and knew what the landed cost of a customer was for every ad we placed. We even knew which ones gave us high response and then heavy churn 3 months later (waves hello to the Sun and Mirror). The most effective medium one of my folks tried gave us acquisition costs of £4 per landed customer, but many oddball complaints. But that’s another story, described near the end of an older post here.
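The discipline described above – knowing the landed cost of a customer for every ad – is simple arithmetic once responses are attributed per placement. A sketch, with entirely hypothetical spend and response figures (the publication names and numbers are invented, not Demon’s actual data):

```python
# Hypothetical placements: (media spend in GBP, customers landed).
placements = {
    "Broadsheet A": (12_000, 400),
    "Tabloid B": (9_000, 200),
    "Poster run": (15_000, 300),
}

def cost_per_customer(spend, customers):
    """Landed cost of one paying customer for a given placement."""
    return spend / customers

# Rank media by landed cost per customer, cheapest first.
for name, (spend, landed) in sorted(placements.items(),
                                    key=lambda kv: cost_per_customer(*kv[1])):
    print(f"{name}: £{cost_per_customer(spend, landed):.0f} per landed customer")
```

The only hard part is the attribution itself – which is exactly why the distinct 0800 numbers described in a later post mattered so much.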

Class work, well executed and full of personality. In my humble opinion, of course.

 

A modern take on people's valiant attempts to get attention

Facebook Newsfeed Algorithm Equation

A really well written story in Techcrunch today, which relates the ever increasing difficulty of getting a message you publish in front of people you know. Well worth a read if you have a spare 5 minutes: http://techcrunch.com/2014/04/03/the-filtered-feed-problem/

The main surprise for me is that if you “like” a particular vendor’s Facebook page, the best historical chance (from February 2012) of seeing an individual post from them was around 1 in 6 – 16%. With the increase in potential traffic going into your personal news feed, it is now (as of March 2014) down to 1 in 15 – 6.51%. So, businesses are facing the same challenges as the advertising industry in general, even on these new platforms.
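To put that drop in perspective, here’s a rough back-of-envelope calculation: treating each post (naively – the independence assumption is mine, purely for illustration) as an independent chance of being seen, how many posts must a page publish before a given fan has a 90% chance of seeing at least one?

```python
import math

def posts_needed(reach, target=0.9):
    """Posts required before P(a fan sees at least one) >= target,
    treating each post as an independent chance of `reach`."""
    return math.ceil(math.log(1 - target) / math.log(1 - reach))

print(posts_needed(0.16))    # at the Feb 2012 reach of 1 in 6
print(posts_needed(0.0651))  # at the Mar 2014 reach of 1 in 15
```

On those assumptions, the required posting volume roughly trebles between the two dates – which is one way of seeing why organic reach has become so expensive to replace.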

Despite the sheer amount of signal data available to them, even folks like Facebook (and I guess the same is true of Google, Twitter, LinkedIn, Pinterest, etc.) have a big challenge to separate what we value seeing from what we skip by. Even why we look at these social media sites can be interpreted in many different ways from the get-go. One of my ex-work colleagues, on a Senior Management program at Harvard, had a professor saying that males were on Facebook for the eye candy, and females to one-up their looks and social life among their social circle (and had a habit of publishing less flattering pictures of other women in the same!).

The challenge for these sites is one of the few true needs for “big data” analyses that isn’t just IT industry hype to sell more kit. Their own future depends on getting a rich vein of signals from the users they act as a content platform for, while feeding into the stream the paid content that advertisers are willing to subvert it with in their favo(u)r – a centuries-old pursuit, and nothing remarkable nor new.

Over the past few weeks, I’ve increased the number of times per week I go out for a walk with my wife. This week, Google Now on my Nexus 5 flashed this up:

Google Now Walking Stats Screenshot

 

So, it knows I’m walking, and how far! I guess this isn’t unusual. I know that the complete stock of photographs people upload also contains location data (deduced from GPS or the SSIDs of wireless routers close by), date/time, and readily admits the make and model of the device each was taken on – and, if you have a professional DSLR camera, often the serial numbers of the camera and lens on board (hence some organisations offering to trace stolen cameras by looking at the EXIF data in uploaded photographs).

Individually identifiable data like that is not inserted by any of the popular mobile phones (to the best of my knowledge), and besides, most social media sites strip the EXIF data out of pictures they display publicly anyway. You’d need a warrant to request a search of that sort of data from the social media company, case by case. That said, Facebook and their ilk do have access to the data, and also a fair guess at your social circle given who gets tagged in your pictures!

Traditional media will instead trot out statistics on OTS (aka “Opportunities to see” an advert) and be able to supply some basic demographics – gleaned from subscriptions and competition entries – to work out the typical demographics of their audience you can pay to address. Getting “likely purchase intent” signals is much, much more difficult.

Love At First Website Demon Ad

When doing advertising for Demon Internet, we used to ask the person calling up for a trial CD some basic questions about where they’d seen the advert that led them to contact us. Knowing the media used, and its placement cost, we could in time measure the cost per customer acquired, and work to keep that as low as possible. We routinely shared that data every week with our external media buyers, who used it as part of their advertising space buying negotiation patter, and could relate back which positions and advert sizes in each publication pulled the best response.

The main gotcha is that if you ask, you may not get an accurate answer from the customer, or you can be undone by your own staff misattributing the call. We noticed this when we were planning a small trial of TV advertising, so had “TV” put on the response system’s menu – as it happens, it appeared as the first option on the list. We were somewhat bemused after a week to find that TV was our best source of new customers – before any of our ads had even been aired. So, a little nudge to our phone staff to please be more accurate, while we changed every ad, for each different media title we used, to a different 0800 number – and could hence take the response readings off the switch, cutting out the question and generally making the initial customer experience a bit more friction-free.

With that, our cost per acquired customer stayed around the £20 mark, and cost per long-term retained customer at around £30 (we found, along the way, that some publications had high response rates, but high churn rates to go with them).

Demon Trial Postmark

The best response rate of all came from getting the Royal Mail franking machines to cancel stamps on half of all stamped letters in the UK for two two-week periods – which came out at £7 per acquired customer; a great result for Michelle Laufer, who followed up after noticing letters arriving at home cancelled with “Have a Break, Have a Kit Kat”. Unfortunately, the Royal Mail stopped allowing ads to be done this way, probably in the knowledge that seeing “Demon Internet” on letters resulted in a few complaints from people and places of a nervous disposition (one mental hospital as a case in point).

The main challenge for people carrying a Marketing job title these days is to be relentless in their testing, so they can measure – with whatever signals they can collect – what works, what doesn’t, and which of two alternative treatments pulls better. Unfortunately, many such departments are littered with people with no wherewithal beyond “please get this mailer out”. The poorest of amateur behaviour, wasting money unnecessarily for their shareholders.

As in most walks of life, those who try slightly harder get a much greater proportion of the resulting spoils for their organisation. And that is why seminal books like “Commonsense Direct and Digital Marketing”, and indeed folks like Google, Facebook et al, are anal about the thoroughness of testing everything they do.

The rise & rise of A1 (internet-fuelled) Journalism

Newspaper Industry RIP

There’s been a bit of to and fro about the future of Newspapers and Journalism in the last week, where the traditional bundling of advertising and editorial content is being disaggregated by Internet dynamics. Readership of newspapers is increasingly a preserve of the old. Like many other folks I know, we increasingly derive a lot of our inbound content from online newsletters, blogs, podcasts and social media feeds – usually in much smaller chunks than we’d find in the mainstream media of old.

Ben Thompson (@monkbent) wrote a great series of pieces on Journalism’s “winner takes all” dynamics, where people tend to hook primarily onto the personalities or journalists they respect.

I think he’s absolutely correct, but the gotcha is that they all publish in different places, among different colleagues, so it’s difficult (or at the very least time-consuming) for a lot of us to pick them out systematically. A few examples of the ones I think are brilliant:

  • John Lanchester – usually in the London Review of Books, talking about the state of the UK economy (“Let’s Call it Failure”), the behaviour of our post-crash Banking Industry (“Let’s consider Kate”), and about the PPI scandal (“Are we having fun yet?”)
  • Douglas Adams – now RIP – on how people always resist new things as they age, or where things work differently to what they’re used to – in “Stop Worrying and Learn to Love the Internet”
  • Tim Harford – mainly in books, but also this corker of an article: “Big Data: are we making a big mistake?”. There is a hidden elephant in the room, given “Big Data” is one of the keystone fads driving equipment sales in the IT Industry right now. Most companies have a timely-data-presentation problem in most scenarios I’ve seen; there’s only so much you can derive from Twitter sentiment analysis (which typically draws its stats from a single-digit percentage of your customer/prospect base), or from working out how to throw log file data at a Hadoop cluster (where Splunk can do a “good enough” job already).
  • The occasional random article on Medium, such as one running counter to the usual calls of the UK press: “How we were fooled into thinking that sexual predators lurk everywhere” – suggesting that creating a moral panic about social media didn’t protect teens – it left them vulnerable. There are many other, very readable, articles on there every week across a whole spectrum of subjects.
  • The Monday Note (www.mondaynote.com), edited by Frederic Filloux and Jean-Louis Gassee (JLG used to be CTO of Apple). The neat thing here is that Jean-Louis Gassee never shirks from putting some numbers up on the wall before framing his opinions – a characteristic common to many senior managers I’ve had the privilege to work for.
  • There’s a variety of other newsletter sources I feed from – but that’s a subject for another day!

The common thread running through all of these is that each can speak authoritatively, backed by statistically valid proof points, rather than making fast trips to the areas of Maslow’s Hierarchy that are unduly influenced by fear alone. I know from reading Dan Ariely’s book Predictably Irrational: The Hidden Forces that Shape Our Decisions that folks will, to a greater or lesser extent, listen to what they want to hear, but I nevertheless value opinions with some statistically valid meat behind them.

There was another piece by Ken McCarthy (@kenmccarthy), casting doubt on the existence of Journalism as a historical trade; it was more a side effect of needing to keep printing presses occupied – here. He cites:

Frank Luther Mott, who won the 1939 Pulitzer Prize for “A History of American Magazines”, described the content of the newspapers from this era thusly:

  1. Scare headlines in huge print, often of minor news
  2. Lavish use of pictures, or imaginary drawings
  3. Use of faked interviews, misleading headlines, pseudoscience, and a parade of false learning from so-called experts
  4. Emphasis on full-color Sunday supplements, usually with comic strips
  5. Dramatic sympathy with the “underdog” against the system

Besides the fact that this sounds an awful lot like TV news today, where in this listing of the characteristics of turn-of-the-last-century newspapers is there any mention of journalism? There isn’t, because there wasn’t any.

I’d probably add a sixth: a platform to push a political agenda at the more gullible souls in the population – most of whom are opinionated, loud and/or old (or all three), but have a tendency not to spend time fact-checking. This is the section of the population that still buys printed newspapers and turns out in large numbers on election day – an ever-ageing phenomenon, and one very susceptible to “don’t let facts get in the way of a good story”, unlike the younger audience that relies instead on a more varied news feed from the Internet at large.

We were treated to a classic example last year. The Sun reported news of the latest “Eastern European Benefits Scrounger”, milking the UK economy for all it’s worth while those who’ve worked hard for years suffer. The responsible government minister, Iain Duncan Smith, weighed in with a paragraph expressing how appalled he was by the injustice. This was followed by over 800 replies, the tone of which (post-moderation) was heavily “Nationalistic”:

The Sun - Headline "You're a soft touch"

So, who is this single Mum from the Baltics? She was, in fact, a Russian model hired for the role:

Natalia - Russian Model for Hire

Meanwhile, all comments on the associated forums pointing out the paper’s hypocrisy, or filling in the missing facts, were conveniently deleted. Got to stir things up to sell papers, and to provide a commentary that victimises a large swathe of the population while greater wrongs elsewhere are swept under the carpet.

At some point, the main UK newspaper titles, owned as they are by just six organisations, will ebb away into obscurity as their readership progressively dies off.

I sincerely hope we can find some way of monetising good-quality journalists who are skilled in fact-finding, in conveying meaningful statistics, and in telling it like it is without side – and then give them the reach and exposure to fill the void. A little difficult, but eminently possible in a world where you don’t have to fill a fixed number of pages, or minutes of TV news, with superfluous “filler”.

A consolidated result, tuned to your interest areas (personal, local, national and beyond) would probably be the greatest gift to the UK population at large. I wonder if Facebook will be the first to get there.