So when is it Series A? Everpix, funding and @pmarca

Marc Andreessen (@pmarca) of A16Z posted two interrelated threads on Twitter over the weekend which I found particularly fascinating. To give some context: Everpix, a photo-sharing startup which closed last year, posted a huge amount of data about the company on GitHub a few months ago. This included their funding details, conversations with VCs and a wealth of other information (if you are in any way interested in the business of startups, all of the information Everpix posted is worth a read).

On Everpix in particular, Andreessen commented that the founders had gotten quite a lot of feedback from VCs, more so than normal. What interested me more was the context provided by one of his legendary (or is that infamous?) Twitter essays. In his latest one he discussed the differing perceptions and expectations of VCs versus founders as to what funding rounds such as Seed, Series A and so on, and the amounts involved, actually mean. Now @pmarca's Twitter essays are rather lengthy and garner a huge number of responses, so to make them a little easier to read I have reproduced them below.

And in response, Fred Wilson gave a lovely, clear classification of funding rounds:

Having recently been through a well-funded early-stage startup (although not as a founder) that hit a Series A funding wall, point 12 in particular resonates with me. So when is it Series A funding, when is it B, and so on? It is still somewhat subjective, but I would argue that in the case of Everpix, while they may have called it a seed round, to investors it definitely looked like a Series A, and their expectations of where the business should have been were set at that level.

IBM BlueMix and a simple WordPress Deployment

Shoreditch Works announced a sponsorship deal with IBM a little while back, which included a series of IBM events at the Village Hall. Chatting with @monkchips following the first of the IBM events the suggestion came up – why not migrate the Shoreditch Works website to BlueMix?

Now I have used a number of different cloud platforms, in both the IaaS and PaaS space, over the years, but I have not spent any significant time on IBM's offerings before. While a WordPress instance is not exactly going to stretch things, it does provide a nice quick intro to BlueMix.


The original Shoreditch Works website is a WordPress instance and was hosted with a standard shared hosting provider. Nothing particularly special here, but it has been through a number of iterations and, as tends to happen, various stale/unused plugins and other bits and pieces had built up over the years.

The latest iteration has a custom theme and a couple of simple plugins. As BlueMix uses Cloud Foundry we need a basic awareness of the Cloud Foundry approach – in particular, you do not have persistent file storage. WordPress and PHP unfortunately spend quite a lot of time assuming they have local storage (which, in fairness, is an okay assumption to make for most traditional environments). From our perspective this means we need to stage our themes and plugins before we deploy to BlueMix.
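Concretely, "staging" just means assembling the theme and plugin directories into the app bits before the push. A minimal sketch (the directory and plugin names here are illustrative, not the actual Shoreditch Works layout):

```shell
# Without a persistent filesystem, anything uploaded through wp-admin
# after deployment is lost on restart/restage, so themes and plugins
# must be part of the pushed app bits.
mkdir -p staging/sdw-theme staging/plugins/simple-plugin
echo "<?php /* theme stub */" > staging/sdw-theme/index.php
echo "<?php /* plugin stub */" > staging/plugins/simple-plugin/simple-plugin.php

# Copy the staged content into the WordPress tree that gets pushed
mkdir -p wordpress/wp-content/themes wordpress/wp-content/plugins
cp -r staging/sdw-theme wordpress/wp-content/themes/
cp -r staging/plugins/simple-plugin wordpress/wp-content/plugins/
```

Everything under wordpress/ is then what `cf push` uploads; nothing is added to the running container afterwards.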

Local Dev Environment

I tend to use Vagrant for small development environments like this, and while I have my own custom images, for the purposes of this writeup we will use one of the lamp boxes available on the Vagrant Cloud. If you want to play around with Cloud Foundry I highly recommend spending some time with BOSH.

tiresias:bluemix-demo fintanr$ vagrant init ktbartholomew/lamp
tiresias:bluemix-demo fintanr$ vagrant up

Now we install the Cloud Foundry CLI tool.

tiresias:bluemix-demo fintanr$ vagrant ssh
vagrant@ubuntu:~$ curl -o cf-cli_amd64.deb
vagrant@ubuntu:~$ sudo dpkg -i cf-cli_amd64.deb


Next up is to get the basics of our WordPress install in place. There is a nice article on IBM developerWorks which covers both this and the later deployment. A very quick summary of the WordPress-related steps is below; note that we are using a different buildpack, which I cover in a little more detail further on in this article.

vagrant@ubuntu:~/public$ curl -o wp-latest.tgz
vagrant@ubuntu:~/public$ gunzip -c wp-latest.tgz | tar xf -
vagrant@ubuntu:~/public$ cd wordpress/
vagrant@ubuntu:~/public/wordpress$ curl > wp-salt.php
vagrant@ubuntu:~/public/wordpress$ curl -o wp-config.php

We also added a .htaccess file with our mod_rewrite rules, along with the various plugins that we need. There is also an issue we encountered with the current (3.9.1) version of WordPress, explained below.
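For reference, the standard WordPress permalink rules are the usual starting point for such a .htaccess (the actual Shoreditch Works file also carried site-specific rewrites):

```apacheconf
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
# Send requests for anything that is not a real file or directory
# through index.php so WordPress can resolve the permalink
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```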

Our Custom Buildpack

Our WordPress instance uses permalinks, which under the hood rely on Apache's mod_rewrite. When we started out doing this configuration and fired up WordPress for the first time it was pretty obvious this wasn't working. This kind of problem generally manifests itself as one of two errors, AH00128 or AH00670 (yes, I've been here before in different scenarios). A quick check in our logs showed up the problem:

vagrant@ubuntu:~/public/wordpress$ cf logs --recent sdw-101 | grep AH00
2014-06-06T06:48:47.34-0500 [App/0]   OUT 11:48:47 httpd        | [Fri Jun 06 11:48:47.312997 2014]
 [core:info] [pid 75:tid 140036321601280] [client] AH00128: File does not exist:
 /home/vcap/app/htdocs/coworking/, referer:

We started with a PHP buildpack from Pivotal, but needed to update it slightly to enable the rewrite rules. The fix itself is very straightforward, and the resulting buildpack is published on GitHub; if you're really bored, the diff is here.
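The change needed is along these lines (a sketch of the sort of httpd configuration involved; the actual change is in the linked diff):

```apacheconf
# Make sure mod_rewrite is loaded by the buildpack's bundled httpd...
LoadModule rewrite_module modules/mod_rewrite.so

# ...and that .htaccess files in the document root are honoured.
# AllowOverride defaults to None in many stock configurations, which
# silently disables the permalink rules and produces the AH00128
# "File does not exist" errors seen above.
<Directory "/home/vcap/app/htdocs">
    AllowOverride All
</Directory>
```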

Bluemix Deployment

So we have all of our bits and pieces together; on to the BlueMix deployment, a very simple and straightforward process. We assume you have already signed up for BlueMix.

# login to bluemix
cf login -a

# We need a mysql db for wordpress
cf create-service mysql 100 mysql-wordpress-service

# use our custom buildpack
cf push sdw-101 -b --no-manifest --no-start

# bind our mysql service
cf bind-service sdw-101 mysql-wordpress-service

# and start
cf start sdw-101

And that's it. You need to do some initial configuration of your WordPress site when you first log in, but that's all there is to it.
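As an aside, the push flags above can also be captured in a manifest.yml for repeat deployments (we deliberately used --no-manifest; this sketch uses illustrative names and a placeholder buildpack URL):

```yaml
# Rough manifest.yml equivalent of the cf push/bind-service steps above
applications:
- name: sdw-101
  buildpack: https://github.com/example/custom-php-buildpack   # placeholder
  services:
  - mysql-wordpress-service
```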

And the Obligatory WordPress Issue

It wouldn't be WordPress without something odd happening, and in our case it was an import error with WordPress 3.9.1. On importing an old site into 3.9.1 you may see errors similar to:

Strict Standards: Redefining already defined constructor for class WXR_Parser_Regex in
 /home/vcap/app/htdocs/wp-content/plugins/wordpress-importer/parsers.php on line 408

Strict Standards: Declaration of WP_Import::bump_request_timeout() should be compatible with
 WP_Importer::bump_request_timeout($val) in
 /home/vcap/app/htdocs/wp-content/plugins/wordpress-importer/wordpress-importer.php on line 38

This is a known issue with WordPress, recently reopened as Trac ticket 24373. There is a diff posted to the ticket which fixes the problem. If you are following our example you will need to apply this patch before you push your app.

EMC ViPR and Software Defined Storage

RedMonk's Stephen O'Grady has an interesting post, Software and EMC, in which he discusses some of the problems that EMC, and other storage incumbents, are having with declining hardware sales, and their change in focus to software and very specific high-end hardware/software combinations (DSSD is one hell of an acquisition; everyone watching the storage space has been wondering what Bonwick et al have been up to, and Bechtolsheim has been obsessed with flash for quite some time).

Software and the Emergence of Commodity Storage

I would argue, however, that software has always been at the core of what EMC deliver. What has made companies like EMC so profitable is their attention to areas such as availability and customer service. The age-old story is that of the EMC engineer turning up to replace a failing component before you even knew it was a problem. Only software can facilitate that.

The storage market overall is growing: you just have to look at the uptick that companies like Supermicro, Quanta and others are seeing, the entry and growing market share of relatively new players such as Nutanix, Tintri, Tegile and others, and the incredible growth of Amazon S3. What's interesting here is that the growth is a combination of commodity components and solutions for specific pain points.

This hardware growth is not where the real money is, though, and I think EMC see this, at least at an executive level. The real money is in managing the data, and the exponential growth of the storage footprint, at scale. Which leads us back to Stephen's question:

If it’s true that EMC is betting heavily on software to restore its hardware growth, the next logical question is whether this is the appropriate response

I don't think EMC are trying to restore hardware growth at all; I think they are trying to completely shift their business model over the longer (5 to 8 year) term.

At the customer-facing software and developer level, we have removed many of the requirements for highly available hardware solutions – the high-margin business in which EMC is the dominant player – and moved to rapidly scaling solutions. This moves the real value in storage to the managing and provisioning of storage, rather than supplying the underlying storage infrastructure (if you happen to provide it at a high margin, well and good, but it is no longer your primary goal).


One point I do have to take up is around registration. I am all for ease of access, but industry context is important. Stephen notes:

..the company keeps assets like ViPR locked up behind registration walls. In a market in which technology decisions are being made based more on what’s available than what’s good, that’s an issue

This is very much a cultural thing: traditional vendors such as EMC, IBM, Oracle and the like will always look for registration. Nor are EMC alone in looking for registration in the software-defined storage marketplace. If we look at true "software defined storage" vendors (those not requiring a hardware product) such as Nexenta, OSNexus and StarWind, only Nexenta provides a download that doesn't require registration.

Bluntly put, storage is hard: spinning rust breaks, and the traditional way of controlling that is to have a defined hardware stack. When you are providing a software-defined storage solution you run into issues similar to those Linux encountered in years gone by, such as people installing on older hardware configurations with poor device support or broken components. You then have to deal with the fallout from this on social media, forums and so on. Registration allows you to understand a little about who the customer is, and whether they are a potential sale worth investing time in or someone (like me, for instance) who has just downloaded the software to take a look. It's far from ideal, but it's a commercial reality.

I would also argue that storage decisions are made on price, reputation and performance requirements rather than on the availability of a product to play with. It's a different marketplace, and DevOps teams are now driving some of the purchasing decisions. The capex required for storage is a massive proportion of most IT budgets, and anyone dealing with operations and "spinning rust" will be happy to have a vendor involved in a proper proof of concept. Registration is not a barrier in that context.


Do I think EMC will succeed with ViPR? I am really not sure. As a preferred vendor to traditional customers, yes, definitely. In new markets? Well, their developer and DevOps story is still very hard to follow. Once you actually find their developer site you will see just how little activity there has been there so far, and the limited content is, to my mind anyway, still focused on very traditional storage admins.

Cisco's Cloud and the Legislative Landscape

Cisco announced their latest cloud strategy several weeks ago. The general media coverage was, in my opinion, quite disparaging – dismissing the announcement as being little more than “me too”. To my mind this is a very shortsighted view to take, and misses some of the political and legal risks which are emerging.

While Cisco have some very strong commercial headwinds to deal with, aspects of their cloud strategy appear, on the surface at least, to be very well thought out.

To understand my reasoning here, let's first step back and look at the geographic locations of the main global public cloud providers: Amazon, Google and Microsoft. All of these entities are headquartered in the US, and are, as such, subject to the uncertainty and legal risk created by the ongoing PRISM revelations.

Indeed, in a recent report on 'Next Generation IT Infrastructure' from McKinsey, infrastructure technology executives noted that one of their chief security concerns is "that public cloud providers are legally barred from informing a customer if that customer's data is subpoenaed by government".

Robin Harris wrote a nice piece for ZDNet on the ongoing ramifications of the PRISM programme for IT strategy, in which he highlighted a report titled 'NSA Aftershocks' from NTT. Now, of course, NTT are a telecoms provider, and have a vested interest in highlighting the risks of global public cloud providers. However, it is the very fact that they are a telecoms provider that is important.

This is where Cisco's approach differs. In the initial announcement of the Cisco Global Intercloud they also announced that Telstra, an Australian telecoms provider, is their first partner. While it is not clear who the legal entity ultimately responsible for the Telstra-branded cloud will be, I would hazard an educated guess that it will be Telstra, and that most, if not all, data will be kept within Australia.

The model being put forward by Cisco, of an interoperable federation of clouds, allows for segmentation on a geographical or legislative basis while still allowing for a common resource management methodology. More importantly, by partnering with local firms they may be avoiding any hands-on ownership of data.

This aspect of the model is, to my mind, a key strategic differentiator for Cisco. The fallout from the PRISM revelations is only just beginning. One need only look at the recent, albeit scaled-back, legislation in Brazil, comments from across the EU (in particular Germany) regarding the NSA, and the general awareness individual citizens are slowly developing about their online privacy, to see that where data is stored, who can request access to it and how it is legally protected are going to become ever more important concerns for politicians – and that ultimately means for businesses.

With such a shifting political landscape, positioning your company as the cloud provider to "trusted" telecom incumbents, with their local legislative knowledge, may be a very smart idea.

‘IT under Pressure’ and Talent Development

The latest McKinsey Global Survey results, 'IT under pressure', made for some very interesting reading. A number of themes are evident, but the area that really stood out to me was the continuing issue around the development and acquisition of talent. In reading the McKinsey survey I was reminded of a paper I read several years ago by the academic Joe Peppard, in which he noted:

"… many of the CIOs interviewed experienced problems in hiring senior staff with the right skills, experience, and attitude for their own leadership team. The CIO of a global insurance company noted, 'It's difficult to find people with the right mix of skills – the deep IT understanding combined with broad commercial acumen'." (Peppard, 2010)

Having moved from a technology background into mainly business-focused roles, I am more than familiar with this refrain, and find myself largely agreeing with it. However, one aspect of the talent issue which I feel is missing from the general discourse is our view of technology in organizations and, more importantly, our approach to the provision of technology services.

Risk Aversion and Process Adherence

If we look at the systems and processes in use in most large organizations today to manage their IT estate, we see a major focus on very structured methodologies such as ITIL, ITSM, COBIT and so on.

The primary, and overriding, theme of all of these systems is risk aversion; the second is an unwavering faith in a defined process. Now, both risk aversion and process are good things in their own right, and indeed for certain types of IT function they are not just necessary, they are essential – a subject I come back to later in this post.

However, this focus on micro-level processes, and on being risk averse to the point of stagnation, is an extremely tactical and low-level approach. It is not possible for people to develop an understanding of the strategic needs of a business if their main goal, and their reward system, is based around metrics such as those defined in service level agreements around call handling speed, ticket closures and so on. I refer to this as "keep the lights on" IT.

Therein lies the rub: on one level, CIOs and business leaders are looking for people with "deep IT understanding combined with broad commercial acumen"; on the other, the very systems they are asking large sections of their teams to follow do not allow for the development of either strategic thinking or commercial acumen. The ability to take a strategic view, and a calculated risk when change is required, is stripped away from many key staff by the very processes that we put in place to support the business needs of the organization.


As an industry we then compound this tactical, risk-averse approach to IT with outsourcing agreements that are built around very rigid service level agreements, again based on the same methodologies.

We thus create a two-tier system of risk aversion. Those managing the relationship within a business are concerned with ensuring that the SLA is upheld and additional costs are contained. Those working for the outsourcing provider are incentivized to minimize change in order to preserve and enhance profit margins.

I am reminded of a WSJ Café talk by Mike Bracken which I attended in late 2012. Among the topics he touched on was the procurement of technology services in the public sector, and he gave a strikingly simple example of what is wrong with large parts of the current outsourcing model.

A government body had signed a multi-year outsourcing agreement in 2008 (if I recall correctly) for the delivery of various services, including the maintenance of a consumer-facing website. The service level agreement included a clause to maintain the existing website, and the functionality required at the time was in place.

In 2010 consumers started to use iPads, mobile phones continued to become more ubiquitous and we all became more accustomed to being able to access services on multiple devices.

As for this particular website? Well updating it would require a change to the contracts, which in turn would incur costs. The end result for consumers is that they see a public service that is slow to change to meet their new usage requirements.

The underlying issue here was the failure to realize that the system being outsourced would need to evolve, and to build that into the contracts.

While I am not privy to the contract details, my sense here is that the tactical need, outsourcing the maintenance and support of the system, was met. However, no one appears to have been tasked with considering the future direction the system might need to take, or with understanding what risks, such as effectively locking down the system, the outsourcing agreement was creating. In a risk-averse culture or process such risks are not considered, as the underlying assumption is that of little to no change.

Offensive and Defensive IT Strategy

Now, this is not to say that methodologies such as ITIL, mentioned above, should be abandoned, far from it – in many cases they are absolutely what is needed. What needs to be reviewed is the rigid adherence to such methodologies in the context of the industry and initiative we are discussing. This industry context should also consider any requirements for staff development.

The table below (extracted from my dissertation) summarizes and relates work from two academic papers, Nolan and McFarlan (2005) and Peppard, Edwards and Lambert (2011).

Nolan and McFarlan (if you have the time I highly recommend reading the HBR article) give a very simple and effective breakdown of IT strategies in an organization, dividing them into defensive and offensive, and subdividing those into four modes. Peppard, Edwards and Lambert provide a very useful explanation of the different types of CIO in organizations.



Defensive IT (Nolan & McFarlan mode, with the matching Peppard, Edwards & Lambert CIO type)

  • Factory Mode: business needs IT to run; minimal strategic differentiation with IT possible. CIO type: Utility IT Director
  • Support Mode: IT supports business needs; the business can run without IT systems if necessary. CIO type: Utility IT Director

Offensive IT

  • Strategic Mode: large strategic benefits seen from IT, which the business also needs simply to run. CIO types: Agility IT Director, Evangelist CIO
  • Turnaround Mode: IT provides opportunities for major improvements. CIO types: Evangelist CIO, Innovator CIO, Facilitator CIO

In my own experience I have yet to encounter an IT estate, programme or initiative that cannot be classified in one of the modes defined by Nolan and McFarlan.

Anything that can be classed as defensive is almost guaranteed to be a perfect candidate for rigid processes and frameworks, easy outsourcing and strict service level agreements. These tend to be areas where strategic thinking and high levels of commercial acumen from IT staff are not required.

As we move into the offensive areas there are obvious opportunities for both strategic thinking and commercial focus. Having key staff work in these areas, rather than in "keep the lights on" IT, would undoubtedly allow them to develop both the technical skills and the commercial acumen highlighted in the recent McKinsey report and the earlier work from Peppard.

A Potential Answer?

The division of IT initiatives into defensive and offensive, and the subsequent allocation of staff, is not a solution to talent issues. However, by identifying the broad areas their initiatives fall into, CIOs have the opportunity to develop the business-focused, strategic and analytical abilities of their key staff.

While far from a panacea, allocating key staff to offensive initiatives would be a simple step for CIOs to take in addressing the skills gap. A small step, but a step nonetheless.


Moore's Law, McKinsey and 450mm

McKinsey posted a very interesting article, Moore's Law: Repeal or Renewal, a little while back. In the article a number of potential scenarios for the semiconductor industry are set out. Essentially these boil down to the successful emergence of 450mm wafer fabs and EUV technologies. Given that both Intel and TSMC have been investing heavily in both of these areas, it would seem a relatively safe bet that we will see 450mm in the coming years.

The more interesting question here, in my opinion, is whether this move will result in a decommoditization of the chip industry. The sheer scale required to build a facility capable of handling 450mm wafers limits the number of potential players, and one wonders what the longer-term economic impact will be.

Innovation and Buzzword Bingo

Like many people in the technology industry I keep an eye on emerging trends, both business and technology related. In part this means keeping an eye on what the myriad analysts are commenting on. And if any segment of the technology industry can really play buzzword bingo, it is the analysts.

Yesterday a headline popped up (via @vambenepe) in an article by Ovum[1], Telcos unveil pragmatic innovation at MWC, in which it was stated:

“….an undercurrent of pragmatic innovation is occurring away from the limelight. Telcos are unveiling pricing innovations, and outlining their refreshingly candid expectations of product and service innovations.”

Now, innovation is a sorely overused word. The Wall Street Journal gave a great description of the hijacking of the word by the mundane and necessary back in 2012:

“Got innovation? Just about every company says it does.

Businesses throw around the term to show they’re on the cutting edge of everything from technology and medicine to snacks and cosmetics. Companies are touting chief innovation officers, innovation teams, innovation strategies and even innovation days.

But that doesn’t mean the companies are actually doing any innovating. Instead they are using the word to convey monumental change when the progress they’re describing is quite ordinary.”

Pragmatic innovation? The scenarios of most concern to the mobile telecoms industry essentially relate to over-the-top services disrupting traditional ones (e.g. WhatsApp versus text messaging), and thus amount to a threat to the margins of telecoms companies. Surely these "pragmatic innovations" are best described as a relatively iterative commercial response, by way of evolving aspects of a business model, to a set of competitive threats?

Pragmatic – absolutely. Innovation? I think not.


  1. I have a lot of time for Ovum as an analyst firm, having had the opportunity both to use their analysis on a number of occasions and to be involved in giving analyst briefings in the past. They are one of the more impartial firms out there.

Leadership in Difficult Times

One of the key attributes of a leader is the ability to deliver bad news, be it in the day-to-day management of helping a team member through a performance issue or in much more difficult areas like managing a crisis in the overall business.

My professional career began just prior to the dotcom boom. A few years after I started, I was chatting with an older, and much wiser, colleague of mine about the merits of a particular person's management. I could see he had his doubts about this manager's leadership ability, and he finished up our conversation with a wonderfully percipient comment:

"It is very easy to be a good manager in good times; let's see if he is a good leader in bad times."

Of course, it turned out that the person in question was thoroughly unable to lead when it came to tough times. Sadly, such a lack of leadership ultimately manifests itself as mediocrity. Dan Rockwell neatly sums this up in a nice article about personal transparency when he states that "mediocrity arrives when difficult conversations are avoided."

Sport is often a great place to find examples of how to approach leadership, and in particular how to deliver bad news. My favourite sport to watch is rugby, and having moved from Ireland a number of years ago I keep up with what is happening via newspapers and podcasts. When I was first thinking about this post a number of weeks back, I listened to an interview with the new Irish coach, Joe Schmidt.

It's a long interview, but if you have time listen in from about 14:40 for five minutes. The key points are in how he addresses selection for the national team, and more importantly how he tells people. He always makes the call to the players not selected – it is never delegated – and he is transparent about what the player is doing, what he likes, what he doesn't like and what needs to improve. It is with this strong example that Schmidt drives his teams further, and prevents mediocrity from ever taking hold.

Now, if you look at what Schmidt is saying here, it essentially boils down to a number of key points, which were well summarized by Erika Andersen earlier this year:

  • Speak up
  • Be accurate
  • Take responsibility
  • Listen
  • Say what you will do next
  • Do what you say and repeat all the above

Bringing this into a more managerial context, smartblog has a nice post with suggestions from members of the Young Entrepreneur Council. Of these, my personal favourites are:

  • Deliver the news directly
  • Act fast
  • Plan the information you want to communicate
  • Be clear and direct

Most importantly, though, be prepared to be frank. The two worst things you can do in a difficult leadership situation are to put off the conversation and to spin it.

On Digital Literacy

A theme we will visit frequently here in the coming months is that of digital literacy.

So what do we mean by digital literacy? It is not the ability to use technology; rather, it is the understanding of what is possible with technology. It is having realistic expectations of what can be achieved given the constraints you, or your business, are inevitably under.

Digital Literacy vs Business-IT Alignment

There is a large body of work from industry, consultants and academia on business-IT alignment, and it is a subject we will come back to frequently. However, over the last four to five years there has been a marked shift in the more interesting (imho) papers coming out in this area, from discussing alignment at a business unit level to focusing on the level of understanding of technology, an organization's digital literacy, at senior executive and board level. One of the most interesting papers in this area is Joe Peppard's "Unlocking the Performance of the Chief Information Officer (CIO)" [1], in which he discusses the need for C-suite and board-level understanding of technology.

So what is the difference? Business-IT alignment tends to operate at the level of discrete business unit problems. Over the last number of years a lot of firms have taken large steps towards an overarching viewpoint under the guise of enterprise architecture (just do a search for TOGAF to see the number of enterprise architect roles currently advertised), and I would argue that well-thought-through enterprise architectures are a huge step forward. However, one still finds a lot of enterprise architects focused at a micro (e.g. departmental) level, never truly stepping back and looking at the big picture.

Digital literacy, on the other hand, is much more about understanding the information that a business has, what it needs, and how it can be used. The best description I have seen for this is "information orientation" [2].

The first step towards digital literacy is understanding what this information is, what is involved in gathering it, and finally what is involved in making it useful to the business.

If board members can understand the implications of these three questions in a technology context, they can better understand the implications of proposed solutions and evaluate whether the level of investment is justified.


  1. Peppard, J. (2010) 'Unlocking the Performance of the Chief Information Officer (CIO)', California Management Review, vol. 52, no. 4, pp. 73-94.
  2. Kettinger, W., Zhang, C. and Marchand, D. (2011) 'CIO and Business Executive Leadership Approaches to Establishing Company-Wide Information Orientation', MIS Quarterly Executive, vol. 10, no. 4, Dec, pp. 157-174.