Weaving Monks

Yesterday was my last day working at Weaveworks. Six months ago the opportunity came up to work at the very heart of the container space, with two experienced founders, Alexis and Matthias, who had already brought the world RabbitMQ, and are now building an immensely strong team to solve another very difficult problem.

It has been a privilege to work with what has to be one of the most talented teams of programmers I have seen in a long time. Like most teams currently developing the infrastructure to solve hard distributed systems problems, their current focus is Go, but, as one always sees, the best programmers can turn their hands to anything.

Weaveworks have a relentless focus on making developers' lives easier when it comes to microservices, containers, networking and distributed systems. I gave my last ever talk as a Weaveworks employee on Wednesday night in London, and as always people in the audience loved how simple Weave makes container networking (the slides are here). Making an area as complex as networking seem trivial is very, very hard.

However, I have always been interested in the bigger picture, both with, and around, technology. Which is why applying for a role at Redmonk when a vacancy arose was an obvious choice for me, and joining next week as an industry analyst is both a great honour and a fantastic opportunity.

Weaveworks is just one of many companies I will be looking at, and joining the dots from hipster container stacks to enterprise developers, and ultimately to customers, is one of many things I am looking forward to doing at Redmonk.

My last action at Weaveworks was to remove my picture and bio from their website codebase, confirm it all looked okay and commit the change to master. And with a git push origin master:prod, I sat back and watched CircleCI do its thing, looked back at the last six months with fond memories and then turned my thoughts towards the coming years with Redmonk. Perhaps your photo could replace mine at Weave?

If you want to work on some very interesting problems in the container space with a superb team, experienced founders, a strong board, talented advisors and serious investors, you should investigate the code base and see what type of people Weaveworks are looking for.

As for me? The next chapter starts on August 5th. Come find me at Redmonk from then.

Oracle Hardware Sales v Services Update, Q4FY15

The release of Oracle's Q4FY15 results yesterday felt like an appropriate moment to provide an update on my commentary on Oracle's hardware business from last year.

Overall the trend is still downward for hardware sales, albeit back up from the inflection point we noted in Q1FY15. On an annual basis the drop was just over 5%.
Oracle Hardware v Support Sales Q4FY15

The upward march of gross support margins continues, to a very impressive 68% this quarter. Where the legacy Sun install base has refreshed, both the inventory requirements and the cost of maintenance and support for Oracle are decreasing, as one would expect. Engineered systems, such as Exadata, should be a lot cheaper to maintain once on a customer site.

The gross margins for hardware on the other hand are continuing their steady decline, which must be a source of concern.

As a percentage of overall revenue, hardware sales and support continue to decline, albeit at a much reduced pace as well.

The question as to how much of Oracle's hardware sales are actually internal, and in support of areas such as Oracle's cloud offerings, is still something I would love to get more data on. Given the decline in gross margins, one does wonder whether cross-sales are happening internally.

The other big question is the breakdown of SPARC vs x86 based machines, in particular across the engineered systems portfolio. The closest gauge one can get is the number of published reference cases across the various product lines.

All in all no major surprises.


The R scripts and raw data for the analysis are available on GitHub.


I worked for Sun Microsystems, and then Oracle, for 12.5 years before leaving in 2012. I own Oracle shares.

October Business of IoT Meetup References

On Monday October 6th we had the inaugural Business of IoT meetup at the Shoreditch Works Village Hall in London. I mentioned a number of books, articles and papers in my talk and, following a few requests, here's a list of them.

Any questions do let me know. As for the slides, they really don’t make any sense without context so I won’t be posting them.

Our next Business of IoT meetup is in Cambridge on November 12th followed by December 17th in London.

Apache Spark, Mesos and the End of Big Data Vendorfests

TL;DR: The agility of Mesos and Spark will hit established vendors' big data infrastructure professional services revenue in the medium term

Spark and Tweets

Apache Spark has been getting a lot of attention in recent days after publishing a set of benchmark numbers that on the surface look very impressive. This prompted Ben Horowitz to tweet

Now I have been thinking about the changes that projects such as Spark and Mesos are about to make to the industry, and I responded

Which then prompted a question

Now this is a question that you can’t answer in 140 characters. Or even 140 words.


First off we need to divide our vendors into three broad groups:

  1. Software providers (DataStax, Cloudera, Hortonworks etc)
  2. Hardware & Systems vendors (e.g. Dell, HP, Oracle etc)
  3. Systems Integrators (Accenture, Capgemini etc)

When I refer to a vendorfest, the vendors I am referring to are those in groups 2 and 3 here: the companies which provide hardware and some form of integration service, or third parties that pull various technologies together for you.

Economics and Structure of Professional Services

The value of data is in the interpretation of the data, not in the underlying infrastructure used to provide it. Every business leader is aware of just how valuable existing and new forms of data are when they can be queried in an intelligent manner.

How we ultimately turn data into something useful is an important aspect of this, but the vendors in groups two and three above have a vested interest in ensuring that they provide, integrate and manage the software and hardware where the data is stored and analyzed.


Software can be very complex – there is no getting away from this fact. When we add the requirement of a large infrastructural component it gets much more complex. The practicalities of rolling out a 1000 node system in any area are far from trivial – which is why we see products such as Exadata, VCE and so forth.

On Premises vs Off Premises

People are still hesitant about the cloud, and depending on who you talk to we are either destined to have on premises compute for evermore or everything will shift to the cloud. Personally I sit somewhere in between, particularly in relation to storage, but I do think the vast bulk of workloads will shift towards a cloud-only model, and very quickly.

Hardware vendors are resolutely certain, in public at least, that everyone needs on premises hardware of some sort – the phrase hybrid cloud permeates the marketing efforts of many entrenched firms. This is, of course, an understandable position for the traditional vendors to take.

The Professional Services Arms of a Company

Now traditionally the large vendors have created extensive professional services arms. For long established companies these professional services are strong profit centers in their own right, and allow them to hire smart people around the globe. But, by being profit centers, they also become a key part of the business which needs to be maintained.

When you see a traditional vendor creating a professional services offering around a new technology it is safe to assume that the primary goal is to maintain, and hopefully grow, this professional services revenue stream. The companies were unlikely to have engaged with the community that creates the technology until a media focus appeared. Possibly an analyst focus, but generally these moves lag well behind analyst comment.

The rush into OpenStack is a great example of an institutional rush to become credible with a technology. Whether the OpenStack pie actually exists is a whole other question. The two most interesting independent OpenStack companies, Cloudscaling and Metacloud, have been acquired by EMC and Cisco respectively, so we are back to the major vendors again.

The common threads in both OpenStack and Hadoop are complexity and scale. Professional service arms thrive on complexity – that project you need to bring outside help in for.

Mesos and Spark

So where do Mesos and Spark sit in this discussion?

Well, to my mind, there are two parts to this – abstraction and scale.

Currently if you want a sizeable Hadoop implementation you will have to invest a significant amount of time in the basic configuration, resource management and so forth. Rather interestingly, Cloudera just announced software to address exactly this problem. However, this still does not remove the resource utilization issues that we see with dedicated infrastructures.

Mesos has a stated goal of being a kernel for the data center, providing resource management and utilization in a manner you would expect of an operating system. This is a hugely powerful approach. Mesos accomplishes this by providing an abstraction layer to develop your applications against. This abstraction allows you to move from working against an infrastructure to working with a resource.

Now Apache Spark can be easily run on Mesos, and, if the numbers we have seen thus far are to be believed, in a much more efficient manner. Combine this efficiency with the resource management of Mesos and you have a powerful solution that can sit on top of any infrastructure Mesos is managing and be dialed up and down with demand.
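As a sketch of how low the barrier is, submitting a Spark job to a Mesos cluster is a single command; the master hostname, class name and jar path below are hypothetical placeholders, not real deployment details:

```shell
# Tell Spark where the Mesos native library lives, then submit a job
# against the Mesos master rather than a standalone Spark cluster.
# mesos-master.example.com, org.example.WordCount and the jar path
# are illustrative placeholders.
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so

./bin/spark-submit \
  --class org.example.WordCount \
  --master mesos://mesos-master.example.com:5050 \
  /path/to/wordcount.jar hdfs://namenode/data/input
```

Scaling the job up or down then becomes a scheduling decision for Mesos, rather than a re-provisioning exercise.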

This is the first true manifestation of analytics with extremely easy to use, on-demand computing that can move across infrastructures at almost zero cost. That removes the vendor lock-in we see with a lot of the current offerings. Yes, you can move to new infrastructure today, but this has a tangible cost.

When I say Mesos and Spark are a threat to the “vendorfest” it is this agility in terms of dialing up, down and moving across infrastructures that I am referring to.

Side Note – Benchmarks

The data, sadly, has an issue in terms of the benchmark itself. As Donnie Berkholz has pointed out, when a vendor publishes a benchmark we all need to be able to see what is behind it.

I worked in performance benchmarking for a number of years early on in my career, and to say vendors lay train tracks through systems to gain extra speed for the purposes of a result would be an understatement.

End of the Polyglot Technologist?


One of the truly wonderful aspects of the “Internet of Things” is the explosion of “maker” related activities. The emergence of devices such as the Arduino, Raspberry Pi and so forth, along with their surrounding ecosystems, has allowed hobbyists around the world to play with devices in ways that even five years ago were not possible.

Now this explosion in maker related activities is both well documented and well understood. What, thus far, has been less commented on are the concepts of trust and fear which are manifesting themselves with technology practitioners.

When you take a device like an Arduino it is possible for one person to understand all of its component parts: what the hardware, software and so forth do, all of the possible error states, etc. This type of understanding is at the core of what technologists do for a living – they understand a set of technologies, and they understand how they are put together. This understanding, however, is limited to an essentially closed system. It may be a broad system of many parts, but it is still contained.

Until very recently technology teams within medium to large organizations have controlled their infrastructure, their software stack and by extension their business users. It may be outsourced in some ways, but the complexity is more or less understood by a small team of people. Ultimately it is a closed system.

Business Model Changes

Stepping back and looking at the markets that are being created by the Internet of Things, and indeed by other digital trends, we see two fundamental shifts occurring:

  1. Service Inversion
  2. Technological Partnerships

Both of these areas are long posts in and of themselves, but let's quickly touch on them.

Service Inversion and the Subscription Economy

The industrial Internet of Things is driving an inversion of the classic services model many businesses have created over the last fifty years. Nothing in this is new; the concept of proactive services is well embedded in many industries – but it generally comes at a pretty hefty premium.

If you take, for example, a refrigeration system used in a supermarket, the old model was for the system to break, a bunch of food to go off, and the supermarket to phone the manufacturer to have a service engineer dispatched; eventually the unit is repaired, the supermarket restocks it and the cycle starts again.

Now for the company providing the service to the refrigerator this results in some extra revenue – the classic break-fix revenue many service companies are addicted to – but for everyone else the experience is less than satisfactory.

With a combination of cheap sensors and connectivity it is now practical to proactively monitor low cost units, and remove this break-fix cycle. To the supermarket owner there is much more value in the unit never having an issue – something they are happy to pay for.

This changes the model of how you deliver a service. You no longer wait for the break-fix call; rather, your customer subscribes to your service and you ensure the call never happens. This changes the relationship: as a business you are now a service provider. Your client is subscribing to your service.

Technological Partnerships

It no longer makes sense to “go it alone” and create all of the systems you need as a business. From telemetry gathering and aggregation, to CRM and support systems, to business intelligence systems, there are a host of providers who can supply the technology you need, letting you focus on the business problem you are solving, not the technological problems underneath.

As we deal with greater and greater scale these technological partnerships will only grow. Infrastructure, and this also means your supporting business systems, is a commodity. It is also an open system, one which you must trust.

Trust and Fear

At a recent presentation in London, Mark Shuttleworth of Canonical touched on the concept of trust within computing – trust, and the need to trust partners and ecosystems to do what we have hitherto sought to understand in depth. It was in the context of broader shifts in the industry, but he highlighted the need for trust and the innate fear of not understanding displayed by technologists.

What truly brought this fear within technologists home to me was listening to some of the questions at the same event Mark was speaking at. One question, paraphrased here, was “e-mail is essentially a well defined service, understood for the last twenty years, but for some marketing reason we all outsource it”. And the statement is indeed true. E-mail is also a reasonably complex service to maintain at any sort of scale, requiring an expensive-to-acquire set of skills. But on a business level there is zero value in running an e-mail server.

The question could be rephrased as “we understand how to do this, it's something we control, why let it go?”. That is the fear aspect. Once you move your e-mail to a provider like Microsoft or Google, you have let go of the control. You have also placed your trust in a third party that you have, at best, a service level agreement with.

This is just a single piece of infrastructure – this fear becomes much starker when you start layering multiple technologies, and you can’t control the underlying components.


Now how does all of this result in the end of the polyglot?

Most large organizations, particularly heavily technology-focused ones, will have a number of truly gifted technologists working in them. These are the people you see at the conferences, who are active in communities, attend the meetups on their own time and who devour technical implementation details. They may have a title, they may not, but we all know them. They can delve into the guts of a kernel, architect a distributed system, design an API and understand the business needs.

Right now they are facing an onslaught of technologies, and indeed are already starting to retrench in areas such as programming languages as Stephen O’Grady notes. These technologists are fast approaching a point of being unable to truly understand each component piece of technology they may be using. Their polyglot nature is threatened by the sheer volume of change which they are facing.

This is not a new trend. It is just an extension of specialization that we see in many industries. It is however something new for the top tier of technologists to understand. This is where trust comes in – learning to trust the decisions of others for the components you are building upon and the partners you are using.

We have reached a tipping point where it is no longer possible, and indeed it no longer makes sense, to try to understand everything around the technologies a business uses. This is a scary place for people who want to understand everything they use, and design upon, to find themselves in.

It will, however, result in the end of the polyglot. And it will lead to an ever-expanding maker community, where you will find the technologists who want to understand everything they are using.

Mesos, Docker – The Next Disruptive Wave?

TL;DR: Mesos & Docker have the potential to disrupt PaaS, IaaS and configuration management in a significant way

I attended a talk by Benjamin Hindman of the Apache Mesos project last night in London. Having watched the development for a while, it was good to hear more about the direction Mesos is going in, and in particular the work to integrate with Docker.

A very simplistic explanation of Mesos is as a kernel for the datacenter, but it provides an abstraction layer which applications can be written to. In terms of your datacenter or cloud provider, think of this as a layer between your applications and the hardware.

Everyone observing the industry can tell you that IaaS is a commodity game. Vendors are scrambling to add value by providing a PaaS layer or a differentiation of some form (such as Cisco's country specific clouds with partners).

However, the issue with the PaaS approach is that, generally, it is optimized towards a specific type of application. Which brings us to a chicken and egg problem – does my tooling or the business needs of my application define the approach I take? Currently, even with the most advanced PaaS solutions, the tooling still dictates a large part of your approach.

This is one of the major areas that Docker addresses. Rather than having to spend time thinking about what you can and can't do, you essentially get on with putting together the application you need and package it up. Now there are a number of areas, such as how to address networking in Docker, that are in need of abstraction and improvement. Tools such as Weave are starting to address these gaps.
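As an illustration of the kind of gap Weave fills, a handful of commands can give two containers on different hosts addresses on a shared subnet. The host names and the 10.2.1.0/24 subnet here are illustrative, and the exact CLI may differ between Weave versions:

```shell
# On the first host: start the weave router.
host1$ weave launch

# On the second host: start the router and peer it with the first host.
host2$ weave launch host1

# Start a container on each host with an address on a shared subnet;
# the two containers can now reach each other directly, regardless of
# which underlying infrastructure each host runs on.
host1$ weave run 10.2.1.1/24 -t -i ubuntu
host2$ weave run 10.2.1.2/24 -t -i ubuntu
```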

The holy grail of developer productivity for large scale distributed systems is to have continuous integration and deployment that involves very little effort and is not tied to a specific infrastructure. Now to my mind Docker has two specific impacts:

  • you no longer need to design for any specific IaaS or PaaS, just a higher level abstraction
  • you remove a large chunk of the configuration management infrastructure that we have come to rely on over the last few years

Adding Mesos to the mix gives you an abstraction from your infrastructure. It allows your applications to be portable. Combine Mesos and Docker and you are suddenly freed from a specific infrastructure and from a lot of the configuration management required to keep all the tooling in place, while gaining better resource utilization.
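To make that combination concrete, a scheduler such as Marathon (a framework that runs on top of Mesos) will accept a short JSON description of a Docker container and keep the requested number of instances running; dialing capacity up or down is just a change to the instances field. The hostname and image name below are hypothetical placeholders:

```shell
# Ask Marathon to run three instances of a Docker image on the Mesos
# cluster. marathon.example.com and example/web-frontend are placeholders.
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
        "id": "web-frontend",
        "cpus": 0.5,
        "mem": 256,
        "instances": 3,
        "container": {
          "type": "DOCKER",
          "docker": { "image": "example/web-frontend:latest" }
        }
      }'
```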

Wither configuration management? Not quite. But for applications at scale it is about to get a lot simpler.

Oracle's Hardware Sales vs Services Inflection Point

The Oracle headlines last week were always going to be about Larry Ellison. This is very understandable. Whether you like him, loathe him, or, such as in my case, you are somewhere in between, you have to admire his business acumen in growing Oracle to the behemoth that it is today.

Looking at the earnings released on the same day, one set of figures in particular grabbed my attention. Beyond the now customary drop in hardware sales, we also saw that revenue from new hardware sales has dropped below that of hardware support services for the first time, at $578M vs $587M (49.6% vs 50.4% of total hardware revenue). This is, in my opinion, a very significant inflection point in an already negative trend.

Oracle, until this quarter, have been very bullish about hardware sales, engineered systems and so forth. This quarter there was an acknowledgement that SPARC sales continue to slow, tape declined, and the obligatory “engineered systems continue to grow” comment. Ellison moving from CEO aside, perhaps this lack of bullish comment is because the numbers do not lie.


From accounting for 67.3% of total hardware revenue in Q1FY10, new hardware sales, as noted above, now account for only 49.6%. The overall trend is downward, and it is important to strip the support revenue out when looking at the real picture. No matter what way you slice and dice the numbers for the sale of new hardware, Oracle's hardware sales have dropped on every measure you can compare (annual, quarterly, comparisons with the previous fiscal quarter and so on), with the exception of the final quarter of FY14.

Oracle have also been actively expanding their cloud offering, so one has to ask how much of their stated hardware revenue is due to internal purchases. The quarterly reports sadly do not shed any definitive light in this area, but I would find it hard to believe that spend on new hardware did not account for a significant amount of the $221 million investment in cloud made in Q4FY14, and would speculate that it was certainly more than the $21 million increase in hardware sales seen in Q4FY14 vs Q4FY13. The hardware business now accounts for just 13.6% of Oracle's total revenues, versus a peak of 22.6% in Q1FY11.


Of course the news is not all negative in their hardware business. The gross margins which Oracle have achieved on their hardware support business are very impressive, improving each year from a starting point of 48.8% to 67.3% in the latest quarter. This is testament to an immense amount of work tidying up the supply chain, inventory management and support processes inherited from Sun, coupled with newer, simpler to support, systems.

Gross margins on hardware, barring a couple of very good quarters in 2011, have hovered between 48% and 51%; respectable, if not stellar. This lack of movement in gross hardware margins also raises questions as to the sales of products based on Oracle's own SPARC chips, versus sales of their Intel-based product lines such as Exadata. Customer references are always an interesting way to view traction with an established product, and to identify if its market is consolidation of existing footholds, or new business. Oracle are fond of publishing references; it will be interesting to see what new ones emerge this coming week.


Notes on this analysis

The financial data used for this analysis was sourced directly from Oracle's quarterly statements – http://investor.oracle.com/financial-reporting/quarterly-reports/default.aspx

Oracle acquired Pillar Data Systems in Q1FY12 and Acme Packet in Q3FY13. I do not believe either of these acquisitions had a material enough impact on hardware sales to warrant investigation.


I worked for Sun Microsystems, and then Oracle, for 12.5 years before leaving in 2012. I own Oracle shares.

Packaging, Solaris and losing a market…

TL;DR: For software, ignoring parts of your customer experience, like the initial install, ultimately costs you market share.

The phrase “eating your own dog food” is often used in software development. Often used, but sadly often ignored, or selectively applied by development teams, until it becomes too late.

In my mind this always starts with installation – essentially your packaging. It's the first experience a user has of your product. If the install experience is poor you have already started on a very difficult path in convincing your customer that your technology is the right one for them to use. The folks over at Redmonk have been highlighting packaging for years.

Solaris and BFU

Now where I observed the importance of the installation experience first hand was at Sun Microsystems. At Sun everyone involved in the OS space made heavy use of a tool called BFU (Blindingly Fast Update or Bonwick Faulkner Update). It was a time saver, it facilitated quick development and you avoided time consuming reinstalls and so forth. See the problem yet?

A Quick History Lesson and Why Observing Others Matters

For most people these days the idea of installing an operating system from scratch seems almost quaint. I watched Alex from CoreOS demonstrate launching 1400 CoreOS servers in the first few minutes of his presentation at Container Camp last Friday. But back when I started playing with operating systems in 1994 installing from scratch was a necessity. Like a lot of people back then my first experience was installing Slackware from floppy disks. You were used to downloading the source for other tools, building them from scratch and so on. Everyone had their own tool chains and ways of installing systems.

But the world changed, in particular Debian arrived, and more importantly was followed by Ubuntu in 2004.

Within Sun we had (and I'm sure some people will disagree with me) an incredibly strong “not invented here” culture combined with a cultural, and almost quasi-religious, obsession around architectural purity. Talk to anyone about making things slightly more usable and you got told to use the free software packages bundled, slightly awkwardly, into a different location, or to build the newer version from scratch – and yes, both of these examples are relatively easy to cope with. When they are part of your day to day workflow you don't notice; when you are a newcomer to a technology they are at best frustrating, at worst they rapidly move you to the conclusion that the people producing this technology have no interest in your productivity.

The teams at various Linux distros, and in particular at Ubuntu, understood that the days of ./configure; make; make install and adding multiple entries to your $PATH variable were well and truly over – particularly for the rapidly expanding developer community using Linux. The packaging of components together into coherent bundles became a mantra. But at Sun we stuck to a belief that SVR4 packages were fine, people could build from scratch if they really wanted the latest version, and everyone should have a highly customised profile (and your root shell should be sh – probably the single most frequent complaint you would hear).

Enter BFU

This is where BFU became part of the problem. We were only partly dogfooding. It was very true to say that people had something very close to the latest versions of the development trains running on their systems. Many people bfu'ed their system at least bi-weekly, some would update nightly. But the problems and frustrations that our customers were encountering with installation were rarely seen by our very best engineers. And the problems and confusion that a new customer would encounter were invisible.


Now all of this comes back to that initial perception of packaging. Because it was not part of the development teams' daily dog food it was viewed as not overly important. It took until the release of Solaris 11, in 2011, for a new packaging system to be in place in a full commercial offering. That is almost 8 years after Ubuntu came to market, and 12.5 years after Debian first featured apt. In an industry that fundamentally reinvents itself every 18 months that is far, far too long.

Culture vs Strategy

This is not to say people weren't aware of the problem – they were. There were initiatives to improve the install experience, to make the platform more developer friendly and so on. The work on fixing packaging started in 2007. Eventually, in 2010, bfu was replaced by a new tool called ONU which makes use of the packaging system, if not all of the install experience a customer went through. But culturally, as I alluded to already, people did not see a real need for rapid change.

A quote often attributed to Peter Drucker is “culture eats strategy for breakfast”. The cultural inertia, and the length of time it took Sun to address the packaging deficiencies in a flagship project, ate all of the Solaris based growth strategies Sun had for breakfast. The market was being lost, but could still be defended. By the time Sun got around to addressing the packaging problem properly in 2007 the battle was already lost, although a small chance of winning the war and regaining a market remained. By 2011 that chance had well and truly passed.


I worked for Sun, and then Oracle, for 12.5 years. I use a Mac, VirtualBox, VMware and whatever is the best tool for the job at hand on my desktop. These days if it involves operating systems it tends to be some variant of Linux. Sometimes it's Solaris, sometimes it's Windows. But mainly Linux. I still enjoy using Solaris and following what arrives in the new releases. However, pragmatism in my recommendations, including assessing the availability of people with the skills to manage the technologies suggested, is a lot more important to my clients than a biased suggestion.

So when is it Series A? Everpix, funding and @pmarca

Marc Andreessen (@pmarca) of A16Z was posting in two interrelated threads on Twitter over the weekend which I found particularly fascinating. To give some context here, Everpix, a photo sharing startup which closed last year, posted a huge amount of data about the company on GitHub a few months ago. This included their funding details, conversations with VCs and a wealth of other information (if you are in any way interested in the business of startups, all of the information Everpix posted is worth a read).

On Everpix in particular, Andreessen commented that the founders of Everpix had gotten quite a lot of feedback from VCs, more so than normal. What interested me more was the context with one of his legendary (or is that infamous?) Twitter essays. In his latest one he discussed the difference in perceptions and expectations of VCs versus founders as to what funding rounds such as Seed, A etc, and the amounts involved, actually mean. Now @pmarca does post rather lengthy Twitter essays which garner a huge amount of responses, so to make them a little easier to read I have reproduced them below.

And in response, Fred Wilson gave what is a lovely clear classification of funding rounds

Having been through a well funded early stage startup (although not as a founder) relatively recently that hit a Series A funding wall, point 12 in particular resonates with me. So when is it Series A funding, when is it B, and so on? It is still somewhat subjective, but I would argue that in the case of Everpix they may have called it a seed round; to investors it definitely looked like a Series A, and their expectations of where the business should have been were set at that level.

IBM BlueMix and a simple WordPress Deployment

Shoreditch Works announced a sponsorship deal with IBM a little while back, which included a series of IBM events at the Village Hall. Chatting with @monkchips following the first of the IBM events the suggestion came up – why not migrate the Shoreditch Works website to BlueMix?

Now I have used a number of different cloud platforms, in both the IaaS and PaaS space, over the years, but I have not spent any significant time on IBM's offerings before. While a WordPress instance is not exactly going to stretch things, it does provide a nice quick intro to BlueMix.


The original Shoreditch Works website is a WordPress instance and was installed with a standard shared hosting provider. Nothing particularly special here, but it has been through a number of iterations and, as tends to happen, various stale/unused plugins and other bits and pieces built up over the years.

The latest iteration has a custom theme and a couple of simple plugins. As BlueMix uses Cloud Foundry we need a basic awareness of the Cloud Foundry approach – in particular, you do not have persistent file storage. WordPress and PHP unfortunately spend quite a lot of time assuming they have local storage (which in fairness is an okay assumption to make in most traditional environments). From our perspective this means we need to stage our themes and plugins before we deploy to BlueMix.
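As a purely illustrative sketch of what staging means here – the directory names below are hypothetical – the theme and plugins are simply copied into the WordPress tree locally, so they ship as part of the push rather than being uploaded through the admin UI at runtime:

```shell
# Stage a custom theme and a plugin into the WordPress tree before pushing.
# All directory names here are illustrative, not the real Shoreditch Works layout.
mkdir -p staging/sdw-theme staging/plugins/sdw-plugin
mkdir -p wordpress/wp-content/themes wordpress/wp-content/plugins
cp -r staging/sdw-theme wordpress/wp-content/themes/
cp -r staging/plugins/sdw-plugin wordpress/wp-content/plugins/
```

Anything installed through the dashboard after deployment would land on ephemeral storage and disappear on restart, which is why everything goes in up front.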

Local Dev Environment

I tend to use Vagrant for small development environments like this, and while I have my own custom images, for the purposes of this writeup we will use one of the lamp boxes available on the Vagrant Cloud. If you want to play around with Cloud Foundry I highly recommend spending some time with BOSH.

tiresias:bluemix-demo fintanr$ vagrant init ktbartholomew/lamp
tiresias:bluemix-demo fintanr$ vagrant up

Now we install the Cloud Foundry CLI tool.

tiresias:bluemix-demo fintanr$ vagrant ssh
vagrant@ubuntu:~$ curl -o cf-cli_amd64.deb https://s3.amazonaws.com/go-cli/releases/v6.1.2/cf-cli_amd64.deb
vagrant@ubuntu:~$ sudo dpkg -i cf-cli_amd64.deb


Next up is getting the basics of our WordPress install in place. There is a nice article on IBM developerWorks which covers both this and the later deployment. A very quick summary of the WordPress-related steps is below; note that we are using a different buildpack, which I go into in a little more detail further on in this article.

vagrant@ubuntu:~/public$ curl -o wp-latest.tgz http://wordpress.org/latest.tar.gz
vagrant@ubuntu:~/public$ gunzip -c wp-latest.tgz | tar xf -
vagrant@ubuntu:~/public$ cd wordpress/
vagrant@ubuntu:~/public/wordpress$ curl https://api.wordpress.org/secret-key/1.1/salt/ > wp-salt.php
vagrant@ubuntu:~/public/wordpress$ curl -o wp-config.php https://raw.githubusercontent.com/ibmjstart/bluemix-php-frameworks/master/wordpress/wp-config.php

We also added a .htaccess file with our mod_rewrite rules, and the various plugins that we need. There is also an issue we encountered with the current (3.9.1) version of WordPress, explained below.
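For reference, the stock WordPress permalink rules are the usual starting point for such a .htaccess file (your own rules may of course differ):

```apache
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```

These rules route any request that does not match an existing file or directory through index.php, which is what makes pretty permalinks work.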

Our Custom Buildpack

Our WordPress instance uses permalinks, which under the hood rely on Apache's mod_rewrite. When we started out with this configuration and fired up WordPress for the first time it was pretty obvious this wasn't working. This kind of problem generally manifests itself as one of two errors, AH00128 or AH00670 (yes, I have been here before in different scenarios). A quick check of our logs showed up the problem.

vagrant@ubuntu:~/public/wordpress$ cf logs --recent sdw-101 | grep AH00
2014-06-06T06:48:47.34-0500 [App/0]   OUT 11:48:47 httpd        | [Fri Jun 06 11:48:47.312997 2014]
 [core:info] [pid 75:tid 140036321601280] [client] AH00128: File does not exist:
 /home/vcap/app/htdocs/coworking/, referer: http://sdw-101.ng.bluemix.net/

We started with a PHP buildpack from Pivotal, but needed to update it slightly to enable the rewrite rules. The fix itself is very straightforward, and the resulting buildpack is published on GitHub; if you're really curious, the diff is here.

BlueMix Deployment

So we have all of our bits and pieces together; on to the BlueMix deployment – a very simple and straightforward process. We assume you have already signed up for BlueMix.

# login to bluemix
cf login -a https://api.ng.bluemix.net

# We need a mysql db for wordpress
cf create-service mysql 100 mysql-wordpress-service

# use our custom buildpack
cf push sdw-101 -b https://github.com/fintanr/cf-php-build-pack.git --no-manifest --no-start

# bind our mysql service
cf bind-service sdw-101 mysql-wordpress-service

# and start
cf start sdw-101

And that's it. You need to do some initial configuration of your WordPress site when you first log in, but otherwise you are done.

And the obligatory WordPress issue

It wouldn’t be WordPress without something odd happening, and in our case it was an import error with WordPress 3.9.1. On importing an old site into 3.9.1 you may see an error similar to:

Strict Standards: Redefining already defined constructor for class WXR_Parser_Regex in
 /home/vcap/app/htdocs/wp-content/plugins/wordpress-importer/parsers.php on line 408

Strict Standards: Declaration of WP_Import::bump_request_timeout() should be compatible with
 WP_Importer::bump_request_timeout($val) in
 /home/vcap/app/htdocs/wp-content/plugins/wordpress-importer/wordpress-importer.php on line 38

This is a known issue with WordPress, recently reopened as Trac ticket 24373. There is a diff posted to the bug which fixes the problem. If you are following our example you will need to apply this patch before you push your app.
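If you haven't applied a raw diff before, the patch(1) workflow is sketched below with throwaway files; the file names and contents here are illustrative only, and the real diff is the one attached to the Trac ticket:

```shell
# Self-contained sketch of applying a unified diff with patch(1).
# The file names and contents are illustrative, not the real WordPress fix.
printf 'old line\n' > parsers.php
printf -- '--- parsers.php\n+++ parsers.php\n@@ -1 +1 @@\n-old line\n+new line\n' > fix.diff
patch parsers.php fix.diff
```

Run the equivalent against the importer plugin files inside your local wordpress tree, then push the app as before.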