The Death of an Icon – Literally


Written by me and commented on by my co-bloggers

As I type this we are about to go to a special WebEx meeting for vExperts only, but by the time you read this, VMware will have announced it on their blog and the gag can come off!

So after enduring a huge amount of pain due to the typical conference call pings/rings/background noise, we eventually got down to business.

The meeting basically told us we are going to get a new GUI to manage VMware estates, however there can be only one (cue Highlander movie meme). What does this mean? Well it means:

VMware are no longer offering the C# client (the VI Client) in the next release of vSphere

So what is the low down and why did VMware make this strategic or tactical move? Well according to VMware they did it for a number of reasons and these are, but not limited to:

  • HTML5 is going to be the replacement in its entirety
  • Getting customers a web client interface that performs well and has the same structure as the current GUI is a priority, as the current one has known performance issues
  • Since 5.5+ more features have been moved into the web client, so this is a natural progression
  • The web client is the way forward and the ONLY way forward in the eyes of VMware
  • vSphere 6.0 already comes with an embedded client based on HTML5, so they are expanding on this
  • No answer on when the next release of vSphere will ship, so there's no exact timeline to prepare for
  • The C# client will continue to be supported until the End Of Life of the vSphere version currently running it (note that 5.0 and 5.1 are almost EOL already, by the way!)

Main concerns from fellow vExperts

  1. We need the C# client to manage the platform, e.g. file uploads etc.
  2. We don't like change
  3. 3rd party plugins – what happens to them? How will they work? Will they work?
  4. Too many tools to manage the same estate

From listening to the concerns, there was a lot of push back from the vExperts. It wasn't so much that we don't like being told by VMware how to manage our environments, but more along the lines that we need VMware to offer us the functionality we are asking for in the real world.

What was also clear was that this wasn't really discussed with the community in depth first, as there were a lot of unanswered questions with regards to the primary concerns of the vExperts and customers. A member of the call also suggested we get a separate dedicated page acting as a compatibility matrix of vendor plugins against the next release version, including the HTML5 GUI.

As architects we will absolutely need this to measure the impact of the HTML5 client on the management of complex vendor-specific workflows and environments. More so if you are doing upgrades to new versions of vSphere, as you may already have established workflow practices and DR runbooks or documentation which could literally all break down if your plugins no longer work. In greenfield deployments you will already have a test plan in place, and in some cases be introducing new ways of managing the platform anyway, so this may not be an issue.

So there you have it, the death of the VI Client is announced, so make the most of it while you can. VMware will have a lot to do to make the new GUI not only offer the same features as the C# client, but also make it quick and a single pane of glass. I wouldn't be surprised if there were features missing and VMware turn around and say use PowerCLI. We will find out soon enough!

Final thoughts

I'll miss the VI Client. It's been a faithful servant to me since the very start of using vSphere and has got me out of jail on a number of occasions when vCenter has crapped itself due to some boo boo made by an admin (me in most instances). Losing this critical functionality and get-out-of-jail-free card in case of a web GUI failure of some description will more than likely not go down well with most of you either. I personally think it's a mistake relying on web services as the be all and end all of management, because web servers fail, and if they do you'll want another way in. You're always taught to make things highly available and to have a backup plan, so why not keep the VI Client in this way, to be used only in emergencies? The new GUI MUST deliver the same functionality as the VI Client if we are to let our friend go into retirement, as we really do need it.

@VMware – The vExperts made it loud and clear on the call: if we lose it, you'd better come up with the goods, and by goods I mean a GUI that performs much faster than the current web client, or you'll have more than a few tough conversations in the future.

Thanks for reading.

Want to know what the HTML5 client fling is about? Click here!

Bilal here:

I think getting rid of the C# client altogether is a bad idea. Yes, by all means move on to the new HTML5 client and make the web client the sole focus….but leave the functionality that the C# client has, and let the people who want to use it, use it.

It's obvious VMware have been moving towards this for a while, but to say you are binning it off before you have a decent production-ready web client up and running for people to use is a bit silly and a hell of a gamble.

I guess in some ways burning your bridges so you can't go back will force you to go full throttle ahead and put everything into it, and that could work out really well.

Graeme here:

Man, there was some mad interference on that conference call! I had a headache by the end of it. I agree with Bilal: VMware should let admins have the choice of either the C# client or the HTML5 client. There's no real reason (in my mind) why they can't replicate what you see in the thick client straight into the H5 client. As admins, we should have a choice of what tool we want to use – for example, regardless of what GUI we use, the CLI will always be an option. I've always HATED the Web Client but LOVED the idea of it. It will be a sad day when I remove the C# client from my machine.

Of course, we have to remember we can still use the thick client until we're fully off 5.5 and 6.0 – the H5 client will be the ONLY way from the next major release of vSphere. I have a feeling that a lot of admins and companies will hold back on upgrading until the H5 client does what it should be able to do from day 1 – and in our experience it won't!!

Posted in Operation Grandslam

New Blog!

If you haven't already checked out the new blog then do so, as my co-bloggers have loads of experience and knowledge to share with you all! I will still maintain this one and post my personal blogs here as well as the ones I do for our group, so fear not, this site will remain for some time. It has been a while since my last post on here, or anywhere to be truthful. It would appear the enthusiasm of my new co-bloggers is infectious and now I'm back on the wagon.

The history behind the new blog is that all of my co-bloggers were in a VMware VCAP-DCD study group, agonising over the exam and testing each other. Over time we all eventually passed the exam, with a few of us taking the step further and going for the ultimate exam, the VCDX. Only one of us has passed it so far, after a few attempts, but we are expecting that number to increase as the guys are as determined as ever. What this means for you is that you'll have a window into what we are really thinking, with no bullshit filters.

So don’t be shy and go over there and take a look!

Thanks for reading!




Operation Quicksand Mission Status – Complete

Well well well, guess who’s back on track? Once again? Wardy’s back, back again! Ok enough of Eminem tunes…

So, as most of you reading this blog will know, part of the journey to the VCDX certification is obviously making sure that you get the pre-requisites completed. If you've been reading since the start of my blog then you've probably listened to me moan about the fact that I've had a few issues trying to pass the VCAP5-DCD for over a year (having attempted three different versions).

The reasons why I had issues were mainly that I do like to ask questions if I think something is a little vague, especially in the real world, as it's a critical part of the requirements gathering process. However, this is not a luxury you have in the exam and, to be frank, nothing technical in the exam is hard; the bit that will bite you on the arse is actually trying to understand what it is they are asking you to do in terms of the requirements and the way they (VMware) lay out the questions.

It used to be that some of the questions in the exam were indeed very open to interpretation and second-guessing as to what's needed. I am pleased to report that since the revamp of the exam early this year, the 5.5 version was a lot clearer this time around, though there are still some gremlins in there waiting to pounce on any self-doubt, which was the case for me! Perhaps if I hadn't been so stressed and paranoid about doing the whole thing all over again, my mind may have been clearer.

I'm not going to lie, I put the hours in trawling all over the forums and study groups, as well as chatting to those who'd completed it and those who hadn't. I was armed to the teeth with knowledge this time, didn't leave anything to chance, and was confident in my knowledge and experience, but not in my exam-taking skills. I'm always the same in exams: I pass, but never do well by my own standards, and it was no surprise this was the case again for my DCD exam too.

I was disappointed with the score of 339, to be brutally honest, but I was relieved to see the Congratulations screen instead of the dreaded "You did not pass this Exam" with which I was all too familiar. I did let out a cheeky "YESSSS!" in the examination room, which prompted a few stares and evils, but sod it, a pass is a pass. There are those reading who I do need to thank for their support, and you know who you are; I don't forget things like that. There were those on Twitter I'd like to thank as well for continued support, and Michael Webster, who after one of my fails was able to steer me back on the right track and pick up the pieces of what was my confidence.

So there you have it: if you haven't guessed already, I passed and am now officially a VMware Certified Advanced Professional 5 in Data Center Design. BOOM!!! So if you've failed and are reading this, keep at it, as you WILL pass eventually with the right support from your fellow professionals and self-belief.


So this means Operation Quicksand comes to a close with mission complete status, but as one hurdle is overcome, another requires jumping. Operation Genesis is up next: the VCAP5-DCA. Allegedly this is the easier of the two exams to pass, however it's been some time since I've been mucking around in my lab, so I have no doubt I'm going to be rusty and lacking hands-on experience with certain new features in 5.5. The one I'm most anxious about is vCenter Orchestrator, as I could never get my head around the logic. The rest I'm reasonably happy with, but I need to go deeper than vanilla installs. I will also need to set up my domain again, plus a few other pre-existing requirements, so that I've got an environment similar to the exam. So that's all folks, keep an eye out for VCDX and DCA updates as they are going to be coming more often now. Till then, ride safe!

Regards to all



Nutanix and the blueprint Part One


As you will be aware from my last post, I will now be starting my VCDX design in an effort to be ready for the October defences in Frimley, UK. I am under no illusions as to the mammoth task that faces me and the toll it will take on me by the time I've had a crack at this.

You can almost compare it to Man in the 1960s deciding it was time to have a go at landing on the moon, so nothing short of a monumental effort is going to get me to my moon. There are going to be a few dummies spat out, tantrums thrown and self-doubt along the way, but sod it, I'm all in. I'm a normal guy, I'm not a genius; I find it a major challenge trying to keep up with all the new technology coming out and doing my best to remember the bits that matter!

If and when I get to the end of this road you can be sure that you too can do it, it just takes some effort and I’ll be letting you know about it all the way through it, with no bullshit and telling it how it is.

So some more good news! I've been contacted by Nutanix and asked if I'd like their help during the process. This help will be in the form of mentorship offered by some of the many VCDX holders in their company. Firstly, I was chuffed to have been considered, and secondly very excited to speak to Mark Brunstad (pretty much the godfather of the VCDX) and get his blessing on this.

To be clear though, I will be doing all the work; no-one is going to be doing it for me and I'll be doing all the hard graft. They will certainly help me prepare for the process, give me a bloody good grilling and tear me to pieces, but to be honest, I damn well need this and look forward to it. Personally I learn best from being surrounded by those of a higher standard, as it raises my game too and I absorb a lot more by seeing others' points of view.

Let’s start then! Well naturally the first thing I will need to do is read the VCDX blueprint which for the Datacenter version is found here:

VMware linky!

The first section essentially tells you the pre-requisites to take the VCDX. As things stand at this point in time, I've still yet to pass the VCAP-DCD and the DCA, however I do have some time to get them done and dusted before October and already have the DCD booked for later this month…..which I should actually be studying for right now on this Saturday night. Slaps wrist, bad man! These posts don't write themselves!

So the pre-requirements are:

  • VCP5-DCV
  • VCAP-DCD – (No easy feat either)

Section 1.2 tells you who should consider doing the VCDX process, and there are no real surprises to be had, as it pretty much says anyone can do it. Big caveat to this, however: if you have a day-to-day job of doing designs for vSphere at enterprise scale, you'll be in a better position, technically and experience-wise, to get through successfully.

Section 2.1 tells you what needs to be in the design and gives a hint at the process you are expected to follow. At a high level the design submission will need:

  • A Conceptual model/design
  • A Logical design
  • A Physical Design

I will write a separate post on my understanding of what Conceptual, Logical and Physical designs are and their definitions, as quite frankly trying to find good examples of these is a struggle, and so is the white paper which you are pointed to by VMware in this:

With respect to the author, who I'm sure is extremely well respected, this document just isn't good enough; it's not in a language that everyone can understand and is very confusing, to put it as politely as I can. The next post I do will attempt to explain, in plain English, what I think the definitions of all three of these are.

The section then goes on to say that writing a design document with more pages than the JANES Weapons Encyclopaedia 2014 (no idea how many it has, but I can assume it's a lot!) will not necessarily mean it's a good design document. In short, this section is telling you to Keep It Simple, Stupid! (KISS). Mental note: it's quality not quantity! Can't believe I actually blogged that; I should get my mind out of the gutter.

Moving on! Section 2.2 explains the process of the design defence IF your design passes the steely eyes of the design reviewers.

A long story cut short, you will need to do the following on the big day:

  1. Give a short 15-minute presentation of an executive summary of your design and the justifications behind the decisions you took to come up with that particular design.
  2. Get "grilled" by the VCDX panel – I've read reports that this isn't as bad as everyone makes out, as the guys/girls on the panel are there to help you show your skills rather than be the enemy trying to shoot you down in flames.
  3. Go through a design problem, which is done as a conversation between you and the panellists and is expected to last 30 minutes.
  4. The last stage is working through a scenario where you are expected to troubleshoot and prove that you're methodical in your approach rather than jumping straight to the fix or root cause.

The whole process is expected to take around 2 hours excluding breaks. To me this doesn't feel like enough considering the sheer amount of time and effort taken to get there in the first place; it's a small window in which the panellists can measure your ability. I guess they are pretty switched on and can work out the blaggers from those that know their onions, but still, 2 hours is all you're getting, so I'd better make the most of it and make a good case to pass.

Sections 2.3 to 2.8 go over smaller details such as your integrity, i.e. that you haven't lied about anything like qualifications etc., or more to the point, section 2.7, which tells you about the retake policy. It's a hard fact of life that very few people make it through the VCDX process the first time. While I don't want to be a glass-half-empty guy, I do accept this is a fact and a statistic which goes against you. In the Army we used to use a saying (one of many) which was:


We also used the seven "Ps", which might sound a little rude but is still something I always use and stick to even now as a "civilian" all these years later:

Proper Planning and Preparation Prevents Piss Poor Performance

If I do these and keep my nerves under control then it’s going to help on the day.

This leads me to section 3, which is THE most important section and has already raised a few things for me to think about more seriously and start practising.

I will cover the rest of the blueprint in part 2 of this post, and also go into further detail on the designs I've got in mind to see which one could fit the blueprint best. Until then, thanks for reading and staying awake!




VCDX Design selection

Operation Grandslam is now under way. I’m forcing myself to do this instead of putting it off while I try and get the VCAPs done and dusted. We’ve all been there when life at work and home consume vast amounts of time and effort as well as will power but the line in the sand has been drawn and I’ve stepped over it.

In the following weeks and months I will be doing an open design submission, unless told I can't due to NDAs or VCDX rules, but at the very least I will be tracking my progress, airing my thoughts and agonising over decisions or processes. The aim of this is so that you can perhaps get ideas or, with any luck, give some feedback and/or support. It's not going to be easy; if it were then everyone would be doing it, wouldn't they?

So this is just a quick post and in the next post I will be looking at the blueprint to see which of these design scenarios I will be going for in order to meet the blueprint requirements.

So these are the three scenarios I'm considering for the VCDX design.

1. IAAS platform (non cloud)

A director of a managed services company wants to transform an existing co-lo data center and start offering Infrastructure as a Service to the current customer base as well as prospective new clients. This solution will need to take advantage of the very low network latencies between two data centers connected by Cisco Nexus switching and dark fibre services. This will be a greenfield deployment in terms of the VMware estate. More details to follow.

2. Online banking platform

An existing customer has requested consultancy for deploying a new online banking platform service in order to keep up with the technology and services being offered by larger UK and global banks. The new infrastructure must utilise the provider's data centers and BGP/MPLS networks, and meet explicit RTOs and RPOs for services and data, though PCI compliance is not required due to the nature of the application delivery.

3. Acquisition and migration

An umbrella company has a business model of acquisitions and expansion, and as such has taken over an insurance company to further increase their service portfolio and catalog. There are tight deadlines to absorb and migrate the new company's infrastructure from the existing sites due to the termination of co-lo and managed services contracts. The online presence and services must not be impacted where possible, with very small windows of downtime available. In addition, there are numerous Windows 2003 servers which will be out of support in the very near future, as well as ageing infrastructure coming to end of life and service. There are additional requirements and constraints to follow…..


So those are the three that I've had to face in real life and can draw upon in terms of experience, but I will need to be tactical in which one to pursue: too complex will increase the attack surface in my design defence, while too simple may not let me score enough points. I will be adding fictional requirements to all of these and detailing the functional and non-functional requirements as well as the constraints, risks and, of course, the assumptions.

Oh and here's the kicker…..I WILL be using hyperconverged technology and it will most likely be Nutanix. The reason for this is partly that I've got experience with Nutanix and their products, but more to do with the fact that I truly believe in the hyperconvergence concept and have seen first hand what it can do for a company, and wished I had it for some of the engagements I've previously had to do. Just to be clear though, I'm not employed by Nutanix, and if I had exposure to VMware EVO:RAIL or PernixData then I'd be seriously considering these too, IF, and it's a big IF in all cases, they meet the REQUIREMENTS.

Until next time. Take care and have fun

Comme Je Fus



VMware Application Dependencies and Entity Relationship Diagrams MK2

OK ladies and gentlemen, it's been a while since my last post and that's primarily down to things like looking after babies and changing jobs. The reason for the job change was primarily that I felt I needed to get more true enterprise experience in order to achieve a true understanding of enterprise architecting. Most if not all of my work to date has been in the SMB space, so the amount of complexity hasn't really been at the point where I felt it could justify me becoming a VCDX.

Anyway, moving on to the subject at hand. You may recall that previously I questioned VMware's stance on what they consider upstream and downstream components to be, as well as their impact upon each other. Well, I still don't agree with the logic, but I concede that it's their trainset and their exam, so I'd best shut my mush and get on with it. I am not alone in this however, as there are many study groups and many people looking for the same answer I've been looking for, trying to determine the actual approach and answer to the likely VCAP-DCD question(s) you are going to get in the exam.

I searched endlessly for VMware-specific documentation on this and there is none to be found. I hope that VMware use this blog post to perhaps create their own version, distribute it to the community for reference, and add the link to the VCAP-DCD blueprints. It was only on re-reading the official study guide on my Kindle that I found the information I was looking for. Unless you've bought this book, you'd never have known this existed.

So, just to clarify in the simplest terms what VMware are trying to get us to learn about the orientation of upstream and downstream components:

The component closest to the end user is considered the upstream entity, and therefore all components supporting this entity are considered downstream as a result.

In plain English, it means that if a user needs to access a website, for example, they would put a URL in their web browser and hit go/enter/search. This web address would first be resolved via a DNS server record, and therefore the external DNS server tasked with this would be the first upstream component (in most cases). Behind this would probably sit a firewall and a DMZ containing load balancers and front-end web servers. These would be downstream components of the external DNS server. This would continue through all the components and layers supporting the web application.

If a DOWNSTREAM component fails, the UPSTREAM components will be affected.***

***This statement assumes that the components themselves are not resilient/fault tolerant, but you will pretty much identify the single points of failure when you do your current state analysis.
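To make that orientation concrete, here is a minimal Python sketch of the DNS-to-web-server chain described above. The component names and the `affected_upstream` helper are my own illustrative inventions, not anything from a VMware tool:

```python
# Toy model of VMware's orientation: the component closest to the end
# user (external DNS) is the most upstream; each entry maps a component
# to the downstream components that support it.
DEPENDS_ON = {
    "external-dns": ["firewall"],
    "firewall": ["load-balancer"],
    "load-balancer": ["web-server"],
    "web-server": [],
}

def affected_upstream(failed, depends_on):
    """Return every component whose support chain includes `failed`."""
    impacted = set()
    for component in depends_on:
        stack = list(depends_on[component])
        while stack:
            dep = stack.pop()
            if dep == failed:
                impacted.add(component)
                break
            stack.extend(depends_on.get(dep, []))
    return impacted

# A downstream failure ripples up to everything it supports...
print(sorted(affected_upstream("web-server", DEPENDS_ON)))
# ['external-dns', 'firewall', 'load-balancer']

# ...but losing the most upstream component affects nothing below it.
print(sorted(affected_upstream("external-dns", DEPENDS_ON)))
# []
```

In other words, the further down the support chain a failure occurs, the more upstream components it takes out, which is exactly the rule in bold above.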

So let’s look at this in a real world example in nice picture form. See below for an Entity Relationship diagram for a multi tiered web application. To the left you’ll see the simple relationship diagram and to the right you’ll see the logical component diagram of what it might look like through from start to finish.


VMware Entity Relationship Diagrams

So the end result here is to remember, for the exam, that you need to understand the order of what the end user is trying to access and how. You then need to find out what relies on what thereafter, or as others may suggest, X depends on Y which depends on Z, etc.


Just remember the rule above in BOLD and you should achieve DCD objective 2.2. Good luck in the exams and please feel free to give feedback on this post. I highly recommend you get the VCAP5-DCD Official Certification Guide, as it will be a great first port of call for all your studies. Paul McSharry has written the book in what I consider a concise, jargon-free manner, and he gives you real-world thinking, so it's a great reference.

That’s all for now folks

Comme Je fus (Ward family motto As I was)



Application Dependency – Upstream and Downstream Definitions

NOTE: Check out the latest on application dependencies with this post

One of the objectives on the VCAP-DCD exam is to be able to demonstrate and build Entity Relationship Diagrams (ERD) and define the upstream and downstream components. What I wanted to discuss with you was how you, my peers, interpret application dependencies and define what you consider as upstream and downstream components.

After my exam I researched it just to check that I’d got the answer right but if what I’m reading through my research is correct, it’s completely the reverse of what I had done in the exam and thus got it wrong.

This is specifically Objective 2.2 in the VCAP-DCD exam blueprints for 5.0, 5.1 and the latest 5.5 version.

In the blueprint you are given a link to the following pdf:

VMware link

It also says "Product Documentation", which is vague and not very helpful when you are trying to look for specifics, but hey ho.

I've read and re-read through this document and cannot see a definition of what the upstream/downstream relationships are, so further investigation was required. You would have thought this would be easy to find, as VMware are excellent at documentation and knowledge bases! I also asked the instructor of my VMware Design Workshop course, and he too confirmed there are conflicting answers to this and, at the time of writing, was unable to give me any solid answer.

If you Google or Bing search this topic you will find people's interpretations of how to approach this from an IT perspective. Just about all the other examples I've seen back my interpretation up. Yes, the only opinion that should matter for the exam is VMware's, but it's still wrong in reality. There, I said it: I actually disagree with VMware, and that's not something I'd say or take lightly. What worries me is that I really hope I'm wrong in my understanding of VMware's interpretation of what the upstream and downstream components are and why. This is why I've taken to this blog to justify my own interpretation and perhaps question VMware's stance on it.

After much painful research and trawling on Google (other search engines are available) I found what appears to be VMware's answer to the definition, at least as purported by a vExpert, and it appears to be repeated in a few other blogs.

“In upstream and downstream relationships anything that happens downstream can have an adverse affect on upstream configuration items.”

Sorry, but I disagree with this. Maybe I'm looking at it too literally, but here we go.

A river or stream runs from the highest elevation to the lowest, i.e. from top to bottom, aka from upstream to downstream. So if we were to chuck some dye in upstream, it would flow downstream and turn the downstream water the dye's colour. Therefore the upstream has a direct impact on the downstream. But if we were to put the dye in the downstream portion of the river/stream, what would be the effect on the upstream section? Bugger all is the answer you're looking for!

So let's take this into the world of technical dependencies with a multi-tiered application like a website. Let's assume I'm using part of the LAMP stack to provide this service. In order of reliance, we have the vBikerblog website, which runs on an Apache server. This Apache server relies on two Tomcat application servers. Each of these Tomcat servers needs a database, and therefore we have two MySQL servers to support this.

All of these components also rely on DNS as well as vSphere, but I've not included these for the sake of simplicity.


Now if you look at the way I've done the hierarchy, you will see that MySQL is at the bottom and the end result, the vBikerblog website, at the top.
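Whatever you call the directions, the failure behaviour itself isn't in dispute, and the hierarchy above can be sketched as a toy Python model. The names are illustrative, and it deliberately assumes no resilience, so losing one Tomcat takes out the Apache server relying on it:

```python
# Each component maps to the components it relies on: the website at the
# top of the hierarchy, the MySQL "foundations" at the bottom.
RELIES_ON = {
    "vbikerblog-site": ["apache"],
    "apache": ["tomcat-1", "tomcat-2"],
    "tomcat-1": ["mysql-1"],
    "tomcat-2": ["mysql-2"],
}

def blast_radius(failed):
    """Everything that directly or indirectly relies on `failed`."""
    hit = {failed}
    changed = True
    while changed:
        changed = False
        for component, deps in RELIES_ON.items():
            # No resilience assumed: losing any dependency is fatal.
            if component not in hit and any(d in hit for d in deps):
                hit.add(component)
                changed = True
    return hit - {failed}

# Knock out one database: its Tomcat, the Apache server and the site
# are all hit, while the other branch keeps running.
print(sorted(blast_radius("mysql-1")))
# ['apache', 'tomcat-1', 'vbikerblog-site']

# Knock out the "roof" and the rest of the house is still standing.
print(sorted(blast_radius("vbikerblog-site")))
# []
```

If a pair of components were genuinely redundant you would swap `any` for `all` on that dependency group, but working out where that resilience actually exists is part of your current state analysis.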

Some of my research has led me to a few analogies for explaining upstream and downstream relationships, and one of them is to think of a house. A house has foundations at the bottom and a roof at the top, with in this case a lower and an upper floor. If the foundations fail then the whole house comes down. If the roof fails you still have a house and an upper and lower floor, albeit a draughty and wet one.


So if we consider the MySQL servers as the foundations and the website as the roof, this fits quite well and makes good logical sense. However, VMware don't see it like this. From my interpretation, they deem the website the upstream component and the MySQL servers the downstream components. In short, this is completely opposite to what I consider logical sense, so I'm trying my best to get my head around why they've gone down this road. I would actively encourage you, the reader, to leave your opinion and explanation in the comments to help me and maybe others out here. There are some very smart cookies at VMware, so there must be a reason, but as to why, I can't see it for the moment, so please do feed back!

So back to this example then. If you take VMware's approach, which way do the arrows point? Well, I think they would be pointing from top to bottom, and with my and others' approach, vice versa. If this is the case then it goes against logic and many other industries' interpretations, so why it's this way I don't know. I really hope that this entire post is a waste of time and that I've got the wrong end of the stick, but it's nice to get it off my chest anyway!

In conclusion to my virtual disagreement, I feel that this desperately needs to be clarified by VMware with documentation, and perhaps some instruction on how to map the dependencies with clearly defined definitions. Building entity relationships can be a critical part of the design process, so if we all do it in a uniform manner then this will mitigate any ambiguity between customers and professionals alike.

If VMware are to use their method as I think I understand it, then they need to use different terminology from upstream and downstream, as it's my opinion that the way they explain it is not logical, and I'm not alone in thinking that either.

I guess at the end of the day, when it comes to your documentation, the customer will understand the context from your explanation, but we could do without the confusion among fellow VM professionals.

Please feel free to leave your comments and explain what I might not be seeing, justify the VMware approach, or add anything that would help your fellow virtual professionals. Ideally I'd like to be pointed to the VMware document defining this in black and white, so post the link if you can!  🙂 Please note that I have posted an update on this subject which clarifies things much better, so check it out!

Comme Je Fus
(As I was)

Don Ward VCP 3.5,4,5.x

Posted in Operation Grandslam

Long time no see!

OK, so it's been a while since I last put anything on my blog, and some people will wonder why I've been off the radar, so to speak. Well, let's cover off a few things I've been up to and you might understand better.

  1. Parenthood

Well, I'm a proud new father, and at the time of starting the vBikerblog my responsibilities were just to support my wife while she breastfed our awesome boy Maximus. I do chores around the house to help out and try my best to keep mother happy – and often get my ear bent by her! That was all well and good, but Max is growing at a rapid rate of knots, becoming more and more demanding, and is now in full-on teething mode, so my wife and I have had to dig in and devote more of our time to him while still maintaining order in the Ward household. We still have to look after two demanding Border Collies too, and I need to work, so time is at an absolute premium, as are my mental and physical energy levels! No one told me just how much they melt your heart when they start blowing raspberries at you and making all sorts of baby noises. Very cute! Check out the video for a little giggle.

  2. Training and learning

I took my first VCAP-DCD exam in January and failed it by just 10 points, having missed the last 10 questions due to a massive balls-up in calculating the time needed for each question. While it was a failure, I also learnt a lot from the experience and the style of the exam questions, and I have since been studying to shore up the areas in which I saw myself as a little weak.

In order to make sure I was prepared to sit the exam again, I recently attended the VMware Design Workshop hosted by QA, right next to the Tower of London. I was under the impression that this course would be geared towards preparing candidates for the VCAP-DCD exam. In actual fact it wasn't directly aimed at the exam at all, but at general VMware design logic and methodology which, by our instructor's own admission, was a little out of date in terms of content.

However, an awful lot of the content is helpful if you've never done architecture work before and will enable you to take a good logical approach to design. A big takeaway for me was the sheer difference in approaches to design, at different levels, among my fellow students. Another nugget of info was that there is now a "back" button in the exam, so you can do the design questions first and then return to the lower-scoring questions afterwards.

With this knowledge, and given my performance on the course, I felt I was ready to take the exam again and promptly booked it for a few weeks later, in the meantime just preparing mentally and covering some technical details.

So exam day came and I had to travel quite some distance to the exam centre (about 200 miles). I elected to take my R1 (Yamaha superbike) as I was going to be on the dreaded M25 motorway, which orbits London. This turned out to be a very good call as, lo and behold, there was an epic jam with congestion all along the sections of road I needed, so being on two wheels I was able to filter through the traffic and get to the exam centre with time to spare.

So I did the obligatory signing-in process and went and did the exam. This time I got to the end with enough time to go back to the questions I wasn't sure about and to perfect my design answers – but then a few things went wrong for me. The first issue I found was that some of the multiple drag-and-drop answers I had placed in the tables were not where I'd left them and were in different sections, so I corrected these again and moved on to the designs. I decided to re-do a few from scratch and found that, for some bizarre reason, when I selected active connections from the drop-down boxes they were now blue dotted lines instead of the solid red. Just to make sure I wasn't going mad, I clicked on redo and tried again, carefully selecting the active connectors, and found it was still showing the blue dotted line.

Anyway, I made the designs as best I could and, with about 5 minutes to spare, clicked confidently on the End Exam button.


To my horror I saw that I'd not only failed but had actually scored worse than on my first attempt.

I was in a state of shock and disbelief, and left the exam room gutted. I did report some of the issues I'd had to reception, but nothing was really done, so I left the centre completely flattened with depression and self-doubt. Just to compound things, the heavens decided to open and pour the entire contents of the Atlantic all over me on the motorbike on the way home, completely soaking me and making me even more miserable.

I'd already alerted my wife to the massacre of my confidence, and she greeted me on my return with a cold bottle of Magners cider and a big hug and kiss. I was despondent and sulked upstairs, and the first thing I did was open a Word document and write down as many details as I could of the design questions and the other questions. I wanted clarification of what was being asked and what I'd answered, so I could see where I went wrong. A post-mortem, as such.

Once I'd downloaded brain memory to internet memory, I shut off the PC and continued to sulk and second-guess what I did or didn't get right. It bugged me a lot and, I won't lie, I was hard to live with or talk to as my mind was in turmoil. I'd listened to all the advice, I'd read all the books, I'd viewed many hours of Pluralsight and YouTube videos, so what the hell did I do wrong? The worst thing about the exam is that the results only give you generic areas to review, i.e. the whole bloody blueprint. It doesn't pinpoint the areas where you are weak so you can go back and reassess and learn, so VMware would do well to implement this in the future.

So after the dust had settled and my mood turned from sulking to getting even, I took to Twitter to ask for help. That in itself was a tough pill to swallow, but I'm man enough to admit my failings; clearly I must have been doing something wrong, and I wanted to put it right ASAP. As with all things in the community, I had no fewer than two VCDXs contact me offering assistance, which blew me away and reminded me why I love Twitter and being in the VM community – everyone wants to help you.

Step up Sir Michael Webster (@VCDXNZ001), my virtual knight in shining armour. Michael set up a WebEx so we could chat about some design concepts and discuss the way I was going about the exam questions. While Michael couldn't give me any specifics of the VCAP-DCD exam due to the NDA, he did convey some critical advice for my next attempt: draw the design out before looking at the available components in the drop-down box. The thinking behind this is that if my design fits the requirements and all the necessary components are available in the selection boxes, there's a high chance I'd get the question correct, rather than be fooled by the red herrings and unnecessary components on offer.

Michael also consoled me with his view that the anomalies I experienced are highly likely due to a bug in the exam. As this is written I still haven't had a reply from VMware Certification to my queries, but I can't and won't dwell on it. The past is the past and I can't change it, but I would like some closure just in case this happens again.

We also discussed one particular concept on which Michael and I agree (I think he did?) that VMware's interpretation is not how we would explain it to a customer. This one item has vexed me because it's possible I got it wrong in the exam, which could have meant the difference between pass and fail. What is it that vexed me, you ask?

Upstream and downstream dependencies!!! ARRRGGHHHH! (Pulls hair out whilst banging the desk.) I will dedicate a blog post to this alone so that others can refer to it and perhaps set me, or the record, straight – so look forward to that.

After Michael and I finished our WebEx, we came to the conclusion that my way of thinking and my technical knowledge might not be at fault; perhaps I was missing the subtleties of the way the questions were being asked, or applying too much real-world thinking to the designs – i.e. instead of trying to save the customer money by putting two different IO profiles on a single array, perhaps just separate them onto two completely different arrays. There's still the question mark over the anomalies, which I don't think helped me one iota.

  3. VCDX design scenario selection

While I still have hurdles to get over, I want to use my time effectively and get the ball rolling. I've selected my first-ever design case, which I was asked to do for a company I used to work for. The reason is that I feel it will fulfil the blueprint requirements, and it's also a chance for me to do it right, having gained a lot more experience since I did it. I've written some initial paragraphs on the scenario and will flesh them out and improve upon them, but you should hopefully get the idea. I'm actually considering doing an open VCDX design submission so that anyone can assist, criticise or offer advice on my design. I think others who are also starting at the beginning, or thinking about doing it, will be able to get some useful ideas for their own applications. I might not be allowed to, though, but I'll find that out in due course.

There’s more to come so keep an eye out especially for the applications mapping post as I want/need your input!

Comme Je Fus
(As I Was)


Posted in Operation Grandslam

vPepsi versus vCoca-Cola



Well, it appears I have to brace myself for what architects dread most – the conversation with the customer as to what's better: VMware vSphere 5.5 or Windows Server 2012 R2 Hyper-V. Before you all start jumping on my nuts: yes, I know it depends on the actual functional and non-functional requirements, blah blah blah. It's just that, as all good architects should, you need to seek other opinions and do some research, not just rely on your own experience. There is a danger that you can inadvertently become biased, so it's important to seek AGNOSTIC and BALANCED advice. See what I did there? Yeah, you got it, and now you can probably see where I'm going with this.

If you do a Google search (other search engines are available!) on Server 2012 R2 Hyper-V versus vSphere 5.5, you are going to see that every man and his dog has an opinion, just like an ars3hol3. For me, though, two stood out. The first was from a gentleman working for Microsoft as an evangelist for their products, and therefore committed to his side of the fence, who has compared key features of each product from his standpoint.

Here’s the original

If you look at his tables and the entries, it looks like pretty damning evidence in the case against VMware. Herein lies the problem: while most of the author's findings are technically correct, you do need to view them in context.

Step up Mr Paul Meehan. A quick note on Paul before we start: he's an independent and agnostic architect in terms of hypervisor vendors. He deals with both VMware and Hyper-V, and so has a balanced view of the hypervisor battlefield. Paul has conveyed his views in his article.

Naturally, everybody then jumps on the bandwagon and starts to add their input. The sad thing is that, in my own opinion, Microsoft have scored an own goal by removing Paul's feedback and trying to rubbish his arguments rather than promoting open engagement.

Paul has some very valid points with respect to the original TechNet article and, whether MS like it or not, a lot of us who work on both sides of the fence tend to agree with him.

How I feel about it is this:

It's an extreme analogy, but let's say that Hyper-V with all its bells and whistles is a BMW M5 and VMware is an Audi RS5. These are both awesome cars, and side by side each can boast performance statistics and specs better than the other for this and that; one may have a feature as standard that its opposite number doesn't. But these cars are still cars at the end of the day. The owners will see them depreciate over time and eventually trade them in for the latest model. Each of these cars will do the vast majority of what the owners want, and that's drive from A to B. The manner in which they get there is up to the owner: they could put pedal to metal and be a real hooligan doing doughnuts and burnouts, or simply enjoy the drive knowing they've got the power there if they need it.

I'd say the vast majority of owners of these cars don't even reach the limits of their performance, and those that do often pay a high price, like crashing. My point is this: there are not many companies reaching the limits of what the hypervisors can offer, and those that are could arguably be said to have implemented the wrong solution and need to address it.

Car drivers generally have brand preferences and personal biases. Marketing has great influence on how they perceive their cars and instils a sense of belonging to one camp or the other. VMware and Microsoft are no different, and this is where it comes back to the title of this blog: vCoke and vPepsi. It's my opinion and personal feeling only that we can draw comparisons with the way Pepsi markets its products, in that they have historically tried to bash Coca-Cola and demean them in some way or another. Coca-Cola, on the other hand, just promotes its own product and doesn't attempt to bash the competition. Why not leave the end user to decide which one's best by tasting both products and making up their own mind? The customer has a perception of what tastes good, so why should this be any different in the virtualisation space?

I've spoken with many, many customers, and they all say the same thing when I ask their thoughts on vendor bashing: they don't like it. Not one iota. If anything it pisses them off and makes them feel alienated. This doesn't just apply to virtualisation but to hardware vendors too.

We must maintain focus, look at what the functional and non-functional requirements really are, and then design a solution that will fit ALL of them.

This might mean many options are available, but then other factors come into play, such as cost, ease of management and long-term strategy, all of which should have been covered by the project goals anyway – and you'd already have a good idea of where the customer's comfort zones are too.

Ultimately, companies that bash their competitors' offerings are, in my opinion, doomed to failure, or at least to less success than they would have if they'd spent more time and effort promoting their own products.

Pepsi appear to have turned a corner and dialled down their negativity towards Coke, and I have a sneaking suspicion they are now reaping the rewards for doing so. Take note, vendors: it's not too late to do the same. Yes, it's business at the end of the day, but ethically I don't agree with it, and maybe the majority of us feel the same way.

As a final word I say this: Microsoft Hyper-V Server 2012 R2 has fired a decent size-11 boot up VMware's arse, and good on them. They are now really starting to up their game and, instead of copying VMware, are looking to offer a different approach. They have slapped VMware's face and laid down the gauntlet. As a true neutral, I can't wait to see what they do next because, between you and me, it's all good news: we will benefit from this heavyweight duel, as the new features should be awesome either way. God, I hope VMware picks the metaphorical glove up and accepts the challenge.


That is all………

Posted in Operation Grandslam

Tortoise and the hare



So, it's been a while since the last update, so here's what's going on. The title of this little blog reflects my current mindset in getting Operation Genesis completed. My heart is saying just rush through all the documentation in the blueprint and go for the DCA exam sooner rather than later. However – and thankfully – my brain is winning on this occasion, telling me to take my time and actually learn, digest and remember what I'm studying. It's important to do this because I've no doubt any corner-cutting now will ultimately come back to haunt me in the VCDX defence, so it's just not worth the risk.

Too often in the past I've geared up for exams by cramming data into short-term memory just to pass, then forgetting it afterwards. I've regretted this before, as there have been odd occasions where I've really needed to remember a certain detail in the heat of battle but come up empty simply because I'd forgotten. I doubt I'm alone here, though, as companies can sometimes force exams on you and expect you to pass first time in order to gain company kickbacks/rewards/discounts, but ultimately it's down to the individual to resist the pressure and do what is right long term.

Speaking of forgetting: in the process of putting together my lab I've found I'm very rusty in what I consider basic operational knowledge of vSphere, and that's a reflection of what I've been doing for the last few years, i.e. design work rather than actual hands-on. This further reinforces my belief that ALL professionals dealing with VMware, be it in design or operations, should have a home lab to keep their skills up to date and sharp. Admittedly vSphere 5.5 is a long way from what I'm used to, i.e. vSphere 5.0, but I should still remember how to mooch around esxcli and do basic tasks. Thankfully, over the last few days I've been brushing up those skills again and getting to grips with PowerCLI and esxcli.

As of Sunday I've taken ownership of a QNAP Pro 500 series array to add to the lab environment, courtesy of a former boss who's supporting my journey, so I've now got an awesome, fit-for-purpose shared storage option – a special mention to Adam Courtney for his help. I've configured it with iSCSI and carved up two 512GB LUNs for the VMware hosts to connect to and use as datastores. I've also allocated the HA heartbeats to one of these datastores and another on the FreeNAS, to provide some level of resiliency.

vCenter Server Heartbeat

In other news, it's been announced that VMware are going to retire vCenter Server Heartbeat but will still support it until 2018, as per this linky:

When I read this I thought it was a mistake on VMware's side, as they've not offered any real alternative other than relying on HA. This seemed a little unusual to me, as VMware normally give you a like-for-like replacement or alternative, so it leads me to think that something imminent is about to be released, or that they are working on something that will make vCSH obsolete. I bloody well hope so because, looking at all my customers and their requirements, we are going to need something pretty soon – or I need to think about other methods of protecting vCenter.

I did some googling to see what the reaction to this was, as well as checking Twitter, and was glad to read this article by a highly respected VCDX holder, Michael Webster, at this link:

As you'd expect from a VCDX, he hits the nail on the head and echoes what most of us are probably thinking: you shouldn't retire a feature that's critical to your solution without offering an alternative that will do the same job, if not better.

My take on this is: OK, what's done is done, we can't change it, so adapt and overcome. So what are the options? Other than those in Michael's blog, how about trying something different? How about (I'll get killed for suggesting this) creating a Hyper-V Server 2012 R2 guest cluster, installing vCenter on top of a Server 2012 R2 OS and relying on the clustering mechanisms from Hyper-V? Just a thought, really, and it needs some investigation as to the impact, but it could be doable. Then there are other replication technologies like Double-Take, where you can replicate in real time to a shadow VM that's ready to go at short notice. Again, this has implications for additional traffic etc., but it's still a possibility. Veeam also offer an "Instant Recovery" feature, where the Veeam backup of a VM can literally be powered on and put into production.

Could this also be tackled using storage snapshots, like NetApp's offerings, which give you the option to revert to a former state? All of these could do with some R&D to ensure there are no gotchas, but my aim here is to think outside the box a little and see what's already on the market.

I guess it comes down to the requirements and the RTOs/RPOs as to which method you'd employ. For most SMBs I think HA will suffice, but the enterprise boys, I'd suggest, will be a lot more sensitive about getting vCenter up and running again quickly.
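As an illustration only – the recovery-time figures below are my own rough placeholder guesses, not vendor-published numbers – here's a quick Python sketch of how you might shortlist vCenter protection options against a target RTO:

```python
# Shortlisting vCenter protection options against a target RTO.
# The rto_minutes values are illustrative placeholders, NOT vendor figures.

options = {
    "vSphere HA restart":                {"rto_minutes": 15, "extra_cost": "none"},
    "Real-time replication (shadow VM)": {"rto_minutes": 5,  "extra_cost": "licence + replication traffic"},
    "Backup instant recovery":           {"rto_minutes": 30, "extra_cost": "backup platform"},
}


def meets_rto(target_minutes):
    """Return (sorted) the options whose assumed recovery time fits the target."""
    return sorted(name for name, opt in options.items()
                  if opt["rto_minutes"] <= target_minutes)


print(meets_rto(20))  # HA and replication make the cut; instant recovery doesn't
print(meets_rto(10))  # only the replication option fits a tight RTO
```

The point of the sketch is the shape of the decision, not the numbers: once you've agreed a real RTO with the customer, the field of viable protection methods narrows itself.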

Nutanix and Atlantis Computing – a match made in heaven?

I've worked on a lot of Nutanix opportunities and really love their approach to virtual architecture. I love new technology that turns convention on its head, and Nutanix was the first real converged platform I related this to. For those who don't know who Nutanix are or what they do, check out this link – and make sure you do, as these guys are going to go big and shake things up, if they haven't already!

In a nutshell, these guys offer hyper-converged storage and compute in a small 2U form factor known as a "block". Essentially you've got up to four servers in a block, each with SSD and HDD, depending on which series model you go for. Under the hood they have their own proprietary Nutanix Distributed File System (NDFS), which handles all the metadata as well as the data. Think of it as software-defined storage. I could go on and on about this platform, but essentially what you need to know is that it really is fast, due to the use of SSD/HDD and the way it accesses hot data, and that it is a very good offering for VDI deployments and for consolidating old hardware. There are some gotchas, like the controller virtual machines that are mandatory to provide the link between the hypervisor/virtual machines and the back-end storage; each requires a minimum of 12GB–20GB of RAM, depending on whether you want deduplication or not. I'll be honest and say the price can be a turn-off when you look at a single block, but when you start scaling up the number of blocks deployed it all starts making big sense in terms of cost savings.

So what am I driving at? Well, those of you still awake will have noticed that I've put Atlantis Computing in the subheading. Click on this linky for more info.

They are another company I'm very excited about, as their product USX is bound to be bought by a big player. The reason is that it can essentially form a new tier of storage placed in memory. This in turn gives you mega IO response and performance, as you'd expect from, say, a Fusion-io card. The difference here is that USX is allegedly platform-agnostic and can create real SAN storage from almost any type of storage, be it local, DAS, NFS etc.

What this means is that you can create very fast SAN storage using older kit, as long as you've got the RAM, i.e. around 64GB+ to start with. This is another software-defined storage solution, and what I was wondering is: what would happen if you placed USX on a Nutanix platform configured with 128GB of RAM? Maybe there is a technical impediment, but if you could do it – the Nutanix platform is already fast, but putting USX on there would piss all over anything else in the market in terms of density/price/performance. I'd love to hear thoughts and feedback from those closer to the product than I am, but either way USX is here, and I've every belief it's going to go through the roof and become very relevant in our virtual futures.

That is all for now.

Comme Je Fus


Posted in Operation Grandslam