Tortoise and the hare


It's been a while since the last update, so here's what's going on. The title of this little blog post reflects my current mindset around getting Operation Genesis completed. My heart is saying to just rush through all the documentation in the blueprint and go for the DCA exam sooner rather than later. Thankfully, my brain is winning on this occasion and telling me to take my time and actually learn, digest and remember what I'm studying. That matters, because I've no doubt any corner cutting now will come back to haunt me in the VCDX defense, so it's just not worth the risk.

Too often in the past I've geared up for exams by cramming data into short-term memory just to pass, then forgetting it all afterwards. I've regretted this before: there have been odd occasions where I've really needed to remember a certain detail in the heat of battle and come up empty. I doubt I'm alone here, as companies can sometimes force exams on you and expect you to pass them first time in order to gain partner kickbacks/rewards/discounts, but ultimately it's down to the individual to resist the pressure and do what's right long term.

Speaking of forgetting: in the process of putting together my lab I've found I'm very rusty on what I'd consider basic operational knowledge of vSphere, which is a reflection of what I've been doing for the last few years, i.e. design work rather than actual hands-on. This further reinforces my belief that ALL professionals dealing with VMware, be it in design or operations, should have a home lab to keep their skills sharp and up to date. Admittedly vSphere 5.5 is a long way from what I'm used to (vSphere 5.0), but I should still remember how to mooch around esxcli and do basic tasks. Thankfully, over the last few days I've been brushing those skills up again and getting to grips with PowerCLI and esxcli.
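For anyone doing the same brush-up, here's a minimal sketch of the kind of PowerCLI session I've been running against the lab. The host address is a hypothetical placeholder for my own lab values, and you'll be prompted for credentials.

# Connect to a standalone lab host (hypothetical address)
Connect-VIServer -Server 192.168.1.50 -User root

# Basic inventory checks: hosts and VMs with their power state and sizing
Get-VMHost | Select-Object Name, ConnectionState, Version
Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB

# Drop into esxcli from PowerCLI to poke around the host
$esxcli = Get-EsxCli -VMHost (Get-VMHost | Select-Object -First 1)
$esxcli.network.nic.list()          # physical NICs
$esxcli.storage.core.adapter.list() # storage adapters, including the software iSCSI HBA

The same checks from the host's own shell are simply "esxcli network nic list" and "esxcli storage core adapter list".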

As of Sunday I've taken ownership of a QNAP Pro 500 series array to add to the lab environment, courtesy of a former boss who's supporting my journey, so I've now got an awesome, fit-for-purpose shared storage option. A special mention to Adam Courtney for his help. I've configured it for iSCSI and carved up two 512 GB LUNs for the VMware hosts to connect to and use as datastores. I've also pointed HA datastore heartbeating at one of these datastores plus another on the FreeNAS box, to provide some level of resiliency.
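For reference, this is roughly how I wired the hosts up to the QNAP from PowerCLI. Treat it as a sketch: the target IP and datastore name are placeholders for my lab values, and the LUN details will differ per array.

# Enable the software iSCSI adapter on each host
Get-VMHost | Get-VMHostStorage | Set-VMHostStorage -SoftwareIScsiEnabled $true

# Point each host's software iSCSI HBA at the QNAP (hypothetical target address)
foreach ($h in Get-VMHost) {
    $hba = Get-VMHostHba -VMHost $h -Type IScsi | Where-Object { $_.Model -match "Software" }
    New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.1.100" -Type Send
    Get-VMHostStorage -VMHost $h -RescanAllHba | Out-Null
}

# Create a VMFS datastore on one of the new 512 GB LUNs
$h   = Get-VMHost | Select-Object -First 1
$lun = Get-ScsiLun -VmHost $h -LunType disk | Where-Object { $_.CapacityGB -gt 500 } | Select-Object -First 1
New-Datastore -VMHost $h -Name "QNAP-DS01" -Path $lun.CanonicalName -Vmfs

The heartbeat datastore preference itself I set in the Web Client, under the cluster's vSphere HA settings (Datastore Heartbeating), picking one QNAP datastore and one FreeNAS datastore.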

vCenter Server Heartbeat

In other news, it's been announced that VMware is going to retire vCenter Server Heartbeat, though it will still be supported until 2018, as per this link:

http://www.vmware.com/products/vcenter-server-heartbeat/

When I read this I thought it was a mistake on VMware's side, as they've not offered any real alternative other than relying on HA. This seemed a little unusual to me, as VMware normally gives you a like-for-like replacement or alternative, so it leads me to think that a release is imminent, or that they are working on something that will make vCenter Server Heartbeat obsolete. I bloody well hope so, because looking at all my customers and their requirements, we are going to need something pretty soon, or I need to think about other methods of protecting vCenter.

I did some googling to see what the reaction to this was, as well as checking Twitter, and was glad to read this article by Michael Webster, a highly respected VCDX holder:

http://longwhiteclouds.com/2014/06/10/vmware-vcenter-heartbeat-dead-but-not-forgotten/

As you'd expect from a VCDX, he hits the nail on the head and echoes what most of us are probably thinking: you shouldn't retire a critical feature of your solution without giving an alternative that does the same job as well as, if not better than, the previous option.

My take on this is: OK, what's done is done, we can't change it, so adapt and overcome. So what are the options? Other than those in Michael's blog, how about trying something different? How about (I'll get killed for suggesting this) creating a Hyper-V Server 2012 R2 guest cluster, installing vCenter on top of a Server 2012 R2 OS and relying on Hyper-V's clustering mechanisms? Just a thought really, and it will need some investigation into the impact, but it could be doable. Then there are replication technologies like Double-Take, which can replicate in real time to a shadow VM that's ready to go at short notice. Again this has implications in terms of additional traffic, but it's still a possibility. Veeam also offers "Instant VM Recovery", where the actual backup of a VM can literally be powered on and put into production.

Could this also be tackled using storage snapshots, like NetApp's offerings, which give you the option to revert to a former state? All of these need some R&D to make sure there are no gotchas, but my aim here is to think outside the box a little and see what's already on the market.

I guess which method you'd employ comes down to the requirements and the RTOs/RPOs. For most SMBs I think HA will suffice, but the enterprise boys will be a lot more sensitive about getting vCenter back up and running quickly.
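If HA is what you're relying on, one cheap mitigation that occurs to me is pinning the vCenter VM to a couple of known hosts with a "should run" DRS rule, so that when vCenter is down you know exactly where to go looking for it. A sketch in PowerCLI, with hypothetical names (vcenter01, esx01/esx02, LabCluster) throughout; note the DRS group cmdlets need a newer PowerCLI release than what shipped with 5.5, so treat this as illustrative.

# Group containing just the vCenter VM (hypothetical names throughout)
New-DrsClusterGroup -Name "vCenter-VM" -Cluster "LabCluster" -VM (Get-VM "vcenter01")

# Group of preferred hosts you'd check first after a failure
New-DrsClusterGroup -Name "vCenter-Hosts" -Cluster "LabCluster" -VMHost (Get-VMHost "esx01","esx02")

# "Should run on" keeps it a soft rule, so HA can still restart vCenter elsewhere if both hosts die
New-DrsVMHostRule -Name "Pin-vCenter" -Cluster "LabCluster" -VMGroup "vCenter-VM" -VMHostGroup "vCenter-Hosts" -Type ShouldRunOn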

Nutanix and Atlantis Computing: a match made in heaven?

I've worked on a lot of Nutanix opportunities before and really love their approach to virtual architecture. I love new technology that turns convention on its head, and Nutanix was the first truly converged platform where I saw that happen. For those who don't know who Nutanix are or what they do, check out this link, and make sure you do, as these guys are going to go big and shake things up, if they haven't already!

http://www.nutanix.com/evolution-of-the-data-center/

In a nutshell, these guys offer hyper-converged storage and compute in a small 2U form factor known as a "block". Essentially you get up to four servers per block, each with SSDs and HDDs, depending on which series model you go for. Under the hood they have their own proprietary Nutanix Distributed File System (NDFS), which handles all the metadata as well as the data; think of it as software-defined storage. I could go on and on about this platform, but essentially what you need to know is that it really is fast, thanks to the SSD/HDD tiering and the way it keeps hot data close, and that it's a very good fit for VDI deployments and for consolidating old hardware. There are some gotchas: the controller virtual machines, which are mandatory to provide the link between the hypervisor/virtual machines and the back-end storage, each require a minimum of 12-20 GB of RAM, depending on whether you want deduplication or not. I'll be honest and say the price can be a turn-off when you look at it in isolation, but when you start scaling up the number of blocks deployed, it starts to make big sense in terms of cost savings.

So what am I driving at? Well, those still awake and reading will have noticed that I've put Atlantis Computing in the subheading. Click this link for more info: http://www.atlantiscomputing.com/solutions/overview

They're another company I'm very excited about, as their product USX is bound to get them bought by a big player. The reason is that they can essentially form a new tier of storage placed in memory, which in turn gives you the kind of massive IO response and performance you'd expect from, say, a Fusion-io card. The difference here is that USX is allegedly platform agnostic and can create real SAN storage out of almost any type of storage, be it local disk, DAS, NFS, etc.

What this means is that you can create very fast SAN storage from older kit, as long as you've got the RAM, i.e. around 64 GB+ to start with. This is another software-defined storage solution, and what I was wondering is: what would happen if you placed USX on a Nutanix platform configured with 128 GB of RAM? Maybe there's a technical impediment, but if you could do it, well, the Nutanix platform is already fast, and with USX on there it would piss all over anything else on the market in terms of density/price/performance. I'd love to hear thoughts and feedback on this from those closer to the product than I am, but either way USX is here, and I've every belief it's going to go through the roof and become very relevant in our virtual futures.

That is all for now.

Comme Je Fus

Don

