Deploying DellEMC VxFlex on GCP with Ansible

Today I wanted to share a video that my colleague and fellow Avenger Masanori Nakamura has created. He is a presales engineer in Japan and is very knowledgeable about our VxFlex product.

DellEMC VxFlex is a software-defined storage solution that can be deployed either in a 2-tier or a hyperconverged architecture. It is typically consumed through appliances or as a fully integrated rack solution (formerly called VxRack Flex).

However, since it is a software-defined solution, it can be deployed on top of virtually any IaaS. In this case, Masanori-san deploys it on top of GCP.

VPLEX and PowerOne Postman collection

In today's post I only want to announce the addition of two new collections to Project Vision:

- VPLEX
- PowerOne

The VPLEX collection has been kindly contributed by a colleague in the US, Ankur Patel. It is very comprehensive, and it uses variables so that you can run it against multiple systems by simply selecting different environments.
The PowerOne collection has only been tested with the Glenn Simulator, not with a real system. Hence you will see that the IP address is hard-coded to localhost.
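The environment mechanism mentioned above is just variable substitution: the collection's requests contain placeholders, and Postman fills them in from whichever environment you select. A minimal sketch of that idea in Python (the hostnames and variable names here are hypothetical, not the ones the collection actually uses):

```python
import re

def resolve(template, environment):
    """Replace Postman-style {{var}} placeholders with values
    taken from the selected environment dictionary."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(environment[m.group(1)]), template)

# Swapping the environment retargets the same request template.
prod = {"base_url": "https://vplex-prod.example.com"}
lab = {"base_url": "https://vplex-lab.example.com"}
print(resolve("{{base_url}}/vplex/clusters", prod))
print(resolve("{{base_url}}/vplex/clusters", lab))
```

This is why a single collection can be reused against many systems: only the environment changes, never the requests themselves.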
Visit the Project Vision repo page in GitHub to retrieve the collections:
You can see more details on how to get the most out of the collections in this article
And finally you can see Project Vision in action in this article

Installation of CSI drivers in DellEMC arrays

Kubernetes can be hard work. The success of distributions like OpenShift, Rancher or PKS is due to the promise of making it easier. Still, a large part of the user base chooses to deploy plain vanilla Kubernetes (which in fact still has the largest market share of any on-prem Kubernetes distribution) to take advantage of its super fast-paced innovation.

However, this is usually a path that leads to long hours trying to figure out how to make things work and overcoming many problems. This is painful even though the community out there is massive. With this in mind, one of my colleagues (Deepak Waghmare) took it upon himself to simplify and streamline some of these tasks as much as possible. He created an Ansible collection with several roles. This collection is now published in Galaxy.

The collection primarily aims at simplifying the installation of the CSI drivers for DellEMC storage arrays. All these drivers follow the same installation pattern by using Helm, so with a little extra effor…
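The Helm-based pattern the roles automate looks roughly the same for every driver: add a chart repo, refresh it, then install the chart with array-specific values. A sketch of what one role would run (the repo URL, chart names and namespace are my own illustrative assumptions, not the exact values the collection uses):

```python
def helm_install_cmds(driver, namespace, values_file):
    """Return the sequence of shell commands a role would run
    to install one CSI driver chart. All names are hypothetical."""
    return [
        ["helm", "repo", "add", "dell", "https://dell.github.io/helm-charts"],
        ["helm", "repo", "update"],
        ["helm", "install", driver, f"dell/{driver}",
         "--namespace", namespace, "--create-namespace",
         "-f", values_file],
    ]

# Only the driver name and values file change from array to array.
for cmd in helm_install_cmds("csi-powermax", "csi-drivers", "values.yaml"):
    print(" ".join(cmd))
```

Because only the chart name and values file differ between arrays, a single parameterized role can cover the whole portfolio.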

Driving PowerOne API with Ansible

The Ansible community is massive, so it is no surprise that version 2.9 came out with more than 3,600 modules. This number keeps growing, and it is motivating a change in how modules will be distributed in the future. You can read more about these changes from Jeff Geerling himself.

However, sometimes you will still come across either:

- some functionality that hasn't been implemented in a module, or
- a target for which there are no modules

What do you do then? You have a few options:

- create your own module. This can be done in Python, and even if you are not a Python guru you can find many tutorials that don't look intimidating at all
- use the "shell" or "command" modules to run some other script
- if your target can be managed through a REST API, use the "uri" module
REST APIs are the basis for many automation tasks nowadays. An advantage of building automation through the API is that the workflow you build is portable to other too…
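To make the "uri" option above concrete, this is a minimal sketch of the same idea in plain Python: build an authenticated JSON request against a REST endpoint. The base URL, path, token and payload are hypothetical placeholders, not a real PowerOne endpoint:

```python
import json
import urllib.request

def build_request(base_url, path, token, method="GET", payload=None):
    """Construct an authenticated JSON request (without sending it),
    the same kind of call Ansible's uri module would issue."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        base_url.rstrip("/") + path,
        data=data,
        method=method,
        headers={
            "Accept": "application/json",
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,  # hypothetical auth scheme
        },
    )

req = build_request("https://array.example.com/api", "/v1/volumes",
                    token="dummy", method="POST",
                    payload={"name": "vol01", "size_gb": 100})
print(req.get_method(), req.full_url)
```

In a playbook, the uri module takes the same ingredients (url, method, headers, body) as task parameters, so anything you prototype this way translates almost one-to-one.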

Storage provisioning with ServiceNow and Ansible AWX

My colleague (and fellow Avenger) Andrew Vella has a strong developer background with experience on SaaS offerings, so when we came up with the idea of using ServiceNow to drive our Ansible playbooks he was very quick to put his hand up.

This combination of Ansible and ServiceNow is an integration that we see many customers trying to build. Ansible's growth is phenomenal, and it is quickly pulling ahead of other configuration management competitors. ServiceNow is also dominating its market.

Andrew has promised to share his learnings in a blog when time allows. But in the meantime I can explain that he:

- created a developer instance in ServiceNow. This is free, and you can keep it active as long as you don't forget to use it at least every 10 days
- installed the MID server inside the datacenter
- created a custom catalog with entries and an approval workflow
- configured REST API calls to trigger playbooks that are stored in AWX (the community version of Ansible Tower)
- made the same catalo…
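The REST calls in the steps above target AWX's job template launch endpoint (`POST /api/v2/job_templates/<id>/launch/`). A rough sketch of the kind of request ServiceNow would send; the host, credentials, template id and extra_vars are hypothetical, and this is not Andrew's actual implementation:

```python
import base64
import json
import urllib.request

def launch_request(awx_host, template_id, username, password, extra_vars=None):
    """Build the authenticated POST that launches an AWX job template,
    optionally passing survey answers as extra_vars."""
    url = f"https://{awx_host}/api/v2/job_templates/{template_id}/launch/"
    body = json.dumps({"extra_vars": extra_vars or {}}).encode()
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + creds,
        },
    )

req = launch_request("awx.example.com", 42, "admin", "secret",
                     extra_vars={"volume_size_gb": 100})
print(req.full_url)
```

The catalog form's fields map naturally onto `extra_vars`, which is what lets one job template serve many different provisioning requests.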

Cloud data co-location prototype

The cloud data co-location use case is a very interesting one. Potentially the biggest reasons why organizations choose not to move a workload to the cloud include:
- compliance / regulatory requirements
- lock-in risks and the difficulty/costs of migrating to another public/private cloud
- unpredictable and/or high costs

These reasons also play an important part in the ongoing workload repatriation activity we are seeing in the market. The idea that "cloud is not a destination, it's an operating model" is finally sinking in. But when we talk about destinations, it doesn't all have to be public or private. There is a third location that plays an interesting part.
Last year I participated in a project that studied the feasibility of implementing "cloud data co-location". This basically consists of hosting a traditional storage array in a datacenter that is physically close to a public cloud and connecting to it through a high-throughput, low-latency link. Some providers lik…

Demo of Kubernetes Persistent Volumes with XtremIO CSI

In this post I covered the basics of providing persistent storage to Kubernetes pods using the CSI driver for XtremIO. Organizations are increasingly looking at running stateful applications in Kubernetes, and the CSI driver is quickly gaining maturity, so this topic is becoming more and more relevant.

In this video I have recorded a quick demo that shows CSI in action and demonstrates the basic premise that the lifecycle of the PV (Persistent Volume) is independent of that of the pods it attaches to.

For instructions that you can follow along with, don't forget to visit the original article.