Posts by krishnan:
- Integration with multiple data sources
- Powerful filtering capabilities in the UI to blend data across various sources and functions to gain insights
- Powerful machine learning capabilities
- The RackN team has vast experience in infrastructure provisioning and has put that expertise into building this powerful platform.
- Decoupled approach to automation makes the platform lightweight and simple. Keeping orchestration separate lets users pick any tool of their choice, whether it is Ansible, Terraform, Puppet, or Chef
- The success of Kubernetes, and the hooks offered in recent versions, helps the RackN platform provide bare metal infrastructure for containers; Kubernetes clusters can be deployed without the additional overhead of orchestration tools
- They have a distinct advantage over competitors with their powerful workflow automation feature
- They have to overcome resistance to change. Most data center people are used to tools like Cobbler and Foreman, and RackN needs to convince them of its value. Essentially, they will be fighting human inertia
- In this era of cloud, they need to make a convincing case that Baremetal as a Service offers competitive ROI
- Containers are getting big, and RackN's integration with Kubernetes gives them an opportunity to emerge as the infrastructure layer for containers
- We have seen evidence that companies move back to their own data centers from the public cloud when they reach a certain scale (think Dropbox). RackN, with their competitive ROI, can emerge as a strong player in this trend
- Infrastructure companies can easily find a partner (or a potential acquisition target) in RackN
- Companies like Red Hat and Canonical with their server provisioning platforms will compete hard against RackN. But RackN has an advantage in terms of heterogeneity in the operating systems they support
- Public cloud market could grow big putting pressure on companies in the datacenter space
- Oracle's cloud strategy involves basic IaaS, which competes with AWS on an "enterprise-centric" pricing strategy and vendor claims about better performance, and "PaaS" (quotes used to differentiate Oracle's definition of PaaS from the traditional industry definition), which includes their middleware offerings and database service. At this point in time, their differentiating factor from the industry leader AWS is pricing
- A set of container-based services on top of their IaaS to compete with the container offerings of every other cloud provider. This includes a Kubernetes-based container service, Oracle container registry and Oracle container pipelines
- Announcement of an open source Functions as a Service platform. It is early-stage software rather than a service on top of Oracle IaaS. However, with their middleware tools and IaaS, this could become an Oracle cloud service in the future
- Announcements regarding AI strategy and blockchain tools in their cloud
- Oracle 18c, their enterprise database offering with automation based on machine learning and enterprise-grade SLAs
- Powerful NLP engine to handle machine data, add context and give customization options to users
- Starting with Splunk data (with an investment from Splunk, of course) gives them an easy on ramp to enterprise IT
- Their initial focus on Security will help them get the attention of IT decision makers
- The Autopilot feature, which provides a more proactive approach to security, might serve as a model for future autonomic computing platforms
- They are a startup pushing a newer technology in the enterprise market. The barrier to entry for startups in enterprise is high. However, their partnership with Splunk should help them
- They provide on-premise deployments, which makes it difficult for the learning engine to continuously improve. But they are tapping into anonymized metadata to help the learning engines learn from user behaviors. This will work well with modern enterprises, but they may have trouble convincing enterprises in highly regulated verticals. This is not a weakness for Insight Engines alone but for any company trying to build AI systems that can be deployed on-premises. To overcome it, Insight Engines, as a pioneer in this space, has to convince customers to share their metadata. It is a potential weakness but also an opportunity for them to emerge as thought leaders
- Insight Engines has the first mover advantage, and they are attacking low-hanging fruit (machine data and security) with a more powerful NLP engine. They can easily broaden their product portfolio going forward
- As we embrace modern stacks with deeper and deeper levels of automation, ML and AI are going to be the next wave of innovation. The early innovations in this kind of autonomic IT will come around user interface and user experience. Insight Engines is well positioned to take advantage of the trend
- Many established players collect a lot of machine data (including Splunk), and it is a logical next step for them to pursue low-hanging fruit like NLP for UI/UX. Though it is a threat, it is also an opportunity
- Open source NLP engines could disrupt the market. OSS need not come only from traditional IT companies; it can also come from end customers who develop ML and AI engines for their internal use. There is an opportunity for Insight Engines to lead here too, but OSS by startups is not easy.
- John Allwright, Pivotal Inc
- Rob Bissett, Virtustream
- Scott Fulton, The New Stack
- Bryan Friedman, Pivotal Inc
- Krishnan Subramanian, Rishidot Research (Moderator)
- If you are OK outsourcing your infrastructure decisions to Pivotal/VMware, betting on Pivotal is the way to go
- If you prefer Pivotal's approach to application platforms over, say, OpenShift, this will help you with a multi-cloud strategy
- If you are not sure, we strongly recommend you do the homework on the acquisition costs, lock-in costs, training costs, etc. to make a decision. There are other multi-cloud platforms available in the market
Accelerite, the Silicon Valley based company focussed on Hybrid Clouds and Big Data, today announced the next version of their data analytics product, ShareInsights 2.0. ShareInsights is their self-service data analysis product focussed on taking the grunt work away from business users and making it easy for them to gain the critical insights their organization needs. The product offers data preparation (ETL), OLAP, visualization and collaboration in a single interface, making it an end-to-end stack for data analysis.
Ever since big data infrastructure matured, driven mainly by open source technologies, the focus has shifted to data analytics and machine learning. From offering superior customer experience to gaining critical business insights from disparate sources to developing product roadmaps, data analytics is becoming a core competency of any modern enterprise. The biggest pain points felt by modern enterprise decision makers are in data analysis, data transformation and data collection. The big expectation from decision makers is a tool that seamlessly breaks down the data silos across a disparate set of data sources. One of the biggest asks from the business to enterprise decision makers is a self-service analytics tool that takes the groundwork out of getting the data ready for analysis.
Company and Product
ShareInsights is Accelerite's big data analytics tool, focussed on offering business users a self-service way to gain insights from large volumes of data. Accelerite also has the Rovius hybrid cloud platform and Concert for IoT. Unlike Rovius, which came through an acquisition, ShareInsights was built from the ground up with a single goal: simplifying data analysis and removing any pain that can slow business users down. The key focus of the product is speed, whether it is the speed of getting started with analytics on their platform or the speed of the platform itself. The platform can run on Hadoop or Kafka, easily integrating with the existing tools in your organization. It accelerates the lifecycle from infrastructure operator to business user by streamlining the flow from data preparation to transformation to visualization.
With ShareInsights 2.0, it is easy to process large volumes of data from multiple sources, whether it is a CSV file or an API from a third-party service. It is easy to blend data from across multiple functions inside the organization and gain insights beyond what is usually available in legacy analytics tools. They have also included a vast library of machine learning algorithms, making it easy even for business users to run machine learning models on top of their data and gain critical insights.
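To make the cross-source blending idea concrete, here is a deliberately tiny sketch of the kind of join-and-aggregate step such a tool automates. ShareInsights does this through its UI rather than code, and the datasets, field names and targets below are invented for illustration:

```python
# Toy illustration of cross-source blending: combine rows from a CSV-style
# export with records from a (hypothetical) third-party API, then aggregate.
# All data and field names here are invented; this only shows the shape of
# the work a self-service analytics tool takes off the business user.

csv_orders = [  # rows as they might arrive from a CSV upload
    {"region": "east", "revenue": 1200},
    {"region": "west", "revenue": 800},
    {"region": "east", "revenue": 400},
]

api_targets = {"east": 1500, "west": 1000}  # as if fetched from an API

def blend(orders, targets):
    """Aggregate revenue per region and compare it against targets."""
    totals = {}
    for row in orders:
        totals[row["region"]] = totals.get(row["region"], 0) + row["revenue"]
    return {
        region: {
            "revenue": total,
            "target": targets.get(region),
            "met": total >= targets.get(region, 0),
        }
        for region, total in totals.items()
    }

result = blend(csv_orders, api_targets)
```

The point of a product like ShareInsights is that a business user gets this blend, plus the visualization on top of it, without writing the glue code at all.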
The typical customer expectations from a tool like ShareInsights are:
ShareInsights 2.0 appears (from the demo we used for our evaluation) to solve these needs, and we expect the tool to meet the needs of modern enterprises. We will continue to watch their progress and talk to customers to gain better insights on their product in the future.
Yesterday at DockerCon EU, Docker announced support for Kubernetes in Docker Enterprise Edition, Docker Community Edition, its desktop apps, and the Moby project. This is a significant shift for a company that almost broke the open source community around the then Docker project when they pushed the hooks for their orchestration and management plane into the containers under the "batteries included but swappable" marketing campaign. Since then, the wind has blown in the direction of Kubernetes at the orchestration level, and the conversation has effectively moved from standardization around containers to standardization of the orchestration plane. In this post, we will discuss the implications of this announcement for the market and how it impacts IT decision makers.
Docker’s foray into Kubernetes World
Yesterday Docker pre-announced the availability of Kubernetes on Docker platforms and the Moby project, citing the shared roots between the Docker and Kubernetes communities. They also announced that they would make vanilla Kubernetes available and stay close to the most recent version, instead of following the Red Hat model of cutting stable releases for OpenShift Container Platform. According to Docker, there will be better collaboration between the Moby project and the Kubernetes project. End users get the option of selecting Kubernetes or Swarm for orchestration.
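Docker's pitch is that a single Compose-format stack file can target either orchestrator. As a sketch (exact tooling details were not final at pre-announcement time, and the service name and image below are illustrative), such a file looks like this:

```yaml
# Minimal Compose-format (v3) stack file. Under Docker's announced model,
# the same file could be deployed to Swarm or to Kubernetes on Docker
# EE/CE, with the orchestrator chosen at deploy time.
version: "3.3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 3
```

The appeal for existing Docker shops is that their stack definitions carry over unchanged while the orchestration plane underneath becomes a choice.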
The State Of Developer Platforms
It is all about application platforms. How do you empower developers in your organization to seamlessly deploy apps and ensure faster time to market? How organizations enable them depends on the abstraction, which, in turn, depends on the nature and requirements of the application being deployed. The early days of cloud saw the IaaS+ vs PaaS debates, and we see similar trends in the era of container native workloads. Kubernetes is fast gaining mindshare, driven by the declarative approach it offers to the automation of container native infrastructure. The quest to pick the right abstraction for various applications still sees the same kind of demarcation we saw in the early days of cloud computing: IaaS+ (driven mainly by Kubernetes, even though Mesosphere DCOS and Docker Swarm are competing platforms) vs the platform abstraction at the developer layer enabled by platforms like OpenShift and Pivotal CloudFoundry (picking Pivotal CloudFoundry specifically because I don't see any other credible vendor in that ecosystem) vs serverless or Functions as a Service offerings. The usage patterns range from monolithic and web apps on IaaS+, to modern apps including microservices on developer-focussed platforms like OpenShift and CloudFoundry, to event-driven microservices on serverless/FaaS platforms.
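The declarative approach that drives Kubernetes mindshare is easiest to see in a manifest: you state the desired end state and the control plane converges to it, instead of scripting the steps yourself. A minimal sketch (names and image are illustrative; the API version varies by cluster release):

```yaml
# Minimal Kubernetes Deployment. You declare "three replicas of this
# image" and the control plane creates, replaces, and reschedules pods
# as needed to keep the cluster matching that declaration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
```

This is the IaaS+ abstraction in practice: the developer still thinks in containers and replicas, not in application-level constructs the way a PaaS or FaaS user does.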
The announcement by CloudFoundry that Kubernetes will become the container runtime for the CloudFoundry platform, combined with Docker's announcement that Kubernetes will be one of the choices in the orchestration plane, puts Kubernetes at the core of container native application platforms. Kubernetes, by itself, has limited impact, but it is emerging as the core component of modern day platforms, whether IaaS+ or modern PaaS or FaaS. Both Pivotal CloudFoundry and Docker are positioning their support for Kubernetes as giving a choice to their customers. While this may be true in the short term, there is a high chance that Kubernetes will emerge as a standard in container orchestration and become a standard component of any developer-centric platform.
In that sense, Kubernetes is fast emerging as a standard for container orchestration. But we want to discount any notion that Kubernetes has won the platform wars. The platform market is wide open, with many workloads still in VMs and Kubernetes adoption in production still in its early stages. Functions as a Service (as a public cloud service), or a FaaS platform that is multi-cloud and agnostic of the orchestration layer, may take the steam out of Kubernetes just as Kubernetes took the wind out of Docker's momentum.
Considerations for IT Decision Makers
This makes the decision much easier for IT decision makers and helps them consolidate their platform choices without worrying about whether a platform supports Kubernetes. If your organization has already invested in the Docker Platform, this makes it easy to run a mixed environment where Kubernetes manages dev and test clusters and Docker Swarm runs production. The next version of Docker Enterprise Edition and Docker Community Edition will make this easier for your organization. If you are not a Docker shop and want a choice in container orchestration, it makes sense to go with the Docker Platform. Otherwise, there are other choices from established vendors like Red Hat OpenShift or Pivotal's CloudFoundry platform. Between Red Hat OpenShift and Pivotal CloudFoundry, the decision is mostly cultural. If you are an IT-centric organization, Red Hat OpenShift Container Platform is well suited to your needs. If you are a developer-focussed organization, Red Hat's OpenShift Online, OpenShift Dedicated, or Pivotal's CloudFoundry are better options. Depending on your organization's tolerance for betting on startups, there are other options like Mesosphere DCOS, Rancher Labs, Heptio, and many others. But if your end goal is to embrace Functions as a Service, you could still use containers to encapsulate the backend services; we would strongly recommend that you bet on multi-cloud, container-orchestration-agnostic platforms. It doesn't make sense to embrace Kubernetes just to use FaaS.
Docker's move into Kubernetes is the next logical step after they failed to capitalize on the momentum behind their container mindshare. It also makes them a much easier acquisition target, as every big company has bet its modern stack strategy on Kubernetes. It will be interesting to see where Docker goes from here as Steve Singh takes full control, with a new round of funding expected soon.
RackN, the startup based in Austin, launched their platform in beta last week. RackN aims to simplify infrastructure provisioning through a layered approach that decouples provisioning, control, and orchestration, giving users more flexibility without losing simplicity. In this research note, we will analyze their platform.
Data center infrastructure provisioning is an old trade, helping enterprises provision data centers to meet their IT needs. However, with the advent of cloud, large enterprises and data center providers want to provision their data centers just like Amazon and Google provision their own. The flexibility and speed enjoyed by the web-scale cloud providers give them a unique ROI advantage that traditional data centers cannot match. However, over the past few years, tools (some new, others an evolution of traditional provisioning tools) have enabled seamless provisioning of data center infrastructure, leading to a new category called Bare Metal as a Service. This is partly due to the evolution of data center priorities to match the web-scale cloud providers and partly due to the success of containers in the enterprise. Bare metal is a more natural fit for containers than virtualized environments (which are more of a stop-gap arrangement), and the success of Kubernetes has brought newfound interest in Bare Metal as a Service.
The RackN platform is focussed on infrastructure automation but takes a more layered approach to it. It decouples provisioning, management, and orchestration into different layers, thereby simplifying the processes while also giving customers flexibility in the orchestration tools they want to use. This composable approach to automation, coupled with a powerful workflow feature based on their library, helps anyone get started with provisioning in five minutes and realize tremendous ROI in the process. The RackN platform brings a level of automation to the underlying infrastructure that makes data center provisioning more competitive in the cloud world.
Cobbler, Foreman, Canonical MaaS, Matchbox
RackN is in an interesting position, with tremendous hype around containers and the performance advantage of Bare Metal as a Service in a container-dominated world. Their simplicity puts them in an advantageous position compared to their competitors. But they need to gain mindshare (and eventually market share) to compete effectively.
Disclosure: RackN was a Rishidot Research client in the past
Oracle, the enterprise giant of the legacy era, hosted their annual user conference, Oracle OpenWorld, last week and shed some more light on their cloud strategy. Oracle made announcements focussed on cloud computing, Artificial Intelligence, and Blockchain, but it came across more like an organization trying to jumpstart their vehicle to catch up with the competition than a thought leader pushing innovation. Oracle is almost a decade late to the cloud game, and their efforts to compete are still focused more on marketing than on showcasing substance. In this analysis, let us dive into Oracle's strategy for the modern enterprise stack.
After dismissing cloud for the better part of a decade and then calling their legacy enterprise applications cloud, Oracle started focussing on Infrastructure as a Service to take on AWS, Microsoft Azure, Google Cloud and IBM Bluemix. They built an infrastructure service from the ground up, tapping into AWS and Azure engineers and focussing on compute, storage, and network. They then expanded their offerings to include containers. Here is Rishidot Research's SWOT analysis of Oracle's IaaS strategy from earlier this year.
Oracle OpenWorld 2017 Announcements
Oracle made many announcements at this year's OpenWorld, and we are highlighting some of the important ones on their cloud offerings.
Oracle is building an infrastructure as a service offering with compute, storage and network, and they are adding container services to the mix. Compared to the top three cloud providers (AWS, Microsoft Azure and Google Cloud), Oracle Cloud is still at a barebones stage when it comes to the depth of its offering. We expected a set of higher order services on top of their IaaS, but we didn't see any announcements for newer services or even a coherent roadmap to match the depth of services at the other three providers. Oracle spent the news cycles around OpenWorld focussing on a strategy that is more about reducing their bleeding than convincing newer customers that Oracle Cloud is the infrastructure for innovation. They spent far too much of the Larry Ellison keynote on their pricing strategy relative to AWS rather than showcasing innovation that could make their competitors sweat. Even their pricing strategy was more about convincing existing customers of Oracle database and applications to use their IaaS than about enticing newer customers to embrace their cloud. We think the pricing strategy is old-fashioned, focussed more on enabling their salespeople to close big deals than on pricing for the modern era.
It is important for the market to have Oracle as a strong player but, to compete effectively, Oracle has to move at full speed to build depth in the services they offer on top of IaaS. Building iteratively is not going to help them close the gap with the top three providers or give decision makers confidence that betting on Oracle IaaS is a smart choice. Between now and the next Oracle OpenWorld, I would love to see Oracle add a wide range of higher order services so that enterprise customers can really innovate on top of Oracle Cloud. Modern enterprise CIOs are more focussed on innovation than on cost savings or iterative performance improvements. They need a powerful infrastructure on top of which their developers can innovate. It is critical for Oracle leadership to understand this need and build a compelling offering to outcompete AWS, Azure and Google Cloud.
Oracle's container strategy is on the right path, but the lack of higher order services is going to hinder developer adoption of their container service. They do offer a suite of tools to manage containerized applications from development to production, but it is still barebones, and they have their work cut out to make this offering as compelling as Amazon ECS or Google Container Engine.
I am glad to see Oracle talking about AI and Blockchain as part of their modern stack, and I hope they have a production-ready set of tools available by next OpenWorld.
Recommendations For Enterprise Decision Makers
If you are an Oracle customer wanting to migrate your applications to the cloud, it makes complete sense to consider Oracle IaaS for your migration needs. However, this is recommended for migrating existing applications rather than building net new applications; Oracle has a limited set of services for building next-gen applications. Wait for their offerings to mature before using Oracle Cloud for newer applications.
If the Oracle tech stack is not critical for your applications, AWS, Microsoft Azure and Google Cloud have the wide range of services needed for modern applications. We strongly recommend these providers for your next-gen applications at this point. Oracle can still evolve fast to compete with these providers by increasing the breadth and depth of their higher order services, but they are not there yet.
In spite of their late start, Oracle has shown seriousness and commitment towards a more coherent cloud strategy. They still have a long way to go before they can catch up with their competitors. Right now, their IaaS is quite attractive for migrating existing applications built on the Oracle stack because of the aggressive pricing, but their cloud is not recommended for net new applications. This may change between now and the next Oracle OpenWorld if they accelerate rapidly, either by building or by acquiring companies, to offer higher order services. We will have to wait and see. Rishidot Research recommends that enterprise decision makers closely watch Oracle's roadmap for the next year before betting their strategy on Oracle Cloud.
Insight Engines, the San Francisco based startup focused on making machine data actionable, announced the general availability of Cyber Security Investigator and also showcased how Amazon Alexa can be used to query Cyber Security Investigator. In this note, we will analyze this announcement.
AI in the enterprise is relatively new. Even though enterprises are slowly embracing machine learning and other AI models to dig deeper into their customer data and make business decisions, there has been very little progress in using AI to take advantage of machine data. There are plenty of analytics solutions that help operations teams optimize their decision making. But the market is still in its infancy when it comes to using ML to automagically run operations, or using AI technologies like NLP to build a better user experience on top of machine data. Imagine how much more optimally DevOps could be done if developers, or even other stakeholders like business users, could use NLP to interact directly with machine data. These are just the beginning, and we cannot yet imagine what AI will do for autonomic computing given our current understanding of the landscape.
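To make the NLP-over-machine-data idea concrete, here is a deliberately tiny sketch of the pattern such products implement: translating a constrained natural-language question into a log-search query. Insight Engines' actual engine is far more sophisticated; the phrase patterns, field names, and Splunk-like query syntax below are all invented for illustration.

```python
import re

# Toy natural-language-to-search translator. A real product does entity
# resolution, context handling, and ranking; this only shows the general
# shape: match a constrained English phrase, emit a structured query.
PATTERNS = [
    (re.compile(r"failed logins? (?:from|by) (\w+)", re.I),
     lambda m: f"search action=failure user={m.group(1)}"),
    (re.compile(r"errors? in the last (\d+) hours?", re.I),
     lambda m: f"search level=error earliest=-{m.group(1)}h"),
]

def to_query(question: str) -> str:
    """Translate a constrained English question into a search query string."""
    for pattern, build in PATTERNS:
        match = pattern.search(question)
        if match:
            return build(match)
    raise ValueError(f"cannot interpret: {question!r}")
```

For example, `to_query("Show me failed logins from alice")` yields a structured query the underlying search engine can run, which is exactly the kind of on-ramp that lets non-experts interact with machine data.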
Insight Engines is an interesting startup in the up-and-coming field of AI in enterprise IT. It is early in the market, but it offers enterprises the potential to run IT more optimally and to make better use of in-house talent by involving more stakeholders and reducing the learning curve.
Yesterday, we hosted a Virtual Panel on VMworld 2017 talking about the news that came out of VMworld 2017 in Las Vegas last week. The panelists are:
We discussed many topics ranging from the recent cloud announcements by VMware, Multi Cloud Strategy, Pivotal Container Service, Enterprise use of Kubernetes and whether BOSH can emerge as a standard for infrastructure services orchestration. Watch the video below.
Yesterday, at VMworld 2017, Pivotal Inc announced Pivotal Container Service and a partnership with Google for hybrid cloud. Pivotal Container Service (PKS) is based on the Kubo project, which integrates Kubernetes with BOSH, the infrastructure orchestration plane for CloudFoundry. Pivotal Container Service is the commercial offering for Kubo, and the partnership with Google will allow customers of Pivotal Container Service to use a hybrid environment spanning VMware infrastructure inside the data center and Google Container Engine (GKE) on the public cloud.
What it means for Pivotal
Pivotal CloudFoundry (PCF) has gained good enterprise traction in the past several years (which is reflected, in some sense, in the revenue claims made by Pivotal over the past few years). However, the momentum around Docker containers since 2014 and around the Kubernetes project in the past two years is adding pressure on Pivotal as they march towards their rumored IPO. The Pivotal CloudFoundry platform got its traction in the market from the developer centric approach Pivotal took in building it. Even though Kubernetes has operational roots, developer centric platform offerings in the Kubernetes ecosystem, like Red Hat's OpenShift and others, are creating competition for PCF in the market. Based on our conversations with enterprise decision makers and with vendors, it is pretty evident that Pivotal sales teams face questions about Kubernetes as they go into the market. Clearly, Pivotal needs a response to the Kubernetes story.
One way to compete is to highlight the technical strengths of the CloudFoundry platform against Kubernetes and find a way to sell into enterprise accounts. But the momentum behind Kubernetes is too strong for Pivotal to spend their sales cycles fighting it. A smarter move is to support Kubernetes and add a layer of abstraction to hide its complexities. This is exactly what Pivotal is trying to achieve with this announcement. The key thing to notice here is BOSH as the glue to the Kubernetes world (more about that later).
What it means to VMware
Ever since public clouds pulled the rug out from underneath VMware, they have been struggling to tell a credible public cloud story. Finally, they are narrowing in on a hybrid cloud story with a multi-cloud component. Moreover, containers are turning out to be an Achilles heel in VMware's cloud playbook. The reality facing VMware is the need for a credible story involving public cloud and containers, and in spite of recent announcements at VMworld, they are far away from having one. With Pivotal's investment in BOSH and VMware's efforts to build a credible hybrid story in the multi-cloud world, there is an opportunity in front of them. They have started with a partnership with AWS, where VMware Cloud Foundation is available for customers to use. Let us be blunt and call out the customers who will benefit from this partnership: existing VMware customers wanting to go the AWS route without much disruption. If VMware wants to attract new customers who are doubling down on the cloud, they need to go much beyond v-services on AWS and other public cloud providers. They need to make vSphere the orchestration plane for infrastructure services (public or private). Selling multi-cloud infrastructure services is difficult because of the user experience problem with multi-cloud infrastructure, and this becomes even more damning when VMware is not one of the public cloud providers in this multi-cloud world. This is where Pivotal's investment in BOSH comes in handy. If Pivotal, through their application platform route, can make BOSH the standard for orchestrating infrastructure services in the multi-cloud world, VMware, with their v-services as the front end, can remain a credible player attracting newer customers focussed on digital transformation.
What it means to Kubernetes
To be blunt, nothing. The Kubernetes community doesn't care how the software is packaged. Whether it is VMware or Red Hat or AWS or Microsoft or any of the startups in the community, all they care about is having Kubernetes in more places. But I have long argued that Kubernetes is Google's trojan horse into the enterprise, meant eventually to get enterprise workloads onto Google Cloud. This strategy is nothing new for Google: they successfully used Chromecast as a trojan horse to gain share in the home media room market. It is a similar gameplay with enterprises. Google doesn't care who takes Kubernetes into enterprise data centers; all they care about is providing an easy on-ramp to Google Cloud in their competition against AWS and Azure. This partnership with Pivotal provides them another opportunity.
What it means to Enterprise Customers
If you are a satisfied VMware shop, your IT modernization story runs through VMware and you are well taken care of. If you are a Pivotal customer, you just put a check against Kubernetes on the Modern Enterprise checklist. If you are a customer wondering what is right for you, here are your choices:
In short, Pivotal’s offering is a credible path to a digital transformation involving a multi-cloud story but it is important for you to decide if you want BOSH to be your multi-cloud orchestration engine.
Virtual Panel on VMworld: We are hosting a Virtual panel to discuss the announcements at VMworld on Tuesday, Sept 5th 2017 at 11 AM PST. You can watch it live here.
Information Technology is always focussed on abstractions as a way to remove complexity and simplify IT use for end users. IT evolution can be directly correlated to the quest to abstract at higher layers. Whether it is an evolution from binary programming to programming in higher level languages or virtualizing the servers for operations efficiencies or an evolution from IaaS to PaaS for developer productivity, the quest to abstract away complexity, at times at a cost of losing some flexibility, has been the path of IT evolution. In this post, I will talk about how abstractions play a role in the multi-cloud world.
Abstraction & Operational Efficiencies
Starting with server virtualization, the focus has been on using software on top of physical infrastructure to remove operational complexities and provide a management plane for operations staff to operate at scale. Cloud computing took it to the next level by removing the bottleneck in provisioning, thereby improving developer productivity along with operational and resource efficiencies. Beyond compute, SDN and NFV helped abstract physical networks and made them programmable. Software Defined Storage decoupled the underlying physical storage from end users and made it easy for developers to access storage.
Multi-cloud brings additional operational complexity to users, but various efforts are under way to reduce that pain. Cloud management offerings remove much of it by placing a control plane between the user and the underlying clouds. Projects like BOSH from CloudFoundry are also attempting to simplify the complexity associated with multi-cloud.
Abstraction & Developer Productivity
We have always seen developer-focused abstractions bring increased productivity and agility to the software delivery process. Whether it is the evolution of higher-level languages in the early days of computing, a platform abstraction encapsulating runtimes and libraries, or PaaS, improving developer productivity has always been a key driver for IT. Platform as a Service offerings like Heroku, Engine Yard, CloudFoundry and OpenShift enabled abstraction at a level above infrastructure, encapsulating runtimes, middleware and developer services along with infrastructure, and giving developers an API to push their application code. Everything else happened automagically, and this approach to an abstraction on top of compute helped organizations maximize both operational efficiency and developer productivity.
Let us now look at how abstractions help developers in the multi-cloud world. With platforms like OpenShift (or other Kubernetes-based platforms) and CloudFoundry, organizations can take advantage of multi-cloud while giving developers the same interface to push their applications. These platforms bring simplicity while also providing seamless portability of applications, since the platforms themselves can be deployed on multiple clouds as well as on premises. Portability of the platform across clouds, along with a standardized API that abstracts compute in a multi-cloud scenario, is a no-brainer for any Modern Enterprise. Projects like Kubernetes and CloudFoundry are leading the way in giving developers this flexibility in a multi-cloud world without requiring them to learn each cloud's services and associated APIs. Since every application has a touch point with compute, this abstraction over compute increases developer productivity in a multi-cloud scenario.
Most applications also have a touch point with storage, and in a multi-cloud world where the storage services offered by the various cloud providers differ, an abstraction offering a standardized API is essential. S3 defined the de facto standard for object storage on AWS, but the object storage services on other clouds do not offer an S3-compatible API. Minio provides such an abstraction: it runs natively on all the major cloud providers on top of their storage offerings and exposes an S3-compatible API to developers.
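To make the idea concrete, here is a minimal sketch in Python of how an S3-style abstraction lets application code stay identical across clouds. The endpoint names and the `object_store_endpoint` helper are illustrative assumptions for this sketch, not real Minio deployments or part of any vendor's API.

```python
# Hypothetical sketch: one S3-style interface regardless of the underlying cloud.
# The non-AWS endpoints below are placeholder names for assumed Minio instances.

CLOUD_ENDPOINTS = {
    "aws":   "s3.amazonaws.com",              # native S3
    "gcp":   "minio.example-gcp.internal",    # assumed Minio instance on GCP
    "azure": "minio.example-azure.internal",  # assumed Minio instance on Azure
}

def object_store_endpoint(cloud: str) -> str:
    """Map a cloud name to its S3-compatible endpoint."""
    try:
        return CLOUD_ENDPOINTS[cloud]
    except KeyError:
        raise ValueError(f"unsupported cloud: {cloud}")

# Application code then looks the same on every cloud, e.g. with an
# S3-compatible client library (connection details omitted):
#   client = Minio(object_store_endpoint(cloud), access_key=..., secret_key=...)
#   client.put_object("my-bucket", "report.csv", data, length)
```

The point of the sketch is that only the endpoint lookup varies per cloud; the calls the application makes against the object store stay the same, which is exactly the portability argument made above.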
Among the three key infrastructure components, compute, storage and network, very few applications touch the network directly. Gaming or streaming applications that need multicast, and other niche applications requiring access to load balancers or similar infrastructure components, are the kinds of applications that need access to the networking layer. Some legacy applications forklifted to the cloud will also require it. SDN offerings like OpenContrail, or VMware's NSX (built in part on assets from its acquisition of Nicira), are good examples of abstractions that give developers a simplified interface without worrying about the complexities of the underlying physical networks. For most other applications, a platform abstraction like CloudFoundry or Kubernetes can encapsulate the (virtual) networking layer and remove the network entirely from the "field of view" of the developers.
In a nutshell
Abstractions are critical in a multi-cloud world to maximize developer productivity and keep business agility unaffected by the complexities of multi-cloud. If you are an enterprise customer planning a multi-cloud strategy, figuring out the right developer abstraction, along with the right operations management layer, is going to be critical.
Disclaimer: Minio is a client of Rishidot Research