Rishidot Research


deciphering the modern enterprise


Cloud Computing

Taking Stock of Public Cloud Vendors

Krishnan Subramanian · January 2, 2018

A very happy new year from Rishidot Research. As we enter 2018, the pressure on CIOs to modernize their IT is going to increase significantly; any further delay will hurt the business bottom line. At Rishidot Research, we have our hands full with a research agenda focused on the cloud-native computing landscape, and CIOs will find our research valuable for their modernization strategy. As a first step, and to set the context for our discussions in the coming months, we will take stock of the major public cloud vendors and where they stand against their competition.

Amazon Web Services (AWS) is the leading cloud vendor by a huge margin. The scale and momentum of the recent AWS re:Invent 2017 conference are a clear indication of this trend. AWS' user conference is now comparable to those of traditional IT vendors, and the number of CIOs and other decision makers I came across at the event indicates that enterprises are eager to embrace public clouds. It is only natural to start our analysis with AWS.

Amazon Web Services

  • AWS leads all cloud providers in the sheer number of services it offers to customers. Its push towards serverless, which started two years ago, is now moving into other areas such as containers (AWS Fargate) and data services (AWS Aurora Serverless). At Rishidot Research, we deeply believe that higher-order abstractions play a critical role in enterprise IT (we were among the early believers in PaaS and ran a conference dedicated to it), and we strongly advocate "serverless" abstractions as the path forward (see the sketch after this list). AWS serverless abstractions other than Lambda are still in their early stages, and we expect these services to become even more seamless and autonomic (e.g., users shouldn't have to specify resource limits; AWS should handle that seamlessly while offering better ways to limit runaway costs)
  • Even though AWS was slow to push container services and machine learning, it is now pushing hard in these two areas. AWS Fargate is how container services should be and may provide competition to many CaaS and PaaS vendors in the market. I would characterize AWS' "giving choice to customers" message as an underdog marketing approach in areas where it is not the market leader. Containers and ML are both areas where AWS has yet to win mindshare; by touting "user choice", AWS is trying to catch up with other cloud providers
  • One of the criticisms I heard from users during the AWS re:Invent 2017 conference concerned the interoperability of AWS services. I expect AWS to focus on fixing these issues and solving the pain points faced by users of multiple AWS services. I would also like to see Amazon cut down the complexity of consuming its services. The sheer number of services Amazon offers makes strategy planning a daunting task for decision makers, and anything Amazon can do to tame this complexity beast will be helpful. AWS Fargate is a good example. At Rishidot Research, we believe in the composability of the different layers in the IT stack, but it shouldn't come at the cost of increased complexity in consuming the services
  • In terms of both service offerings and customer momentum, AWS has the lead in the market. One trend I observed during the recent re:Invent is that the size of the IT team makes a difference in whether organizations go all in with AWS: large businesses with smaller IT teams are more open to doing so. If you are an enterprise decision maker wanting to go all in with a single cloud provider, the size of your IT team will shape the decision.
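
To make the serverless abstraction concrete, here is a minimal sketch of an AWS Lambda function in Python. The handler(event, context) signature is the standard Lambda contract; the event field used here is hypothetical. The developer supplies only this function, while provisioning, scaling, and patching of servers are handled by the platform:

    import json

    def handler(event, context):
        # 'event' carries the trigger payload (e.g., an API Gateway request);
        # 'context' exposes runtime metadata such as remaining execution time.
        name = event.get("name", "world")  # hypothetical event field
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Hello, " + name}),
        }

Note that nothing in this code names a server, an instance size, or a scaling policy; that is precisely the higher-order abstraction we advocate.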

Microsoft Azure

  • Microsoft Azure is steadily increasing its public cloud market share. Azure adoption is driven mostly by Microsoft's existing relationships with IT decision makers, but Microsoft is also changing minds among OSS developers and the younger generation of developers through a strong OSS push. As the largest open source contributor in terms of lines of code (a consequence of open sourcing .NET), Microsoft has gained the credibility needed to make Azure palatable to OSS developers. By releasing services like Cosmos DB, Microsoft has been working hard to drive developer adoption of Azure cloud services
  • Microsoft may not be a leader in containers, but it is investing heavily in hiring Kubernetes talent. Many pundits (including myself) have pointed to AWS's embrace of Kubernetes and the CNCF as an indication of standardization around Kubernetes, but Microsoft's investment is critical because it is now in a position to keep Google from pushing Kubernetes in a direction that suits only Google
  • I would like to see Microsoft focus more on making its services more "serverless". It has the necessary components, including Service Fabric, to move in that direction, and I expect announcements along these lines at the next Build conference
  • Microsoft is still an underdog when it comes to ML and AI workloads. It has solid technology that could make Azure an attractive destination for such workloads, but I have yet to see a coherent go-to-market strategy
  • With Azure Stack (AS), Microsoft told a great hybrid cloud/edge computing story at last year's Build conference. The Carnival Cruise Line use case is a perfect example of Azure Stack in action. The way Microsoft is integrating serverless technologies with Azure Stack will make AS an attractive option for edge computing use cases

Google Cloud

  • After a slow start, Google Cloud got attention through two open source projects, Kubernetes and TensorFlow. With Kubernetes, Google is uniquely positioned to help enterprises "run like Google". Even though Google showcased some customers at last year's Google Cloud user conference, it has yet to publicly demonstrate continued success with enterprises. I hope to see more customers at this year's conference
  • Even though Google got early momentum with Kubernetes, AWS and Azure have since caught up in terms of mindshare (in the case of Azure) and market share (in the case of AWS). I would love to see Google showcase technology that makes its cloud more attractive than both AWS and Microsoft for container workloads
  • Google Cloud is a clear leader in ML and AI workloads on the cloud. It took a more opinionated approach with TensorFlow, and that is paying off, mainly due to the success of TensorFlow as an OSS project. Google needs to demonstrate this year that it can capitalize on this early success
  • Google has an advantage in big data services, but AWS and Azure are catching up fast

IBM Cloud

  • IBM started its cloud push much earlier than Oracle, and even before Google or Microsoft showed their seriousness about enterprise cloud adoption. In spite of its acquisitions around cloud data centers, data services, and even DevOps, IBM still lags the top three cloud providers in both mindshare and market share (at least behind AWS and Microsoft). More than anything else, there is no clarity on IBM's cloud journey: after betting early on OpenStack and Cloud Foundry, and now on Kubernetes, IBM has yet to demonstrate a clear path to success in the cloud. In 2018, I expect to see a more coherent cloud story from them
  • IBM Watson was supposed to help IBM gain ground in cloud computing. Even though there are some customer stories based on Watson and IBM Cloud, we need to hear more in 2018

Oracle Cloud

  • Oracle was the last of the top cloud vendors to enter the Infrastructure as a Service business. It is still in the early stages, even though it has made some announcements related to containers and container orchestration. I expect Oracle to take a deeper plunge into the Kubernetes ecosystem, even though it has yet to demonstrate that it can work well with other vendors on an open source project.
  • Oracle needs to shore up higher-order services if it is to compete effectively with AWS and Azure. It cannot rely on its database service alone as the path to cloud success; it needs to compete with AWS on the breadth and depth of higher-order services. I look forward to hearing from Oracle on this topic in 2018

We are also closely tracking both Alibaba Cloud and Huawei Cloud. We notice that Alibaba Cloud is adding new features fast, but we are waiting to hear about its US traction. We will include these two cloud providers in our future analysis.

Disclosure: AWS, Microsoft, and Google paid for my travel and stay to attend their user conferences in 2017.

Multi-Cloud Is Inevitable

Krishnan Subramanian · June 6, 2017

Picture Credit: Rackspace

In the early days of cloud computing, I hoped for a more federated model of cloud providers because of my strong belief that an oligopoly of cloud providers is bad for customers. I hoped that open source infrastructure software like OpenStack, CloudStack, etc. would lower the barrier for service providers to compete with Amazon Web Services (AWS). For reasons beyond the scope of this blog post, that didn't happen, and AWS gained a significant lead in the IaaS market. But the last two years have been reshaping the cloud computing market. With an aggressive push from Microsoft and Google, along with the public cloud initiatives of Oracle and IBM, we are beginning to see a multi-cloud world.

Has Amazon slowed down?

Not at all. Rather, Amazon is doubling down on its cloud push, as can be seen from the new services it offers every year (especially during the re:Invent conference) and its aggressive hiring. Amazon has been setting trends in the market: whether it is easy-to-use object storage like S3, powerful database services, or the newer AWS Lambda-based Functions as a Service, AWS has been plowing ahead at full speed. What has changed is the speed with which other cloud providers are executing. A multi-cloud marketplace is starting to emerge.

What changed?

First and foremost, the competitors to AWS started executing well. Not only have Microsoft and Google made their clouds easily consumable, they have also beefed up the number of powerful higher-order services available to their cloud customers; examples include Google Spanner and Microsoft Cosmos DB. More importantly, platforms like Cloud Foundry and Kubernetes have made it easy to use multiple cloud providers by putting an abstraction on top of the infrastructure, removing the friction from multi-cloud use cases.
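
As a concrete illustration of that abstraction, here is a minimal sketch using the official Kubernetes Python client: the same Deployment spec can be submitted unchanged to any conformant cluster, whichever cloud hosts it. The deployment name, labels, and container image below are illustrative, not from any particular deployment:

    from kubernetes import client, config

    # Load credentials for whichever cluster is currently configured;
    # the code below is identical for GKE, AKS, EKS, or an on-prem cluster.
    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.13")]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Because the application is described against the Kubernetes API rather than any one provider's API, switching providers becomes an operational decision rather than a rewrite.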

When you combine these developments with the fact that the enterprise cloud market is only starting to take shape, the market is still wide open for all the cloud providers. And since the user experience with multi-cloud is becoming more and more seamless, we are headed for a future where multi-cloud is the norm.

What about the Hybrid cloud?

Hybrid cloud is a different beast: it helps applications bridge between the data center and the public cloud, and its use cases are different. Disaster recovery, cloud bursting, and a bimodal approach to applications are the right use cases for hybrid cloud. With multi-cloud, one can go all in with cloud-native applications, but across different cloud providers. One major trend I am seeing among organizations embracing multi-cloud is the use of different cloud providers for different workloads, driven mainly by the strengths of each provider's services. For example, many organizations are considering Google Cloud for big data and machine learning applications while running their MySQL-backed web applications on AWS. Unlike the hybrid cloud trend, the multi-cloud trend is driven mainly by which cloud is better suited to a specific kind of application.

Summing Up

Multi-cloud is real, and the trend is driven by the varied strengths of cloud providers, the higher-order services they offer, and the ability to right-size infrastructure resources. But one of the key factors organizations should watch out for in the multi-cloud world is how they manage their data. Those who control the destiny of their data will emerge successful in the multi-cloud world.

On Robustness And Resiliency

Krishnan Subramanian · January 24, 2013

When I talk with enterprises about cloud computing and tell them how the cloud requires a different approach to designing applications, I get the biggest pushback. Most large enterprises are used to the idea that expensive, powerful hardware that seldom fails is the only way to build robustness into their IT (and thereby ensure business continuity), so they are appalled by the new way of designing applications for the cloud. They feel they are being forced to subscribe to a completely new paradigm in order to take advantage of the cloud. In spite of the marketing gimmicks from traditional vendors, they understand that cloud computing is more about resiliency than robustness, and that bothers many enterprise IT managers. They have real difficulty changing their mindset from "failure is not an option" to "failure is not a problem".

I was recently watching one of Clay Shirky's talks at Singularity University, in which he used the example of cell phone towers to explain the difference between robustness and resiliency in the crowdsourced world: the robustness needed for the survival of Encyclopedia Britannica versus the resiliency needed in the case of Wikipedia. It got me excited to make another attempt at explaining to the enterprise community why they need to shift their thinking from robustness to resiliency. In short, I want to argue that this mental shift is not a new paradigm that the cloud forces upon enterprise IT; rather, it is an old, well-tested idea for dealing with scale.

The example I am going to pick is the cell phone tower example Clay Shirky used in his talk. When you look at the construction of a cell phone tower, its base is broad and built for robustness, but above a certain height the tower is not built for robustness. Instead, it is built for resiliency against heavy winds. Ensuring robustness at such heights is not just expensive but practically impossible. Once construction engineers understood this difficulty, they relied on the idea of resiliency to build the tall towers that have become the backbone of mission-critical networks. They didn't give up on the idea of tall towers because they faced a mission-critical problem; rather, they figured out a way to make those towers resilient to wind. It is a no-brainer that the construction industry was trained to focus on robustness when building smaller towers and buildings. But when it came to taller structures (scale), they realized it made sense to focus on resiliency rather than robustness. In short, doing so not only saved a great deal of money (economics) but also let them innovate faster (agility) than waiting for technology to improve enough to achieve robustness at such heights.

The point I am trying to highlight is that it doesn't make sense to be married to the traditional IT paradigm of robustness. It worked well for legacy applications. However, for applications at scale, it doesn't make sense to wait for infrastructure that operates at scale while also offering the robustness of the traditional world. I am not saying we can never succeed in building robust infrastructure at scale. I am only arguing that it doesn't make sense to wait for such infrastructure when designing applications for resiliency can help any organization innovate faster. Nor am I advocating reliance on hardware or data centers that fail every other hour. I am simply emphasizing the need to make the mental shift, as history has already shown that one can trust resiliency over robustness for mission-critical needs.

Again, it is perfectly OK to shop around for service providers who offer some level of robustness through SLAs. Even then, it is important to understand and accept that servers fail, and that designing apps for failure is the right approach to building modern applications (see the sketch below). Whether we like it or not, legacy applications are on their way out. The globalized nature of the economy, along with mobile and social, is pushing organizations to move, slowly but eventually, away from legacy applications toward modern ones. It is important for enterprise IT managers to understand and accept this fact, change their mindset quickly, and start embracing "modernity" in their IT. Organizations waiting patiently for robust infrastructure at scale will eventually end up getting disrupted.
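
To make "designing for failure" concrete, here is a minimal sketch in Python of one common building block: retrying transient failures with exponential backoff and jitter, instead of assuming the underlying server never fails. The call_service function in the usage line is a hypothetical stand-in for any network-dependent operation:

    import random
    import time

    def with_retries(operation, max_attempts=5, base_delay=0.5):
        """Run 'operation', retrying transient failures with backoff."""
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except (ConnectionError, TimeoutError):
                if attempt == max_attempts:
                    raise  # give up after the final attempt
                # Exponential backoff with jitter avoids thundering-herd retries.
                delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
                time.sleep(delay)

    # Usage (hypothetical): result = with_retries(lambda: call_service("https://example.com/api"))

This is the application-level counterpart of the cell tower: an individual request is allowed to sway, and the system as a whole stays up.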

Anuta Networks Unveils its nCloudX Platform

lmacvittie · January 22, 2013

Anuta Networks unveils a solution to more effectively manage cloud and SDN-enabled networks

Anuta Networks, a Silicon Valley-based, self-proclaimed "network services virtualization" startup, is entering the market with its inaugural offering, the Anuta nCloudX Platform.

The solution essentially provides network service abstraction and orchestration for cloud service providers and large enterprises engaged in the implementation of private clouds.

Anuta says of its initial offering:

The breakthrough Anuta Networks nCloudX platform simplifies and automates the complete lifecycle of network services in complex, heterogeneous networks across multi-vendor, multi-device, multi-protocol, and multi-hypervisor offerings for physical and virtual devices in both private cloud and public cloud deployments. The solution reduces network services delivery time from weeks to a few hours, building efficient software automation on top of a combination of both the existing hardware-controlled networks and the new generation of SDN technologies.

One of the concerns with regard to potential SDN adoption is, of course, the potential requirement to overhaul the entire network. Many SDN solutions today are highly disruptive, which may earn them a tepid welcome at best in established data centers.

Solutions such as Anuta's nCloudX Platform, however, aim to alleviate the pain points of managing data centers composed of multiple technologies and platforms by abstracting services from legacy and programmable network devices. The platform then provides integration points with Cloud Management Platforms and SDN controllers to enable more comprehensive cloud and data center orchestration flows.

Anuta's nCloudX Platform follows the "provisioning" and "cataloging" schemes popular in Cloud Management Platform offerings and adds drag-and-drop topological design capabilities to assist in designing flexible virtual networks from top to bottom. Its integration with OpenStack takes the form of a Quantum plug-in, essentially taking over the duties of multiple Quantum plug-ins for legacy and programmable devices. Given the pace at which Quantum support for many network devices is materializing, alternative methods of integration may give early adopters more complete options for implementation.

Its ability to integrate with SDN controllers from Nicira (VMware) and Big Switch Networks enables organizations to pursue a transitional, hybrid strategy with respect to SDN, minimizing potential disruption and providing a clear migration path toward virtualized networks. Its cataloging and design services enable rapid onboarding of new tenants and a simpler means of designing what are otherwise complex virtualized networks.

While much of Anuta Networks' focus lies on cloud, its ability to virtualize network services is likely to provide value to organizations seeking to automate their network provisioning and management systems, regardless of their adoption of cloud or SDN-related technologies.


Anuta Networks was founded by Chandu Guntakala (President & CEO), Srini Beereddy (CTO), and Praveen Vengalam (Vice President of Engineering). The company focuses on delivering network services virtualization in the context of cloud and SDN-related architectures.

How CA is Automating the Cloud

lmacvittie · December 13, 2012

CA Technologies’ AppLogic Suite enables Automated Service Delivery, #Cloud Style #devops

Last month I had an opportunity to sit down with CA Technologies to talk cloud, with a focus on cloud management, and discovered a robust portfolio of automation and orchestration solutions.

CA has been a staple name in the enterprise for more than a decade, so it's no surprise the company understands the need for solutions that are as turnkey as possible: pre-integrated and pre-packaged with plenty of easy buttons, as long as there remains room for customization. The fine folks at CA are focused on hybrid service delivery these days, spanning physical and virtual devices across public and private environments unified by "one point of control." This includes a catalog of services (a registry, if you're coming from a SOA perspective) as well as pre-integrated and pre-tested process flows that enable rapid deployment of automated processes.

AppLogic is its cloud management platform (CMP), designed as a turnkey solution for service providers. It relies on an environment composed of commodity components and attempts to minimize the amount of code required to build, maintain, and use a cloud environment.

Its framework is capable of executing scripts to perform mundane deployment and configuration tasks such as mounting a volume or configuring a VLAN. Its model is similar to that of CloudStack and OpenStack, but it uses its own language, ADL (Application Definition Language). CA's advantage here is its visual editor for constructing flows and tasks, something lacking in other efforts.

It claims to provide an entire "cloud stack" including:

  • Server management
  • Embedded SAN
  • SDN
  • Resource quotas
  • Design studio
  • Metering
  • Security

Its claim to SDN is much like other, pre-existing solutions that leverage an automated and dynamic configuration paradigm. It automatically creates connections between virtual machines and the Internet, leverages its own DPI for security, and maintains consistent latency both ingress and egress from the virtual machine. Network virtualization is really CA's SDN game here, as the fabric created by AppLogic uses its own packet encapsulation to enable route-domain isolation, security, and bandwidth enforcement.

Its embedded SAN is interesting in that it's a completely software-based construct that simulates a block-level SAN built from direct-attached storage (DAS).

While CA is aiming AppLogic at service providers, it's easy to believe that a large enough enterprise would find value in a cloud management platform that is more fully fleshed out than competing offerings.





