Posts by lmacvittie:

    Anuta Networks Unveils its nCloudX Platform

    January 22nd, 2013

    Anuta Networks unveils a solution to more effectively manage cloud and SDN-enabled networks

    Anuta Networks, a Silicon Valley-based self-proclaimed "network services virtualization" startup, is entering the market with its inaugural offering, the Anuta nCloudX Platform.

    The solution essentially provides network service abstraction and orchestration for cloud service providers and large enterprises engaged in the implementation of private clouds.

    Anuta says of its initial offering:

    The breakthrough Anuta Networks nCloudX platform simplifies and automates the complete lifecycle of network services in complex, heterogeneous networks across multi-vendor, multi-device, multi-protocol, and multi-hypervisor offerings for physical and virtual devices in both private cloud and public cloud deployments. The solution reduces network services delivery time from weeks to a few hours, building efficient software automation on top of a combination of both the existing hardware-controlled networks and the new generation of SDN technologies.

    One of the concerns with regard to SDN adoption is, of course, the potential requirement to overhaul the entire network. Many SDN solutions today are highly disruptive, which may earn them a tepid welcome at best in established data centers.

    Solutions such as Anuta’s nCloudX Platform, however, aim to alleviate the pain-points associated with managing data centers comprised of multiple technologies and platforms by abstracting services from legacy and programmable network devices. The platform then provides integration points with Cloud Management Platforms and SDN controllers to enable implementation of more comprehensive cloud and data center orchestration flows.

    Anuta’s nCloudX Platform follows the popular "class of provisioning" and "cataloging" schemes prevalent in Cloud Management Platform offerings and adds drag-and-drop topological design capabilities to assist in designing flexible, virtual networks from top to bottom. Its integration with OpenStack takes the form of a Quantum plug-in, essentially taking over the duties of multiple Quantum plug-ins for legacy and programmable devices. Given the pace at which support for many network devices is being added to Quantum, such an alternative method of integration may give early adopters a more complete set of options for implementation.
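
    For readers less familiar with the mechanics, a Quantum plug-in is essentially a Python class that answers the network and port CRUD calls Quantum makes and translates them into requests against an external controller or orchestration platform. The skeleton below is a hypothetical sketch of that shape, loosely modeled on the Quantum v2 plug-in interface; the NCloudXClient class, its REST endpoints, and the plug-in class itself are illustrative assumptions, not Anuta's actual code.

        # Hypothetical sketch: a Quantum plug-in shim that delegates network
        # operations to an external orchestration platform. The client class,
        # endpoints, and field names are assumptions for illustration only.
        import json
        import urllib.request


        class NCloudXClient:
            """Minimal REST client standing in for a vendor orchestration API."""

            def __init__(self, base_url, token):
                self.base_url = base_url.rstrip("/")
                self.token = token

            def post(self, path, payload):
                req = urllib.request.Request(
                    self.base_url + path,
                    data=json.dumps(payload).encode("utf-8"),
                    headers={"Content-Type": "application/json",
                             "X-Auth-Token": self.token},
                )
                with urllib.request.urlopen(req) as resp:
                    return json.loads(resp.read().decode("utf-8"))


        class NCloudXQuantumPlugin:
            """Answers the core calls a Quantum plug-in receives by handing
            them to the platform, which drives the underlying physical or
            virtual devices."""

            def __init__(self, client):
                self.client = client

            def create_network(self, context, network):
                # Translate the Quantum network model into an orchestration request.
                return self.client.post("/networks", {
                    "name": network["name"],
                    "tenant": network["tenant_id"],
                })

            def delete_network(self, context, network_id):
                return self.client.post("/networks/%s/delete" % network_id, {})

            def create_port(self, context, port):
                # One plug-in call fans out to whichever device terminates the
                # port: a legacy switch, a hypervisor vSwitch, and so on.
                return self.client.post("/ports", {
                    "network": port["network_id"],
                    "mac": port.get("mac_address"),
                })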

    Its ability to integrate with SDN controllers from Nicira (VMware) and Big Switch Networks enables organizations to pursue a transitory, hybrid strategy with respect to SDN, minimizing potential disruption and providing a clear migration path toward virtualized networks. Its cataloging and design services enable rapid on-boarding of new tenants and a simpler means of designing what are otherwise complex, virtualized networks.

    While much of Anuta Networks’ focus lies on cloud, its ability to virtualize network services is likely to provide value to organizations seeking to automate their network provisioning and management systems, regardless of their adoption of cloud or SDN-related technologies.

    Anuta Networks was founded by Chandu Guntakala, President & CEO; Srini Beereddy, CTO; and Praveen Vengalam, Vice President of Engineering. It focuses on the delivery of network services virtualization in the context of cloud and SDN-related architectures.

    How CA is Automating the Cloud

    December 13th, 2012

    CA Technologies’ AppLogic Suite enables Automated Service Delivery, #Cloud Style #devops

    I had an opportunity to sit down with CA Technologies and talk cloud with a focus on cloud management last month and discovered a robust portfolio of automation and orchestration solutions.

    CA has been a staple name in the enterprise for more than a decade, so it’s no surprise they understand the need for solutions that are as turnkey as possible. Pre-integrated and pre-packaged with plenty of easy buttons is desirable, as long as there remains room for customization. The fine folks at CA are focused on hybrid service delivery these days, comprising physical and virtual devices across public and private environments unified by "one point of control." This includes a catalog of services (a registry, if you’re coming from a SOA perspective) as well as pre-integrated and pre-tested process flows that enable rapid deployment of automated processes.

    AppLogic is its cloud management platform (CMP) designed as a turnkey solution for service providers. It relies on an environment comprised of commodity components and attempts to minimize the amount of code required to build, maintain and use a cloud environment.

    Its framework is capable of executing scripts to perform mundane deployment and configuration tasks such as mounting a volume or configuring a VLAN. Its model is similar to that of CloudStack and OpenStack but uses its own language, ADL (Application Definition Language). The advantage for CA here is its visual editor tool for constructing flows and tasks, something lacking in other efforts.
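
    To give a flavor of the kind of task such a framework scripts, the sketch below automates the two examples above. It is written in plain Python rather than CA's ADL, and the device names and commands are generic assumptions, not anything specific to AppLogic.

        # Generic sketch of low-level deployment tasks a CMP framework might
        # script; illustrative only, not CA's ADL or AppLogic internals.
        import subprocess

        def mount_volume(device, mountpoint):
            """Attach a block device at the given path, creating it if needed."""
            subprocess.run(["mkdir", "-p", mountpoint], check=True)
            subprocess.run(["mount", device, mountpoint], check=True)

        def configure_vlan(interface, vlan_id):
            """Create an 802.1Q sub-interface (e.g. eth0.100) and bring it up."""
            subprocess.run(["ip", "link", "add", "link", interface,
                            "name", f"{interface}.{vlan_id}",
                            "type", "vlan", "id", str(vlan_id)], check=True)
            subprocess.run(["ip", "link", "set", f"{interface}.{vlan_id}", "up"],
                           check=True)

        if __name__ == "__main__":
            mount_volume("/dev/xvdf", "/mnt/data")   # assumed device and path
            configure_vlan("eth0", 100)              # assumed interface and VLAN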

    It claims to provide an entire "cloud stack" including:

    • Server management
    • Embedded SAN
    • SDN
    • Resource quotas
    • Design studio
    • Metering
    • Security

    Its claim to SDN is much like that of other, pre-existing solutions that leverage an automated and dynamic configuration paradigm. It automatically creates connections between virtual machines and the Internet, and leverages its own DPI to provide security and consistent latency both ingress and egress from the virtual machine. Network virtualization is really CA’s SDN game here, as the fabric created by AppLogic takes advantage of its own packet encapsulation to enable route domain isolation, security, and bandwidth enforcement.
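
    Conceptually, encapsulation-based isolation amounts to wrapping each tenant's traffic in an outer header that carries a route domain identifier, so overlapping inner addressing never collides in the fabric. The snippet below is a bare-bones illustration of that idea; the 4-byte header layout is an assumption, not CA's actual wire format.

        # Bare-bones illustration of encapsulation-based route domain isolation.
        # The 4-byte outer header is an assumed format, not AppLogic's.
        import struct

        HEADER_FMT = "!I"  # network byte order, 32-bit route domain ID

        def encapsulate(domain_id, frame):
            """Prefix a tenant frame with its route domain identifier."""
            return struct.pack(HEADER_FMT, domain_id) + frame

        def decapsulate(packet):
            """Strip the outer header; return (domain_id, original frame)."""
            (domain_id,) = struct.unpack_from(HEADER_FMT, packet)
            return domain_id, packet[struct.calcsize(HEADER_FMT):]

        # Two tenants can reuse identical inner addressing; the outer domain
        # ID keeps their traffic separate inside the fabric.
        pkt = encapsulate(42, b"...inner ethernet frame...")
        print(decapsulate(pkt)[0])  # -> 42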

    Its embedded SAN was interesting in that it’s a completely software-based construct that simulates a block level SAN comprised of DAS.

    While CA is aiming at service providers with AppLogic, it’s easy to believe that a large enough enterprise would find value in a cloud management platform that is more fully fleshed out than competing offerings.

    Homomorphic Encryption Finds a Home in the Cloud

    November 1st, 2012

    Porticor, which earlier this year unveiled its split-key encryption technology for securing cloud data, has taken the next step in its quest to assure users of the security of data in the cloud. In addition to adding VMware private cloud to its portfolio of supported environments (previously it supported only Amazon environments), it announced that it has introduced homomorphic encryption into the equation, which further secures one of the least often addressed (and yet most important) aspects of cryptography – the security of cryptographic keys.

    Where split-key technology assured the security of data by allowing the full (and secret) key to be derived only algorithmically from the two halves of the key, homomorphic encryption ensures that the actual keys are no longer stored anywhere. Joining the keys is accomplished algorithmically and produces an encrypted symmetric key that is specific to a single resource, such as a disk volume or S3 object.
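
    To make the split-key idea concrete, the sketch below splits a master secret into two shares, neither of which reveals anything on its own, and derives a unique symmetric key per resource. It is a conventional XOR-share and HMAC-derivation illustration of the general approach, not Porticor's published algorithm.

        # Conceptual illustration of split-key plus per-resource key derivation.
        # NOT Porticor's algorithm; a generic XOR-share / HMAC sketch.
        import hashlib
        import hmac
        import os

        def split_master_key(master):
            """Split a master secret into two XOR shares."""
            share_a = os.urandom(len(master))
            share_b = bytes(x ^ y for x, y in zip(master, share_a))
            return share_a, share_b   # customer holds one, provider the other

        def join_shares(share_a, share_b):
            """Recombine the shares; only a holder of both recovers the secret."""
            return bytes(x ^ y for x, y in zip(share_a, share_b))

        def per_resource_key(master, resource_id):
            """Derive a unique symmetric key for a single volume or object."""
            return hmac.new(master, resource_id.encode(), hashlib.sha256).digest()

        master = os.urandom(32)
        a, b = split_master_key(master)
        assert join_shares(a, b) == master

        vol_key = per_resource_key(master, "vol-0a1b2c")       # one key per volume
        obj_key = per_resource_key(master, "s3://bucket/obj")  # one key per object
        assert vol_key != obj_key   # compromising one does not expose the other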

    Porticor can secure a fairly impressive list of data objects, including:

    • EBS
    • VMDK
    • MySQL
    • Oracle
    • SQL Server
    • MongoDB
    • Cassandra
    • Linux, Unix (NFS)
    • Windows (CIFS)
    • AWS S3

    The split-key technology is used when data is stored, and homomorphic techniques are used when data is accessed. Keys are always encrypted in the cloud, and control is maintained by the customer – not the key management or cloud service provider.

    The introduction of partially homomorphic encryption techniques adds two very important security features to Porticor's portfolio of cloud encryption services:

    1. The master key is never exposed, making it nigh unto impossible to steal

    2. A compromise involving one object does not afford attackers access to other objects as each is secured using its own unique encrypted symmetric key 

    This second benefit is important, particularly as access to systems is often accomplished via a breach of a single, internal system. Gaining access to or control over one system in a larger network has been a primary means of gaining a foothold "inside" as a means to further access the intended target, often data stores. The 2012 Data Breach Investigations Report noted that "94% of all data compromised involved servers." The 18% increase in this statistic over the previous year's findings makes the compromise of individual systems – by inside agents as well as outside agents – a significant contributor to data breaches and one in need of serious attention.

    While new to the security scene and relatively untested in its ability to withstand the rigorous attention and zealous cracking attempts that other cryptographic algorithms and techniques have endured, Porticor offers analysis and proof of its homomorphic techniques via Dr. Alon Rosen, a cryptography expert from the School of Computer Science at the Herzliya Interdisciplinary Center.

    Regardless, the problems Porticor is attempting to address are real. Key management in the cloud is too often overlooked, and storing full keys anywhere – even on-premise in the data center – can be a breach waiting to happen. By splitting key management responsibility but assigning control to the customer, Porticor provides a higher level of trust than traditional techniques offer in the overarching cryptographic framework required to securely store and manage data in public cloud computing environments.

    Intel DPDK and Cloud

    October 2nd, 2012

    If you follow Intel and the nuggets of information it releases regarding progress on its processors, you might be familiar with its DPDK (Data Plane Development Kit). The idea behind the DPDK is to bypass the software-based network stack and allow direct access to the data plane, enabling a nearly zero-copy path from network to CPU. Without getting too deep in the weeds, the ability to bypass the OS processing that forwards packets through a system is a lot like what network manufacturers have done for years – enabling direct access to hardware, which results in much higher performance.

    This brings to mind one of the arguments for (and against) certain hypervisor implementations. Those that are paravirtualized require a modified version of the operating system, which can be off-putting to some customers. In much the same way, applications desiring to take advantage of Intel’s DPDK will need a modified version of the operating system that includes the necessary drivers and libraries.

    Not an insurmountable requirement, by any stretch of the imagination, but one that should be taken into consideration nonetheless, as the performance gains of having (more) direct access to bare metal are generally considered worth it, particularly for networking-oriented applications.

    How much gain? Errata Security claims a 100 to 1 difference, while Intel has claimed even higher gains:

    You might be skeptical at this point, especially if you’ve benchmarked a commodity Intel x86 system recently. They come nowhere near network wirespeeds. The problem here isn’t the hardware, but the software. The network stack in today’s operating systems is painfully slow, incurring thousands of clock cycles of overhead. This overhead is unnecessary. Custom drivers (like PF_RING or DPDK) incur nearly zero cycles of overhead. Even the simple overhead of 100 cycles for reading the packet from memory into the CPU cache is avoided.

    A standard Linux system that struggles at forwarding 500,000 packets-per-second can forward packets at a rate of 50 million packets-per-second using these custom drivers. Intel has the benchmarks to prove this. This is a 100 to 1 performance difference, an unbelievable number until you start testing it yourself.

    — Errata Security: Software networks: commodity x86 vs. network processors
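
    The quoted figures imply a simple cycles-per-packet budget, worked through below; the single 3 GHz core is an assumption made purely for illustration.

        # Back-of-the-envelope cycle budget implied by the quoted numbers.
        # The single 3 GHz core is an assumption made for illustration.
        CLOCK_HZ = 3_000_000_000

        kernel_pps = 500_000      # standard Linux stack, per the quote
        dpdk_pps = 50_000_000     # custom bypass driver, per the quote

        print(CLOCK_HZ / kernel_pps)  # ~6,000 cycles per packet through the OS stack
        print(CLOCK_HZ / dpdk_pps)    # ~60 cycles per packet with a bypass driver
        print(dpdk_pps / kernel_pps)  # the 100-to-1 ratio Errata Security cites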

    The (POTENTIAL) IMPACT on CLOUD

    While the Intel DPDK is easily interpreted as a boon for SDN and network virtualization proponents as well as telecommunications, the question of how the Intel DPDK will interact with virtualization technology must be asked because of its potential impact on cloud. The premise of virtualization implies a layer of abstraction between the hardware and applications deployed within the environment, the purpose being to allow sharing of hardware. This runs counter to the premise of DPDK, which is to bypass such intervening software and allow direct access to the hardware. Cloud, which is almost universally built atop some form of virtualization technology, seems inherently incompatible, then, with Intel’s DPDK. It turns out it isn’t, given some minor modifications.

    A recent post by Scott Lowe explains.

    To help with intra-VM communication, Intel DPDK offers several benefits. First, DPDK provides optimized pass-through support. Second, DPDK offers SR-IOV support and allows L2 switching in hardware on Intel’s network interface cards (estimated to be 5-6x more performant than the soft switch). Third, DPDK provides optimized vNIC drivers for Xen, KVM, and VMware. [emphasis added]

    What Intel has been working on is repliace [sic] the VirtIO drivers with DPDK-enabled VirtIO drivers, and use DPDK to replace the Linux bridging utilities with a DPDK-enabled forwarding application. The DPDK-enabled forwarding application is a “zero copy” operation, thus reducing latency and processing load when forwarding packets. Intel is also creating shims between DPDK and Open vSwitch, so that an OVS controller can update Open vSwitch, which can then update the DPDK forwarding app to modify or manipulate forwarding tables.

    The former resolves the issue of sharing hardware (and likely produces smaller performance gains than having sole access, but should still show improvement), and the latter addresses what would become another bottleneck if the former is implemented: the virtual switch.

    Ultimately, however, what such modifications do is introduce dependence on specific hardware resources in order to accommodate a very real need – performance. Not all hardware is capable of supporting DPDK – it is Intel specific, after all – which means careful attention to hardware capabilities would be required in order to manage myriad workload types in a cloud computing environment. In a cloud environment built on the premise of commoditized and broadly applicable hardware and software, this has the effect of reversing commoditization.

    Cloud’s benefits of efficiency and lower costs are highly dependent on the ability to leverage commoditized hardware, available for repurposing across a wide variety of workload types. The introduction of hardware and images specifically designed to enable DPDK-capable services would complicate the provisioning process and threaten to decrease efficiency while increasing costs.

    Cloud could still benefit from such progress, however, if providers take advantage of DPDK-enabled network appliances to build their underlying infrastructure – their supporting foundations. Without the pressures of seemingly random provisioning requirements and with a more controlled environment, providers could certainly benefit from improved performance that might allow consolidation or higher throughput. This could become a selling point, a differentiator, for a cloud built on the premise of "bare metal access" and its performance-enhancing characteristics.

    Given the right impetus, they might also find a premium cloud service market hungry for the performance gains and, more importantly, willing to pay for them.

    Observations from Cloud Connect Chicago

    September 17th, 2012

    Last week saw the inauguration of Cloud Connect Chicago, and it was great to see both established and newer speakers taking the stage. The event felt a lot like the inaugural event in Santa Clara: more intimate, more buy-side than sell-side, and, of course, focused on cloud.

    Some general observations from the event:

    SDN is BUBBLING into CLOUD

    It’s not necessarily an overt message, but it’s there. SDN – or at least its core "decouple and abstract" premise – is definitely rising through the layers of cloud. Speaking to ProfitBricks, for example, showed the way in which the assumptions we draw upon to design L2 architectures may be the most disrupted by SDN, while the L3 (IP) network architecture might remain largely untouched. While many vendors are approaching SDN with new L3 architectures and protocols, ProfitBricks has run with the idea that the same "decouple and abstract" premise of SDN that provides value up the stack can also provide significant advantages down the stack.

    Given that many of the challenges SDN is designed to address are more pronounced in cloud computing environments than in traditional data centers, this is no surprise. SDN is quickly moving up the stack in terms of hype, so expect marketing in the cloud computing demesne, at the very least, to start taking advantage of its somewhat nebulous definition as well.

    CLOUD CONFUSION CONTINUES

    There is still a lot of confusion attached to the word "cloud" on the buy-side, especially when prefixed by modifiers like "private," "public," and "hybrid." Customers are being inundated with self-serving definitions that, while based loosely on NIST definitions, fall outside what most experts would consider typical. Even associated terms like "elasticity", long considered a staple benefit of cloud, are being stretched thin to include processes that clearly fall outside the implied definition of "just in time" flexible capacity.

    Faster provisioning and reduced operational complexity resonated well, however, no matter how far afield the definition of cloud might have drifted. The notion of scheduled elasticity fits with these interests, as enterprises desire the flexibility of cloud as a way to address periodic (and anticipated) increases in capacity needs without maintaining and incurring the costs of an over-provisioned infrastructure.
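
    In practice, scheduled elasticity is little more than scaling actions keyed to a calendar rather than to live load metrics. The sketch below is a hypothetical illustration; the schedule, the default pool size, and the scale_to call into a provider API are all assumptions.

        # Hypothetical sketch of scheduled elasticity: capacity follows an
        # anticipated calendar instead of reactive load metrics.
        import datetime

        SCHEDULE = [
            # (weekday, hour, desired instance count), applied in order
            ("Mon", 8, 12),   # anticipated Monday-morning peak
            ("Mon", 20, 4),   # scale back down in the evening
            ("Sat", 0, 2),    # weekend minimum
        ]

        def desired_capacity(now, schedule=SCHEDULE, default=4):
            """Return the instance count the calendar calls for right now."""
            count = default
            for weekday, hour, instances in schedule:
                if weekday == now.strftime("%a") and now.hour >= hour:
                    count = instances
            return count

        def scale_to(count):
            # Placeholder for a call into the provider's provisioning API.
            print(f"scaling pool to {count} instances")

        scale_to(desired_capacity(datetime.datetime.now()))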

    IDENTITY and ACCESS CONTROL

    There continues to be awareness of the issues surrounding identity and access control, particularly as they apply to SaaS, and the need to integrate such services with existing data center processes. While adoption of IaaS remains less broad, SaaS usage is continuing to expand, with a significant majority of customers taking advantage of SaaS in some aspect of business and operations. This is leading to an increased awareness of the potential risks and challenges of managing access to these systems, and a growing desire among customers to reassert governance.

    Anticipate the arrival of turn-key solutions in the form of cloud brokers that streamline managing identity and access control for SaaS in the near future as demand continues to escalate in the face of continued SaaS adoption.

     

    If you missed the event, you can enjoy the keynote presentations online.
