A new platform allows organizations to deploy application workloads across a distributed edge as though it were a single Kubernetes cluster. The Kubernetes Edge Interface (KEI) from Section lets development teams already building Kubernetes applications continue using familiar tools and workflows, such as kubectl or Helm, while deploying their applications to a multi-cloud, multi-region and multi-provider edge network.
With the KEI, Section said teams can interact with deployed applications as though running on a single cluster, and its patented adaptive edge engine (AEE) employs policy-driven controls to automatically tune, shape and optimize application workloads in the background across Section’s composable edge cloud.
The Kubernetes API is the most popular method for developers to orchestrate and control containers. Section’s KEI extends the Kubernetes API to connect and implement important Kubernetes resources within the Section edge platform, letting developers move existing applications to the edge.
Because teams use familiar tooling and workflows for both deployment and management, distributing containers to multiple locations (multi-cluster/multi-provider/multi-region) becomes simpler. In addition, edge presence requirements specified via KEI are translated into policy-driven controls for the composable edge cloud via the AEE, which takes a simple application workload policy such as "run containers where there are at least 20 HTTP requests per second" and continuously finds and executes the optimal edge orchestration accordingly.
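Section has not published the exact KEI resource schema in this article, but a request-rate scaling policy like the one quoted above can be expressed in stock Kubernetes with a HorizontalPodAutoscaler driven by a custom metric. The sketch below is an analogy, not Section's actual API; the metric name `http_requests_per_second` assumes a custom-metrics adapter (such as a Prometheus adapter) is exposing it.

```yaml
# Hypothetical sketch: scale an edge workload on HTTP request rate,
# using the standard autoscaling/v2 API as an analogy for a KEI policy.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: edge-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: edge-app          # assumed Deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # assumes a custom-metrics adapter
      target:
        type: AverageValue
        averageValue: "20"               # scale out above ~20 req/s per pod
```

The point of KEI, per the article, is that a team can keep writing declarative policies in this familiar style while the AEE decides where on the distributed edge the resulting containers actually run.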
The CEO of Section, Stewart McGrath, said, “Edge deployment is simply better than centralized data centers or single clouds in most every important metric – performance, scale, efficiency, resilience, usability. Yet organizations historically put off edge adoption because it’s been complicated. With Section’s KEI, teams don’t have to change tools or workflows; the distributed edge effectively becomes a cluster of Kubernetes clusters and our AEE automation and composable edge cloud handles the rest.”
KEI simplifies edge deployment and management and provides control so that developers can:
- Configure service discovery, routing users to the best container instance.
- Define complex applications, such as composite applications that consist of multiple containers.
- Define system resource allocations.
- Define scaling factors, such as the number of containers per location, and what signals should be used to scale in and out.
- Enforce compliance requirements such as geographic boundaries or other network properties.
- Maintain application code, configuration and deployment manifests in an organization’s own code management systems and image registries.
- Control how the adaptive edge engine schedules containers, performs health management, and routes traffic.
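To make the compliance bullet concrete: in stock Kubernetes, a geographic boundary is typically enforced with node affinity against the well-known `topology.kubernetes.io/region` label. The manifest below is a minimal sketch of that standard technique, assuming hypothetical region values and image name; Section's KEI may express the same requirement differently.

```yaml
# Hypothetical sketch: pin a workload to EU regions via standard
# Kubernetes node affinity (region values and image are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values: ["eu-west-1", "eu-central-1"]  # assumed regions
      containers:
      - name: app
        image: registry.example.com/edge-app:1.0       # assumed registry/image
```

Because the manifest lives in the organization's own repositories and registries (per the list above), the same file can be applied unchanged whether the target is a single cluster or the distributed edge.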
The Section edge platform is the company's edge-as-a-service offering, which allows organizations to deploy, scale and protect containers at the edge, so they can spend more time perfecting their applications and less time managing networks. In addition to KEI and AEE, the composable edge cloud consists of a federation of multiple compute providers (including AWS, Azure, GCP, Digital Ocean, Lumen, Equinix, RackCorp and even custom cloud infrastructure) to deliver reliability, scalability and edge reach.