Doug Mitchell, VP of Delivery at RevUnit, shares his knowledge and perspective on the combination of these two powerful Google Cloud Platform technologies.
Anthos is a hybrid and multi-cloud application management platform. It accomplishes this through open standards, allowing applications to run unmodified on on-premises hardware or across cloud providers.
Kubernetes is an open source container orchestration platform. Containers decouple an application from where it runs. A container is launched from an image, which includes everything needed to run the application, such as code, libraries, and default configuration files. By bundling and managing containers, Kubernetes enables declarative or automated execution of deployments, load balancing, rollbacks, self-healing, and other features.
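As a concrete illustration of that declarative model, here is a minimal Kubernetes Deployment manifest (the application name and image are hypothetical). You declare the desired state, and Kubernetes continuously reconciles toward it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app               # hypothetical application name
spec:
  replicas: 3                   # desired state: three running pods
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          # the image bundles code, libraries, and default configuration
          image: gcr.io/example-project/hello-app:1.0
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` is declarative: if a pod crashes or a node fails, Kubernetes self-heals by recreating pods until three replicas are running again.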
Through Anthos components and Kubernetes container orchestration, organizations can optimize their applications for performance, resiliency, cost, security, and more with consistent and extensible technologies. This equips the organization to focus its platform admins, site reliability engineers, and DevOps engineers on a single technology stack and apply their knowledge across environments.
Additionally, by using open source technology, customers have the flexibility to choose their cloud provider(s) and avoid lock-in to a single vendor.
You can run your containerized microservice applications on a simple technology stack in virtually any environment, regardless of location, cluster, or cloud provider. Manage infrastructure, containers, services, and policies, and achieve single-point observability.
Be sure to check which Anthos features are available for your cloud environment.
Anthos enables a host of use cases:
Let's highlight the last use case.
The edge of your organization’s network is, well, at the edge, meaning operations are distanced from the high-capacity, optimized storage and computing your business needs. Nowhere is this more acute than with computer vision. The use cases for vision-enabled intelligence are endless, and they are often most valuable near the action itself. Sending operational video and image data to the cloud has tradeoffs: compliance restrictions, bandwidth constraints, data upload costs, network latency limitations, and even connectivity variability. All in all, computer vision is highly valuable to organizations, but the power needed to run it can be costly.
Enter edge computing. These high-performance devices are deployed at your network edge to be as close as possible to your vision-enabled sensors. The machine learning computer vision models receive and process the video and image data near the source. Data remains local to the compliance region. Inference data from the models is uploaded to operational storage for downstream analysis. This data push can be batched to take advantage of connectivity windows or operational downtime, and you can control exactly which data needs to go to the cloud and which does not. By having the computer vision compute resources near the source, alerts can be triggered in a meaningful amount of time because of the reduced network latency within the facility.
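One way to sketch the batched data push is a Kubernetes CronJob on the edge cluster, scheduled for a connectivity window. The uploader image, schedule, and paths here are hypothetical assumptions, not a prescribed implementation:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: inference-upload        # hypothetical batch uploader
spec:
  schedule: "0 2 * * *"         # 02:00 local time, during operational downtime
  concurrencyPolicy: Forbid     # never overlap uploads on a constrained link
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure   # retry if connectivity drops mid-upload
          containers:
            - name: uploader
              image: gcr.io/example-project/inference-uploader:1.0  # hypothetical image
              # push only inference results, never raw video, to the cloud
              args: ["--source=/data/inference", "--dest=gs://example-bucket/inference"]
```

Because the job only ships inference results, raw video never leaves the facility, which keeps bandwidth, cost, and compliance exposure under control.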
But how do you manage the deployment, security, scaling, and observability of your edge applications centrally? How can you train your computer vision model and deploy the improvements broadly? Enter Anthos + Kubernetes. These powerful platforms ensure you have maximum control and observability while optimizing localized resources. Imagine: the ability to train and deploy your computer vision model across locations, time zones, facility types, compliance regions, and even employee workstyles. We’ve seen the benefits of combining these platforms, and think you will, too.
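To sketch how a model improvement could roll out broadly, assume the model is served from a versioned container image (the `cv-model` deployment below is hypothetical). Bumping the image tag in a single manifest lets each cluster roll the change out gradually:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cv-model                # hypothetical model-serving deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # keep inference partially available during rollout
  selector:
    matchLabels:
      app: cv-model
  template:
    metadata:
      labels:
        app: cv-model
    spec:
      containers:
        - name: cv-model
          # retrained model version; bumping this tag triggers the rollout
          image: gcr.io/example-project/cv-model:2.0
```

With Anthos Config Management, a manifest like this can be synced from a Git repository to every registered cluster, so the same declared state reaches all locations, and `kubectl rollout undo deployment/cv-model` reverts a bad model at any one of them.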