The development of cloud native applications has driven shift-left practices and a higher iteration frequency, giving rise to the need for GitOps. This article introduces how to use the Argo project, including Argo CD and Argo Rollouts, in conjunction with Istio to achieve GitOps and canary deployment. A demo shows how to implement GitOps using multi-cluster Istio deployed on Tetrate Service Express (also applicable to Tetrate Service Bridge).
The deployment architecture diagram of the demo in this article is shown in Figure 1. If you are already familiar with the deployment strategies and Argo projects introduced in this article, you can skip directly to the demo section.
Deployment Strategies
First, I will briefly introduce the two deployment strategies supported by Argo Rollouts, which can achieve zero-downtime deployment.
The steps of blue-green deployment and canary deployment are shown in Figure 2.
- Blue-green deployment deploys the new application version in a separate environment without disrupting production. The production environment is “blue,” and the new version’s environment is “green.” Once green is stable, traffic shifts gradually from blue to green; if issues arise, traffic can be rolled back to blue to minimize impact. Its advantages are high availability and zero-downtime deployment.
- Canary deployment gradually introduces a new version or feature to production. The new version is first deployed to a small group of users called “canary users.” The development team monitors feedback and performance indicators from these users to evaluate the stability and reliability of the new version. If no problems arise, more users are gradually shifted over until all users are on the new version; if a problem is found, it can be quickly rolled back or fixed before it affects the entire user base. Canary deployment identifies problems quickly while confining their impact to a small area.
The main difference between blue-green and canary deployment is the deployment method and the extent of the changes. Blue-green deployment is ideal for major upgrades, deploying the entire application in a new environment before switching. Canary deployment gradually introduces new versions or features, making it ideal for small-scale changes such as adding or modifying a single feature. In terms of application scenarios, blue-green deployment is best suited for systems that require high availability and zero-downtime deployment, while canary deployment is appropriate for systems that require rapid verification of new features or versions.
Kubernetes Deployment Strategy
In Kubernetes, the Deployment resource is a key tool for managing application deployment and updates. It allows for a declarative definition of an application’s expected state and implements release strategies through controller functionality. The architecture of Deployment is illustrated in Figure 3, with the colored squares representing pods of different versions.
The release strategy can be configured in the spec field of Deployment. Here are some common release policy options:
- Management of ReplicaSets: Deployment uses ReplicaSets to create and manage the replicas of an application; the desired number of replicas is specified in the `spec.replicas` field. During a release, the Kubernetes controller gradually scales up the new version’s ReplicaSet and scales down the old version’s ReplicaSet to achieve a smooth switch.
- Rolling update policy: Deployment supports multiple update strategies, selected via the `spec.strategy.type` field. Common policies include:
  - RollingUpdate: The default policy, which updates replicas gradually. The number of replicas that may be unavailable at once and the number of extra replicas that may be created are controlled by the `spec.strategy.rollingUpdate.maxUnavailable` and `spec.strategy.rollingUpdate.maxSurge` fields.
  - Recreate: This policy deletes all replicas of the old version before creating replicas of the new version, so the application is temporarily unavailable during the update.
- Version control: Deployment labels each version’s ReplicaSet via the `spec.template.metadata.labels` field so the controller can track and manage versions accurately. Multiple ReplicaSet versions can coexist, and the number of replicas in each can be precisely controlled.
These configuration options enable Deployment to achieve various release strategies. Updating the spec field of the Deployment object will trigger the release of a new version. The Kubernetes controller automatically handles replica creation, update and deletion according to the specified policy to achieve smooth application updates and deployment strategies.
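The fields described above can be seen together in a single manifest. Below is an illustrative sketch (the app name and image are placeholders, not from the demo) showing the rolling-update settings:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # placeholder name
spec:
  replicas: 3             # desired replica count (spec.replicas)
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate   # or Recreate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 replica down during the update
      maxSurge: 1         # at most 1 extra replica created during the update
  template:
    metadata:
      labels:
        app: my-app       # spec.template.metadata.labels used for version tracking
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:v2   # placeholder image; changing it triggers a rollout
```

Updating `spec.template` (for example, the image tag) creates a new ReplicaSet, and the controller shifts replicas between the old and new ReplicaSets within the `maxUnavailable`/`maxSurge` limits.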
Implementing GitOps with Argo CD
You can use Deployment to manage release strategies manually, but to achieve automation we also need GitOps tools such as Argo CD.
Argo CD is a GitOps-based continuous delivery tool for automating and managing the deployment of Kubernetes applications. It improves the efficiency and reliability of application deployment by providing declarative configuration stored in a Git repository, continuous deployment with customizable synchronization policies, state comparison and automatic repair, and multi-environment management.
Compared to Deployment resource objects, Argo CD provides more advanced features and workflows that complement the capabilities of native Kubernetes resource objects:
GitOps-based configuration management: Argo CD stores application configuration in a Git repository, enabling GitOps-based configuration management. This approach ensures that configuration changes are traceable, auditable, and can be integrated with existing CI/CD pipelines.
Automated deployment and continuous delivery: Argo CD can automatically detect configuration changes in the Git repository and deploy applications to Kubernetes environments, enabling automated deployment and continuous delivery.

State management and automatic recovery: Argo CD continuously monitors applications and compares them to their expected states. If inconsistencies are detected, it automatically reconciles them so that the application state remains consistent with the expected state.
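Argo CD can also be driven fully declaratively: instead of CLI commands, an Application resource can be checked into Git. As a sketch, the Bookinfo app created later in this demo could be expressed as follows (the `syncPolicy` options shown are illustrative additions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bookinfo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/tetrateio/tse-gitops-demo.git
    path: application
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: bookinfo
  syncPolicy:
    automated:
      prune: true      # illustrative: delete resources removed from Git
      selfHeal: true   # illustrative: revert manual drift in the cluster
```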
Using Istio to Achieve Fine-Grained Traffic Routing
Although Argo CD implements GitOps, it operates on Kubernetes Deployments and controls traffic only through replica counts. To achieve fine-grained traffic routing, a service mesh like Istio is needed.
Istio achieves finer-grained traffic routing and application release through the following methods:
VirtualService: Istio uses VirtualService to define traffic routing rules. By configuring VirtualService, traffic can be routed and distributed based on request attributes such as headers, paths, weights, etc., directing requests to different service instances or versions.
DestinationRule: Istio’s DestinationRule allows for implementing advanced application deployment policies, such as canary deployment or blue-green deployment, by specifying different traffic weights between service versions.
Traffic control and policies: Istio provides rich traffic control and policy capabilities such as traffic limiting, fault injection, timeout settings, retry mechanisms, etc. These features help applications achieve higher-level load balancing, fault tolerance, and reliability requirements.
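As a sketch of how these two resources work together, the following hypothetical configuration for a `reviews` service (the subset names and labels are illustrative) routes 90% of traffic to v1 and 10% to v2:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90        # 90% of traffic stays on the stable version
    - destination:
        host: reviews
        subset: v2
      weight: 10        # 10% of traffic goes to the canary
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1       # selects pods labeled version=v1
  - name: v2
    labels:
      version: v2       # selects pods labeled version=v2
```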
Compared to Argo CD and Kubernetes Deployment objects, Istio provides the following advantages in application deployment:
Fine-grained traffic routing control: Istio provides richer traffic routing capabilities, enabling flexible routing and distribution based on a variety of request attributes, allowing for finer-grained traffic control and management.
Advanced release policy support: Istio’s DestinationRule can specify traffic weights between different versions of service instances, supporting advanced application release policies such as canary release and blue-green deployment. This makes version management and release of applications more flexible and controllable.
Powerful traffic control and policy capabilities: Istio provides rich traffic control and policy capabilities such as traffic limiting, fault injection, timeout settings, retry mechanisms, etc. These features help applications achieve higher-level load balancing, fault tolerance, and reliability requirements.
Combining Istio with Argo Rollouts fully leverages Istio’s fine-grained traffic routing. Let’s walk through a demo: using the Kubernetes and Istio environments provided by TSE, we will implement GitOps with Argo CD and perform a canary release with Argo Rollouts.
Demo
The software versions used in our demo are:
- Kubernetes v1.24.14
- Istio v1.15.7
- ArgoCD v2.7.4
- Argo Rollouts v1.5.1
- TSE Preview2
We will use Istio’s VirtualService and DestinationRule to implement subset-based traffic routing and use Argo Rollouts for progressive delivery.
Deploy Argo CD and Argo Rollouts
I created a Kubernetes cluster and added it to TSE in advance; TSE automatically installs the Istio control plane on the cluster. We also need to install Argo CD and Argo Rollouts:
# Install Argo CD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Install Argo CD CLI on macOS
brew install argocd
# Change the service type of argocd-server to LoadBalancer
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
# Get the ArgoCD UI address
ARGOCD_ADDR=$(kubectl get svc argocd-server -n argocd -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Login using Argo CD CLI, see https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli to get password
argocd login $ARGOCD_ADDR --skip-test-tls --grpc-web --insecure
# Install Argo Rollouts
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml
# Install rollouts plugin on macOS
curl -LO https://github.com/argoproj/argo-rollouts/releases/download/v1.5.1/kubectl-argo-rollouts-darwin-amd64
chmod +x ./kubectl-argo-rollouts-darwin-amd64
sudo mv ./kubectl-argo-rollouts-darwin-amd64 /usr/local/bin/kubectl-argo-rollouts
Note: This feature is not applicable to TSE Bridge Mode, so we will use TSE Direct Mode to achieve progressive release.
What Is Bridge Mode and Direct Mode?
Direct Mode and Bridged Mode are the two modes in which the TSE control plane distributes configuration; they apply to traffic, security, and gateway group configurations. Bridged mode is a minimalist mode that lets users quickly configure the most commonly used service mesh features with Tetrate-specific APIs, while Direct mode offers greater flexibility to advanced users, allowing them to configure the Istio APIs directly.
Next, deploy the Rollouts Dashboard:
git clone https://github.com/argoproj/argo-rollouts.git
cd argo-rollouts
kustomize build manifests/dashboard-install | kubectl apply -n argo-rollouts -f -
kubectl port-forward svc/argo-rollouts-dashboard -n argo-rollouts 3100:3100
You can now access the Rollouts Dashboard at http://localhost:3100/rollouts/.
Deploy Bookinfo Application
We have prepared the configuration files for the Bookinfo application (saved in the tse-gitops-demo repository); you can also fork the repository to your own account and substitute your own repository. Run the following command to deploy the Bookinfo application:
argocd app create bookinfo-app --repo https://github.com/tetrateio/tse-gitops-demo.git --path application --dest-server https://kubernetes.default.svc --dest-namespace bookinfo --sync-policy automated
Note: Keep `replicas` set to 1 in the reviews Deployment, create the Argo Rollout to scale up the reviews service, and set it to 0 after the Rollout deployment completes. For details, see Migrating from Deployment to Rollouts in the Argo Rollouts documentation.
Now you can open the Argo CD UI in your browser, as shown in Figure 4.
If you find that the application status is not in sync, you can run the following command or click the SYNC button in the UI.
argocd app sync bookinfo-app
Implementing Fine-Grained Traffic Management with Istio
First, let’s use Argo CD to create Istio-related resource objects:
argocd app create bookinfo-tse-conf --repo https://github.com/tetrateio/tse-gitops-demo.git --path argo/tse --dest-server https://kubernetes.default.svc --dest-namespace bookinfo --sync-policy automated
# Check the creation status
argocd app get bookinfo-tse-conf
Converting Deployment to Rollout
Suppose we want to release a new version of the `reviews` service. We will use canary deployment to achieve a zero-downtime update, with the following steps:
- Create a Rollout that references the reviews Deployment previously deployed in the Bookinfo application;
- Reduce the `replicas` of the `reviews` Deployment to 0;
- Send traffic to the reviews service and let the canary deployment progress automatically.
You can view the Rollout and AnalysisTemplate configurations used in this demo on GitHub. Run the following command to deploy reviews-rollout:
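As an abridged, illustrative sketch (the authoritative manifests live in the demo repository), such a Rollout can adopt the existing Deployment’s pod template via `workloadRef` and drive an Istio subset-level canary; the VirtualService route name and step durations below are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: reviews-rollout
  namespace: bookinfo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reviews
  workloadRef:                 # reuse the pod template of the existing Deployment
    apiVersion: apps/v1
    kind: Deployment
    name: reviews
  strategy:
    canary:
      trafficRouting:
        istio:
          virtualService:
            name: reviews      # assumed VirtualService name
            routes:
            - primary          # assumed route name within the VirtualService
          destinationRule:
            name: reviews      # assumed DestinationRule name
            canarySubsetName: canary
            stableSubsetName: stable
      steps:
      - setWeight: 20          # send 20% of traffic to the canary
      - pause: {duration: 2m}
      - setWeight: 50
      - pause: {duration: 2m}
```

During each step, Argo Rollouts rewrites the VirtualService weights and the DestinationRule subset labels, so no manual Istio changes are needed.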
argocd app create reviews-rollout --repo https://github.com/tetrateio/tse-gitops-demo.git --path argo/rollout --dest-server https://kubernetes.default.svc --dest-namespace bookinfo --sync-policy automated
Note: We can deploy with the `argocd` command or with `kubectl apply`. The `argocd` command is recommended because it lets you view the deployment status in the Argo CD UI and the Argo Rollouts Dashboard at the same time and manage the deployment with the `argocd` CLI.
Set the number of replicas of reviews deployment to 0:
kubectl scale deployment reviews --replicas=0 -n bookinfo
View the status of the reviews Rollout in the Argo Rollouts Dashboard, and use the following command to send traffic to the reviews service for a period of time:
export GATEWAY_HOSTNAME=$(kubectl -n bookinfo get service tsb-gateway-bookinfo -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
while true; do curl -H "Host: bookinfo.tetrate.com" http://$GATEWAY_HOSTNAME/api/v1/products/1/reviews; sleep 3; done
You will see responses from pods with different `rollouts-pod-template-hash` labels in the output, which proves that the canary deployment is working. After about 10 minutes, the Argo Rollouts Dashboard will look as shown in Figure 5.
Figure 5 shows the canary deployment progressing smoothly, having reached the third step, because the `apdex` (Application Performance Index) metric of the `reviews` service is normal. You can verify this by submitting GraphQL queries to SkyWalking with Postman, as shown in Figure 6.
The GraphQL query statement we built is as follows:
query ReadMetricsValues {
  readMetricsValues(
    condition: {
      name: "service_apdex",
      entity: { scope: Service, serviceName: "canary|reviews|bookinfo|cluster-1|-", normal: true }
    },
    duration: { start: "2023-07-13 0812", end: "2023-07-13 0813", step: MINUTE }
  ) {
    label
    values {
      values {
        id
        value
      }
    }
  }
}
This statement queries the `apdex` metric of the `canary|reviews|bookinfo|cluster-1|-` service for the two minutes from 08:12 to 08:13 UTC on 2023-07-13 and obtains the following results:
{
  "data": {
    "readMetricsValues": {
      "label": null,
      "values": {
        "values": [
          {
            "id": "service_apdex_202307130812_Y2FuYXJ5fHJldmlld3N8Ym9va2luZm98Y2x1c3Rlci0xfC0=.1",
            "value": 10000
          },
          {
            "id": "service_apdex_202307130813_Y2FuYXJ5fHJldmlld3N8Ym9va2luZm98Y2x1c3Rlci0xfC0=.1",
            "value": 10000
          }
        ]
      }
    }
  }
}
The values of the `apdex` metric are greater than 9900 (the threshold configured in the `successCondition` of the AnalysisTemplate), so the Rollout progresses smoothly. You can also click Promote in the Argo Rollouts Dashboard to promote it manually, or run the following command:
kubectl argo rollouts promote reviews-rollout -n bookinfo
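For reference, an AnalysisTemplate built on the Argo Rollouts SkyWalking metrics provider might look like the sketch below; the OAP address, intervals, and exact expressions are assumptions, and the demo repository contains the actual configuration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: apdex
  namespace: bookinfo
spec:
  metrics:
  - name: apdex
    interval: 5m
    # succeed while every returned apdex value stays above the 9900 threshold
    successCondition: "all(result.service_apdex.values.values, {asFloat(.value) > 9900})"
    failureLimit: 3
    provider:
      skywalking:
        address: http://skywalking-oap.istio-system:12800   # assumed OAP address
        query: |
          query queryData($duration: Duration!) {
            service_apdex: readMetricsValues(
              condition: { name: "service_apdex",
                entity: { scope: Service, serviceName: "canary|reviews|bookinfo|cluster-1|-", normal: true } },
              duration: $duration) {
              label
              values { values { value } }
            }
          }
```

Referencing this template from the Rollout’s canary analysis is what lets the rollout progress (or abort) automatically based on the apdex values SkyWalking reports.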
Clean Up
Delete the deployed Argo CD Apps and Rollouts:
argocd app delete -y reviews-rollout
argocd app delete -y bookinfo-tse-conf
argocd app delete -y bookinfo-app
Understanding Argo Rollouts
When integrating with Istio, Argo Rollouts supports traffic splitting based on VirtualService and Subset, as shown in Figure 7.
The table below provides a detailed comparison of these two traffic segmentation methods.
| Type | Applicable Scenario | Resource Objects | Principle |
| --- | --- | --- | --- |
| Host-level traffic split | Accessing different versions of a service based on hostname | 2 Services, 1 VirtualService, 1 Rollout | Rollout injects the `rollouts-pod-template-hash` label into the ReplicaSets and selects pods with this label by updating the selectors of the Services |
| Subset-level traffic split | Accessing different versions of a service based on labels | 1 Service, 1 VirtualService, 1 DestinationRule, 1 Rollout | Rollout injects the `rollouts-pod-template-hash` label into the ReplicaSets and selects pods with this label by updating the subset labels in the DestinationRule |
During a subset-level canary update, Argo Rollouts will:

- modify the VirtualService `spec.http[].route[].weight` to match the current desired canary weight;
- modify the DestinationRule `spec.subsets[].labels` to contain the `rollouts-pod-template-hash` labels of the canary and stable ReplicaSets.
Visit Argo Rollouts documentation for details on using Istio for traffic management.
Summary
This article introduced how to use the Argo project together with Istio to achieve GitOps and canary deployment. First, we used Argo CD to implement GitOps, then Argo Rollouts and SkyWalking to automate canary deployment. The demo shows that the Istio deployed by TSE is fully compatible with the open-source version. There are many more TSE features worth exploring; visit the Tetrate website for more information.