# Canary Deployments
The API Gateway supports canary deployments by splitting traffic between a stable version and a canary version of a service. This is implemented through Kong upstreams with weighted targets, allowing gradual rollout of new service versions with controllable traffic percentages.
## Canary Configuration

### CanaryConfig Properties
| Property | Type | Default | Description |
|---|---|---|---|
| `stableHost` | String | required | Hostname of the stable service version |
| `stablePort` | int | required | Port of the stable service |
| `canaryHost` | String | required | Hostname of the canary service version |
| `canaryPort` | int | required | Port of the canary service |
| `canaryWeight` | int | 10 | Percentage of traffic routed to the canary (0-100) |
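The properties above can be modeled as a small validated config object. A minimal Python sketch (field names mirror the table; the validation logic is an assumption, not the real `CanaryConfig` class):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CanaryConfig:
    """Sketch of the CanaryConfig properties table above."""
    stable_host: str
    stable_port: int
    canary_host: str
    canary_port: int
    canary_weight: int = 10  # default per the table

    def __post_init__(self):
        # canaryWeight is a traffic percentage, so it must lie in 0-100.
        if not (0 <= self.canary_weight <= 100):
            raise ValueError("canaryWeight must be between 0 and 100")

    @property
    def stable_weight(self) -> int:
        # The stable version receives the remainder of the split.
        return 100 - self.canary_weight
```

With the default weight of 10, `stable_weight` evaluates to 90, matching the 90/10 split described below.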
## Configure Canary Traffic Split

Endpoint: `POST /api/v1/gateway/services/:serviceName/canary`
```bash
curl -X POST http://localhost:8080/api/v1/gateway/services/ai-service/canary \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${TOKEN}" \
  -d '{
    "stableHost": "ai-service-v1.tenant-acme",
    "stablePort": 8000,
    "canaryHost": "ai-service-v2.tenant-acme",
    "canaryPort": 8000,
    "canaryWeight": 10
  }'
```

This routes 90% of traffic to the stable version and 10% to the canary version.
## How It Works

When `configureCanaryTraffic` is called, the `GatewayManagementService` performs these steps:

- Creates or updates a Kong upstream named `<serviceName>-upstream` with the `round-robin` algorithm
- Adds the stable target with weight `100 - canaryWeight` (e.g., weight 90)
- Adds the canary target with weight `canaryWeight` (e.g., weight 10)
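The steps above can be sketched as the Kong Admin API requests they imply. A hedged Python sketch — the endpoint paths (`PUT /upstreams/{name}`, `POST /upstreams/{name}/targets`) are standard Kong Admin API routes, but the exact request shapes the `GatewayManagementService` sends are assumed:

```python
def canary_admin_calls(service_name, stable_host, stable_port,
                       canary_host, canary_port, canary_weight):
    """Return the Kong Admin API requests implied by the steps above,
    as (method, path, body) tuples. Illustrative only."""
    upstream = f"{service_name}-upstream"
    return [
        # PUT is create-or-update in the Kong Admin API.
        ("PUT", f"/upstreams/{upstream}", {"algorithm": "round-robin"}),
        # Stable target receives the remainder of the split.
        ("POST", f"/upstreams/{upstream}/targets",
         {"target": f"{stable_host}:{stable_port}", "weight": 100 - canary_weight}),
        # Canary target receives canaryWeight.
        ("POST", f"/upstreams/{upstream}/targets",
         {"target": f"{canary_host}:{canary_port}", "weight": canary_weight}),
    ]
```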
Kong distributes traffic proportionally based on the target weights within the upstream.
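The proportional distribution can be illustrated with a short simulation. This is a simplified model that approximates weighted balancing — it does not reproduce Kong's actual balancer implementation:

```python
import random

def simulate_split(stable_weight, canary_weight, requests=10_000, seed=42):
    """Route `requests` simulated requests proportionally to two weights
    and return the fraction that reached the canary."""
    rng = random.Random(seed)
    total = stable_weight + canary_weight
    canary_hits = sum(rng.randrange(total) < canary_weight
                      for _ in range(requests))
    return canary_hits / requests
```

With weights 90/10, roughly 10% of simulated requests land on the canary target.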
Traffic Flow
Client Request
|
v
Kong Route --> Kong Service --> Kong Upstream (ai-service-upstream)
|
+------------+------------+
| |
Stable Target Canary Target
(weight: 90) (weight: 10)
ai-service-v1:8000 ai-service-v2:8000Progressive Rollout Strategy
A typical canary deployment follows a progressive weight increase:
| Stage | Canary Weight | Duration | Action |
|---|---|---|---|
| 1 | 5% | 15 minutes | Initial smoke test |
| 2 | 10% | 30 minutes | Monitor error rates |
| 3 | 25% | 1 hour | Validate performance |
| 4 | 50% | 2 hours | Load testing |
| 5 | 100% | -- | Full promotion |
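The schedule above can be captured as data, with a small helper to step through it. A hypothetical sketch — the stages mirror the table, but no such helper is asserted to exist in the codebase:

```python
# (canary weight %, soak duration in minutes); None = no further wait.
ROLLOUT_STAGES = [(5, 15), (10, 30), (25, 60), (50, 120), (100, None)]

def next_weight(current_weight):
    """Return the next canary weight in the progression,
    or None once the canary is at 100%."""
    for weight, _duration in ROLLOUT_STAGES:
        if weight > current_weight:
            return weight
    return None
```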
At each stage, update the canary weight:
```bash
curl -X POST http://localhost:8080/api/v1/gateway/services/ai-service/canary \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${TOKEN}" \
  -d '{
    "stableHost": "ai-service-v1.tenant-acme",
    "stablePort": 8000,
    "canaryHost": "ai-service-v2.tenant-acme",
    "canaryPort": 8000,
    "canaryWeight": 25
  }'
```

## Rollback
To roll back a canary deployment, set the canary weight to 0:
```bash
curl -X POST http://localhost:8080/api/v1/gateway/services/ai-service/canary \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${TOKEN}" \
  -d '{
    "stableHost": "ai-service-v1.tenant-acme",
    "stablePort": 8000,
    "canaryHost": "ai-service-v2.tenant-acme",
    "canaryPort": 8000,
    "canaryWeight": 0
  }'
```

Canary deployments work at the Kong upstream level. The service route must point to the upstream (not directly to a target host) for traffic splitting to take effect.
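Since rollback is just the same request with `canaryWeight` forced to 0, automation can wrap it in a small helper. A hypothetical sketch (`rollback_payload` is not part of the gateway API; it only builds the request body shown above):

```python
import json

def rollback_payload(config):
    """Build the rollback request body: same hosts and ports,
    canaryWeight forced to 0. `config` is a dict in the shape
    posted to the canary endpoint."""
    body = dict(config)
    body["canaryWeight"] = 0
    return json.dumps(body)
```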