MeshTrace
This policy uses the new policy matching algorithm. Do not combine it with TrafficTrace.
This policy enables publishing traces to a third party tracing solution.
Tracing is supported over HTTP, HTTP2, and gRPC protocols. You must explicitly specify the protocol for each service and data plane proxy you want to enable tracing for.
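For example, on Kubernetes one way to declare the protocol is the appProtocol field on the Service port. Here is a minimal sketch; the demo-app Service name and port are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app
  ports:
    - port: 5000
      appProtocol: http # tells Kuma to treat traffic on this port as HTTP so it can be traced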
Kuma currently supports the following trace exposition formats:
- zipkin: traces in this format can be sent to many different tracing backends
- datadog
- opentelemetry
Services still need to be instrumented to preserve the trace chain across requests between different services.
You can instrument with a language library of your choice (for zipkin and for datadog). For HTTP you can also manually forward the following headers:
- x-request-id
- x-b3-traceid
- x-b3-parentspanid
- x-b3-spanid
- x-b3-sampled
- x-b3-flags
TargetRef support matrix
| targetRef | Allowed kinds | 
|---|---|
| targetRef.kind | Mesh, MeshSubset, MeshService, MeshServiceSubset | 
To learn more about the information in this table, see the matching docs.
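For example, you can scope tracing to a single service with a MeshService targetRef. The following is a minimal sketch; the backend_kuma-demo_svc_3001 service name and the collector URL are illustrative:
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: trace-backend
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: MeshService
    name: backend_kuma-demo_svc_3001 # kuma.io/service value of the service to trace
  default:
    backends:
      - type: Zipkin
        zipkin:
          url: http://jaeger-collector:9411/api/v2/spans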
Configuration
Sampling
Most of the time setting only overall is sufficient. random and client are for advanced use cases.
You can configure sampling settings equivalent to Envoy's client, random, and overall sampling.
Each value is a percentage between 0 and 100.
Example:
sampling:
  overall: 80
  random: 60
  client: 40
Tags
You can add tags to trace metadata by directly supplying the value (literal) or by taking it from a header (header).
Example:
tags:
  - name: team
    literal: core
  - name: env
    header:
      name: x-env
      default: prod
  - name: version
    header:
      name: x-version
If a value is missing for header, default is used.
If default isn’t provided, then the tag won’t be added.
Backends
Datadog
You can configure a Datadog backend with a url and splitService.
Example:
datadog:
  url: http://my-agent:8080 # Required. The URL of a running Datadog agent
  splitService: true # Defaults to false. If true, inbound and outbound requests are split into different services in Datadog
The splitService property determines if Datadog service names should be split based on traffic direction and destination.
For example, with splitService: true and a backend service that communicates with a couple of databases,
you would get service names like backend_INBOUND, backend_OUTBOUND_db1, and backend_OUTBOUND_db2 in Datadog.
Zipkin
In most cases the only field you'll want to set is url.
Example:
zipkin:
  url: http://jaeger-collector:9411/api/v2/spans # Required. The URL of a Zipkin collector to send traces to
  traceId128bit: false # Defaults to false, which generates 64-bit trace IDs. If true, trace IDs are 128-bit
  apiVersion: httpJson # Defaults to httpJson. The version of the Zipkin API; can be httpJson or httpProto
  sharedSpanContext: false # Defaults to true. If true, inbound and outbound traffic share the same span
OpenTelemetry
The only field you can set is endpoint.
Example:
openTelemetry:
  endpoint: otel-collector:4317 # Required. Address of the OpenTelemetry collector
Examples
Zipkin
Simple example:
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default # optional, defaults to `default` if unset
spec:
  targetRef:
    kind: Mesh
  default:
    backends:
      - type: Zipkin
        zipkin:
          url: http://jaeger-collector.mesh-observability:9411/api/v2/spans
Full example:
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default # optional, defaults to `default` if unset
spec:
  targetRef:
    kind: Mesh
  default:
    backends:
      - type: Zipkin
        zipkin:
          url: http://jaeger-collector.mesh-observability:9411/api/v2/spans
          apiVersion: httpJson
    tags:
      - name: team
        literal: core
      - name: env
        header:
          name: x-env
          default: prod
      - name: version
        header:
          name: x-version
    sampling:
      overall: 80
      random: 60
      client: 40
Apply the configuration with kubectl apply -f [..].
Datadog
This assumes a Datadog agent is configured and running. If you haven't already set it up, check the Datadog observability page.
Simple example:
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default # optional, defaults to `default` if unset
spec:
  targetRef:
    kind: Mesh
  default:
    backends:
      - type: Datadog
        datadog:
          url: http://trace-svc.default.svc.cluster.local:8126
Full example:
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default # optional, defaults to `default` if unset
spec:
  targetRef:
    kind: Mesh
  default:
    backends:
      - type: Datadog
        datadog:
          url: http://trace-svc.default.svc.cluster.local:8126
          splitService: true
    tags:
      - name: team
        literal: core
      - name: env
        header:
          name: x-env
          default: prod
      - name: version
        header:
          name: x-version
    sampling:
      overall: 80
      random: 60
      client: 40
where trace-svc is the name of the Kubernetes Service you specified when you configured the Datadog APM agent.
Apply the configuration with kubectl apply -f [..].
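For reference, a trace-svc Service exposing the agent's APM port might look like the following minimal sketch. The datadog-agent selector label is illustrative; in practice this Service is typically created when you install the agent, as described on the Datadog observability page:
apiVersion: v1
kind: Service
metadata:
  name: trace-svc
spec:
  selector:
    app: datadog-agent # illustrative; match the labels of your agent pods
  ports:
    - port: 8126 # Datadog APM trace intake port
      protocol: TCP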
OpenTelemetry
This assumes an OpenTelemetry collector is configured and running. If you haven't already set one up, check the OpenTelemetry operator.
Simple example:
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default # optional, defaults to `default` if unset
spec:
  targetRef:
    kind: Mesh
  default:
    backends:
      - type: OpenTelemetry
        openTelemetry:
          endpoint: otel-collector:4317
Full example:
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default # optional, defaults to `default` if unset
spec:
  targetRef:
    kind: Mesh
  default:
    backends:
      - type: OpenTelemetry
        openTelemetry:
          endpoint: otel-collector:4317
    tags:
      - name: team
        literal: core
      - name: env
        header:
          name: x-env
          default: prod
      - name: version
        header:
          name: x-version
    sampling:
      overall: 80
      random: 60
      client: 40
where otel-collector is the name of the Kubernetes Service for the OpenTelemetry collector.
Apply the configuration with kubectl apply -f [..].
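If the collector runs in a different namespace than the data plane proxies, you can point endpoint at the fully qualified Service name. Here is a minimal sketch; observability is a hypothetical namespace:
openTelemetry:
  endpoint: otel-collector.observability.svc.cluster.local:4317 # fully qualified Service DNS name, OTLP gRPC port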
Targeting parts of the infrastructure
While you usually want all traces sent to the same tracing backend,
you can target parts of a Mesh with a finer-grained targetRef and a dedicated backend to trace different paths of your service traffic.
This is especially useful when, for example, you want traces to never leave a region or a cloud.
In this example, there are two zones, east and west, each with its own Zipkin collector: east.zipkincollector:9411/api/v2/spans and west.zipkincollector:9411/api/v2/spans.
We want data plane proxies in each zone to send traces only to their local collector.
To do this, we use a targetRef kind value of MeshSubset with the kuma.io/zone tag to select which data plane proxies each policy applies to.
West only policy:
type: MeshTrace
name: trace-west
mesh: default
spec:
  targetRef:
    kind: MeshSubset
    tags:
      kuma.io/zone: west
  default:
    backends:
      - type: Zipkin
        zipkin:
          url: http://west.zipkincollector:9411/api/v2/spans
East only policy:
type: MeshTrace
name: trace-east
mesh: default
spec:
  targetRef:
    kind: MeshSubset
    tags:
      kuma.io/zone: east
  default:
    backends:
      - type: Zipkin
        zipkin:
          url: http://east.zipkincollector:9411/api/v2/spans