"Invalid schema: #/properties/admin/properties/address" starting envoy trying to run contour with an AWS NLB + SNI #136
Comments
Thanks for trying Contour. The tl;dr is that you need to use the -v2-grpc version of the deployment manifest, because SNI is only supported when using the YAML config format.
Sorry if I got any of these details wrong (phone); I'll comment more when I get to my desk.
On 10 Jan 2018, at 08:00, Cody Maloney wrote:
I'm trying to get Contour running behind an AWS NLB (Contour on every node as a DaemonSet, with the NLB routing to it), with SNI enabled/available. Currently when I try to start Contour, the envoy container in the pod fails with the message below:
2018-01-09T20:49:44.029128893Z [2018-01-09 20:49:44.028][1][info][main] source/server/server.cc:185] initializing epoch 0 (hot restart version=9.200.16384.127)
2018-01-09T20:49:44.030959585Z [2018-01-09 20:49:44.030][1][critical][main] source/server/server.cc:72] error initializing configuration '/config/contour.yaml': JSON at lines 0-0 does not conform to schema.
2018-01-09T20:49:44.030973102Z Invalid schema: #/properties/admin/properties/address
2018-01-09T20:49:44.030976606Z Schema violation: type
2018-01-09T20:49:44.0309796Z Offending document key: #/admin/address
Contour container logs:
2018-01-09T20:43:55.775906688Z 2018/01/09 20:43:55 args: [serve --incluster]
2018-01-09T20:43:55.782874808Z 2018/01/09 20:43:55 watch(endpoints): started
2018-01-09T20:43:55.783410261Z 2018/01/09 20:43:55 buffer.loop: started
2018-01-09T20:43:55.783419089Z 2018/01/09 20:43:55 watch(services): started
2018-01-09T20:43:55.783936195Z 2018/01/09 20:43:55 watch(secrets): started
2018-01-09T20:43:55.784631385Z 2018/01/09 20:43:55 watch(ingresses): started
2018-01-09T20:43:55.786339245Z 2018/01/09 20:43:55 JSONAPI: started, listening on 127.0.0.1:8000
2018-01-09T20:43:55.786347783Z 2018/01/09 20:43:55 gRPCAPI: started
2018-01-09T20:43:55.86227247Z 2018/01/09 20:43:55 translator: ignoring secret heptio-contour/default-token-05mhw
2018-01-09T20:43:55.862301572Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/default-token-lgw16
2018-01-09T20:43:55.862306113Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/disruption-controller-token-787dh
2018-01-09T20:43:55.862309095Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/dns-controller-token-3qbkn
2018-01-09T20:43:55.862312027Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/endpoint-controller-token-7pjr6
2018-01-09T20:43:55.862314978Z 2018/01/09 20:43:55 translator: ignoring secret default/dockerhub
2018-01-09T20:43:55.862318415Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/horizontal-pod-autoscaler-token-dm6g6
2018-01-09T20:43:55.862321293Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/namespace-controller-token-986qf
2018-01-09T20:43:55.862324032Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/pod-garbage-collector-token-x47b4
2018-01-09T20:43:55.862326813Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/route-controller-token-tvztc
2018-01-09T20:43:55.862329492Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/generic-garbage-collector-token-2vpzp
2018-01-09T20:43:55.862332127Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/job-controller-token-bnz1h
2018-01-09T20:43:55.862334812Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/kube-proxy-token-9btpt
2018-01-09T20:43:55.862340266Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/certificate-controller-token-0h541
2018-01-09T20:43:55.862342995Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/kube-dns-autoscaler-token-7d49c
2018-01-09T20:43:55.862345631Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/persistent-volume-binder-token-c95mq
2018-01-09T20:43:55.862348354Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/replication-controller-token-7vcrk
2018-01-09T20:43:55.862350988Z 2018/01/09 20:43:55 translator: ignoring secret default/default-token-125k8
2018-01-09T20:43:55.862353641Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/replicaset-controller-token-kmx9x
2018-01-09T20:43:55.862356323Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/statefulset-controller-token-fv7zh
2018-01-09T20:43:55.862370014Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/ttl-controller-token-4bdqx
2018-01-09T20:43:55.865076651Z 2018/01/09 20:43:55 translator: ignoring secret heptio-contour/contour-token-4b0b9
2018-01-09T20:43:55.865087362Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/daemon-set-controller-token-n87hl
2018-01-09T20:43:55.865090432Z 2018/01/09 20:43:55 translator: ignoring secret kube-public/default-token-thhgt
2018-01-09T20:43:55.865093169Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/deployment-controller-token-p584n
2018-01-09T20:43:55.86509593Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/kube-dns-token-srg21
2018-01-09T20:43:55.865098933Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/resourcequota-controller-token-t7nx6
2018-01-09T20:43:55.865101645Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/service-account-controller-token-c4cvg
2018-01-09T20:43:55.865104417Z 2018/01/09 20:43:55 translator: ignoring secret default/ecllegacysecrets
2018-01-09T20:43:55.865107049Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/controller-discovery-token-p114x
2018-01-09T20:43:55.865109683Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/cronjob-controller-token-9lt8t
2018-01-09T20:43:55.865112418Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/node-controller-token-rsjgq
2018-01-09T20:43:55.865115016Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/service-controller-token-2fm8l
2018-01-09T20:43:55.865117706Z 2018/01/09 20:43:55 translator: ignoring secret kube-system/attachdetach-controller-token-4pcls
My 02-contour.yaml (adapted from the DaemonSet and grpc-v2 examples):
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: contour
  name: contour
  namespace: heptio-contour
spec:
  selector:
    matchLabels:
      app: contour
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: contour
    spec:
      hostNetwork: true
      containers:
      - image: docker.io/envoyproxy/envoy-alpine:latest
        name: envoy
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8443
          name: https
        command: ["envoy"]
        args: ["-c", "/config/contour.yaml", "--service-cluster", "cluster0", "--service-node", "node0", "-l", "info"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      - image: gcr.io/heptio-images/contour:master
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
          name: contour
        name: contour
        command: ["contour"]
        args: ["serve", "--incluster"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      initContainers:
      - image: gcr.io/heptio-images/contour:master
        imagePullPolicy: Always
        name: envoy-initconfig
        command: ["contour"]
        args: ["bootstrap", "/config/contour.yaml"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      volumes:
      - name: contour-config
        emptyDir: {}
      dnsPolicy: ClusterFirst
      serviceAccountName: contour
      terminationGracePeriodSeconds: 30
---
I think I have the bits to make it use the YAML format in there (took the command lines from the -v2-grpc ones). It's definitely possible I missed one of the flags, but here's what I merged/copied across (which I believe all match what I have above):
envoy flags: https://github.com/heptio/contour/blob/master/deployment/deployment-grpc-v2/02-contour.yaml#L27
contour flags: https://github.com/heptio/contour/blob/master/deployment/deployment-grpc-v2/02-contour.yaml#L35
plus bootstrap writing /config/contour.yaml, and the containerPort for 8443.
Yup, that looks right. Does it work? If not, what output do you get?
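The pieces that switch envoy onto the v2 YAML bootstrap, condensed from the manifest above, are the envoy command/args and the bootstrap initContainer. A sketch of just those fields (not a complete container spec):

# envoy container: load the YAML bootstrap from the shared volume
command: ["envoy"]
args: ["-c", "/config/contour.yaml", "--service-cluster", "cluster0", "--service-node", "node0", "-l", "info"]

# envoy-initconfig initContainer: contour writes that bootstrap before envoy starts
command: ["contour"]
args: ["bootstrap", "/config/contour.yaml"]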
The contour container starts (logs above). Envoy doesn't, however:
2018-01-09T22:17:18.644237455Z [2018-01-09 22:17:18.634][1][info][main] source/server/server.cc:185] initializing epoch 0 (hot restart version=9.200.16384.127)
2018-01-09T22:17:18.644268369Z [2018-01-09 22:17:18.637][1][critical][main] source/server/server.cc:72] error initializing configuration '/config/contour.yaml': JSON at lines 0-0 does not conform to schema.
2018-01-09T22:17:18.644275108Z Invalid schema: #/properties/admin/properties/address
2018-01-09T22:17:18.644278305Z Schema violation: type
2018-01-09T22:17:18.644281345Z Offending document key: #/admin/address
My guess is your initContainer is still somehow referencing JSON; please check https://github.com/heptio/contour/blob/master/deployment/deployment-grpc-v2/02-contour.yaml#L44
@davecheney deploying the deployment-grpc-v2 manifests as currently written gives me the same issue. Perhaps envoy changed something which broke it?
I think the issue is: https://github.com/heptio/contour/blob/a44fab202b3326e16d170f54fb513aad93a04eb4/internal/envoy/config.go#L145 needs to be tcp://. Seeing if I can't get contour building a container locally and pushing, to test/debug.
Definitely envoy doesn't like the default YAML config file; trying to figure out why.
Envoy doesn't like the "dynamic_resources" section: removing that from the YAML makes envoy start properly. Trying to figure out what envoy changed / how it should work now.
Can you try to recover the contents of `/config/contour.yaml` from a failed container? That will make it clear to me where the failure is.
dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      cluster_name: [xds_cluster]
  cds_config:
    api_config_source:
      api_type: GRPC
      cluster_name: [xds_cluster]
static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: { seconds: 5 }
    type: STATIC
    hosts:
    - socket_address:
        address: 127.0.0.1
        port_value: 8001
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
admin:
  access_log_path: /dev/null
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 9001

I mounted that into a docker.io/envoyproxy/envoy-alpine:latest container locally, and envoy gives the same error with that config file. In a local copy I've been editing the dynamic_resources section, which looks like it needs to change per https://www.envoyproxy.io/docs/envoy/latest/api-v2/grpc_service.proto.html#envoy-api-msg-grpcservice, which wants either envoy_grpc or google_grpc sub-keys. Looks like the config format changed some...
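For what it's worth, under the GrpcService schema that proto page describes, the api_config_source blocks would presumably end up shaped like the sketch below. This is my reading of the linked docs, reusing the xds_cluster name from the bootstrap above; cluster_name moves under a grpc_services list with an envoy_grpc entry:

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      # the target cluster now lives under grpc_services instead of cluster_name
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster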
Feeding latest envoy the sample config from https://www.envoyproxy.io/docs/envoy/latest/configuration/overview/v2_overview#dynamic also seems to fail, so I'm pretty sure it's envoy that's broken, not contour.
The problem is this file is YAML formatted, which is as expected, but envoy is trying to process it as JSON, possibly because it fails to validate as YAML.
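That would also explain why the error singles out #/admin/address: the v1 JSON schema expects address to be a flat tcp:// string, while the v2 bootstrap makes it a socket_address object, so a fallback to v1 validation fails on exactly that key. A hypothetical side-by-side for illustration (the v1 form below is my assumption, not taken from this thread):

# v2 bootstrap form (what contour writes): address is an object
admin:
  access_log_path: /dev/null
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 9001

# v1 JSON-schema form: address is a flat string
admin:
  access_log_path: /dev/null
  address: tcp://127.0.0.1:9001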
Thanks. I've just confirmed the same thing locally. I suspect the envoy devs have changed the format of the configuration file recently. I'll check their commits.
Upstream issue for the missing documentation change: envoyproxy/envoy#2347
@cmaloney thanks for your patience. The fix has been committed to master and is available in the :master image. In testing on my cluster this change has fixed the invalid configuration. Please reopen if the issue persists for you.
Updates projectcontour#136. Backport of the grpc_services addition from projectcontour#136. Signed-off-by: Dave Cheney <[email protected]>
Backport fix for #136 to release-0.2