
Bump elasticsearch and kibana to 5.4.0 #45589

Merged May 26, 2017 (1 commit)

Conversation

@it-svit commented May 10, 2017

What this PR does / why we need it: Updates the Elasticsearch and Kibana Docker image assets to version 5.4.0.
Release note:

Upgrade Elasticsearch Addon to v5.4.0

@k8s-ci-robot (Contributor)

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://github.com/kubernetes/kubernetes/wiki/CLA-FAQ to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot added the cncf-cla: no label (indicates the PR's author has not signed the CNCF CLA) on May 10, 2017
@k8s-reviewable

This change is Reviewable

@k8s-ci-robot (Contributor)

Hi @it-svit. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with @k8s-bot ok to test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot added the needs-ok-to-test label (indicates a PR that requires an org member to verify it is safe to test) on May 10, 2017
@k8s-github-robot added the size/M label (denotes a PR that changes 30-99 lines, ignoring generated files) and the release-note-label-needed label on May 10, 2017
@it-svit (Author) commented May 10, 2017

Please recheck the CLA.

@piosz (Member) commented May 10, 2017

@k8s-bot ok to test

@k8s-ci-robot removed the needs-ok-to-test label on May 10, 2017
@crassirostris self-requested a review May 10, 2017 14:11
@crassirostris self-assigned this and unassigned @piosz on May 10, 2017
@crassirostris

@it-svit Thanks a lot for your PR! I'll build the image and test it manually, but overall LGTM.

@it-svit (Author) commented May 10, 2017

It might be useful for testing:
https://hub.docker.com/r/itsvit/kibana/
https://hub.docker.com/r/itsvit/elasticsearch/

The Kubernetes manifests should also be updated a bit.

kibana-controller.yaml

# This file should be kept in sync with cluster/addons/fluentd-elasticsearch/kibana-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: itsvit/kibana:v5.4.0
        imagePullPolicy: Always
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 500m
          requests:
            cpu: 500m
        env:
          - name: "ELASTICSEARCH_URL"
            value: "http://elasticsearch-logging:9200"
          - name: "KIBANA_BASE_URL"
            value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
          - name: "KIBANA_HOST"
            value: "0.0.0.0"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP

@crassirostris

@it-svit I missed one detail earlier. Could you please update the image version in the Makefile to match the ES version? In both images.

@crassirostris

@it-svit While testing, I've encountered a problem:

[2017-05-11T18:15:00,292][INFO ][o.e.n.Node               ] [bf8abf860414] initializing ...
[2017-05-11T18:15:00,345][INFO ][o.e.e.NodeEnvironment    ] [bf8abf860414] using [1] data paths, mounts [[/data (/dev/mapper/dhcp--100--104--85--171--vg-usr+local+google)]], net usable_space [137.8gb], net total_space [400.7gb], spins? [possibly], types [ext4]
[2017-05-11T18:15:00,346][INFO ][o.e.e.NodeEnvironment    ] [bf8abf860414] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-05-11T18:15:00,348][INFO ][o.e.n.Node               ] [bf8abf860414] node name [bf8abf860414], node ID [CjwqslhMSUuI6383ddfFxQ]
[2017-05-11T18:15:00,348][INFO ][o.e.n.Node               ] [bf8abf860414] version[5.4.0], pid[1], build[780f8c4/2017-04-28T17:43:27.229Z], OS[Linux/3.13.0-108-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_111/25.111-b14]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [aggs-matrix-stats]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [ingest-common]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [lang-expression]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [lang-groovy]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [lang-mustache]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [lang-painless]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [percolator]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [reindex]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [transport-netty3]
[2017-05-11T18:15:00,985][INFO ][o.e.p.PluginsService     ] [bf8abf860414] loaded module [transport-netty4]
[2017-05-11T18:15:00,986][INFO ][o.e.p.PluginsService     ] [bf8abf860414] no plugins loaded
[2017-05-11T18:15:02,219][INFO ][o.e.d.DiscoveryModule    ] [bf8abf860414] using discovery type [zen]
[2017-05-11T18:15:02,554][INFO ][o.e.n.Node               ] [bf8abf860414] initialized
[2017-05-11T18:15:02,554][INFO ][o.e.n.Node               ] [bf8abf860414] starting ...
[2017-05-11T18:15:02,692][INFO ][o.e.t.TransportService   ] [bf8abf860414] publish_address {192.168.9.2:9300}, bound_addresses {[::]:9300}
[2017-05-11T18:15:02,697][INFO ][o.e.b.BootstrapChecks    ] [bf8abf860414] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: bootstrap checks failed
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2017-05-11T18:15:02,701][INFO ][o.e.n.Node               ] [bf8abf860414] stopping ...
[2017-05-11T18:15:02,717][INFO ][o.e.n.Node               ] [bf8abf860414] stopped
[2017-05-11T18:15:02,717][INFO ][o.e.n.Node               ] [bf8abf860414] closing ...
[2017-05-11T18:15:02,726][INFO ][o.e.n.Node               ] [bf8abf860414] closed

Have you seen it?

@k8s-github-robot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on May 16, 2017
@it-svit (Author) commented May 16, 2017

I've noticed that the Elasticsearch base image was changed to Alpine, which caused a merge conflict. That needs to be resolved first.

Regarding the vm.max_map_count issue: yes, the problem is related to the default vm.max_map_count setting in the OS.
https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html

I'm using a CoreOS-based Kubernetes deployment via kube-aws (https://github.com/kubernetes-incubator/kube-aws).

I've added the following options to the cloud-init files:

coreos:
  units:
    - name: sysctl-update.service
      command: start
      content: |
        [Unit]
        Description=Applies Linux kernel parameters
        Before=multi-user.target

        [Service]
        Restart=on-failure
        RemainAfterExit=true
        ExecStart=/usr/sbin/sysctl --system

        [Install]
        WantedBy=multi-user.target
write_files:
  - path: /etc/sysctl.d/vm.conf
    owner: root:root
    permissions: 0644
    content: |
      vm.max_map_count=262144

@k8s-github-robot removed the needs-rebase label on May 16, 2017
@crassirostris

@it-svit Sorry for the late response

Yeah, I understand the problem, but I think it's a bad experience for users to have to change something on their host machines to make this deployment work. Do you think it's possible to configure the container in such a way that it works on the host without this change?

@crassirostris

/cc @coffeepac

@coffeepac (Contributor)

@crassirostris @it-svit This can be solved by launching a privileged initialization container that raises the limit on the host machine by mounting /proc and writing to the limits file manually. I'm not sure that's an appropriate solution to propagate in the kubernetes repo, though; it seems like a bad-actor moment.

This general problem is being talked about here. I am unsure what the appropriate path forward is, but re-launching nodes to pick up a new cloud config seems like a non-starter. I'm open to ideas about how to make this workable.
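
For illustration, a minimal sketch of the init-container approach described above (not part of this PR; assumes Kubernetes 1.6+ spec.initContainers and a busybox image; the names here are hypothetical):

# Hypothetical pod-spec fragment: a privileged init container raises the
# host's vm.max_map_count before Elasticsearch starts. vm.* sysctls are
# not namespaced, so this write affects the host kernel.
spec:
  initContainers:
  - name: set-max-map-count
    image: busybox
    securityContext:
      privileged: true
    # Write the limits file under /proc directly, as described above.
    command: ["sh", "-c", "echo 262144 > /proc/sys/vm/max_map_count"]
  containers:
  - name: elasticsearch-logging
    image: itsvit/elasticsearch:v5.4.0-alpine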

@it-svit (Author) commented May 22, 2017

@crassirostris Yes, I think that generally containers should depend on OS specific configuration.
@coffeepac Thank you for the suggestion, it may be an option.

I've found a possible solution but have not tested it yet.
It seems there is a way to limit Elasticsearch's memory usage:
http://stackoverflow.com/questions/18132719/how-to-change-elasticsearch-max-memory-size

I'll try it and share the results with you.
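
For reference, the approach in that link caps the JVM heap (via ES_JAVA_OPTS in Elasticsearch 5.x); a minimal sketch of such a container-env fragment, which on its own would not lower the vm.max_map_count bootstrap requirement:

# Hypothetical env fragment for the elasticsearch-logging container:
# limits the JVM heap to 1 GiB. The vm.max_map_count bootstrap check is
# a separate, host-level requirement and is unaffected by heap settings.
env:
- name: ES_JAVA_OPTS
  value: "-Xms1g -Xmx1g"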

@crassirostris

@it-svit Thanks a lot, appreciate your efforts!

"Yes, I think that generally containers should depend on OS specific configuration"

Should or should not?

@it-svit (Author) commented May 23, 2017

@crassirostris Should not, of course.

@crassirostris

@it-svit What's up?

@it-svit (Author) commented May 24, 2017

@crassirostris I'm sorry, but the pull request was accidentally closed by some of GitHub's internal logic while I was force-pushing to my repository.

@it-svit reopened this May 24, 2017
@it-svit (Author) commented May 24, 2017

@crassirostris @coffeepac It seems Elasticsearch 5.4.0 doesn't have an option to set the vm.max_map_count requirement lower than 262144.

I've applied the changes recommended by @coffeepac: I added sysctl -w vm.max_map_count=262144 to the Elasticsearch container start script run.sh and launched the pod in privileged mode. That helped.

Here is the Elasticsearch Kubernetes manifest I used.

# This file should be kept in sync with cluster/addons/fluentd-elasticsearch/es-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - image: itsvit/elasticsearch:v5.4.0-alpine
        imagePullPolicy: Always
        name: elasticsearch-logging
        securityContext:
          privileged: true
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
      volumes:
      - name: es-persistent-storage
        emptyDir: {}
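
For comparison, the run.sh change described above could also be sketched as a command override on the container in this manifest (hypothetical; assumes the image's start script is /run.sh, which may differ), still requiring privileged: true:

# Hypothetical alternative to baking the sysctl call into run.sh:
# raise the host sysctl, then exec the image's original start script.
command: ["/bin/sh", "-c", "sysctl -w vm.max_map_count=262144 && exec /run.sh"]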

@coffeepac (Contributor)

@it-svit I was unable to get sysctl calls to work from inside a running container when the host OS is containerOS, which is why I resorted to editing the file in /proc directly. What host OS did you try this on?

@crassirostris Is running as a privileged pod and reconfiguring the host machine an okay thing? It's what I do, but I'm not sure we want to share that generally.

Also, @it-svit: this needs a release note ("Upgrade Elasticsearch Addon to v5.4.0" is sufficient), and your CLA check is still failing.

@k8s-github-robot added the release-note label (denotes a PR that will be considered when it comes time to generate release notes) and removed the release-note-label-needed label on May 25, 2017
@it-svit (Author) commented May 25, 2017

@coffeepac I'm using CoreOS as the host OS.
I've added a release note to the first message.

I wonder why my CLA check is failing.
Here is my CLA document history; it was completed on 05/10/2017:
http://joxi.ru/D2Pe48ghGDWpA3

@it-svit (Author) commented May 25, 2017

@coffeepac I've noticed that my email address [email protected] was attached to the "kb-itsvit" account instead of "it-svit". I've fixed that. Please recheck the CLA.

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) and removed the cncf-cla: no label on May 25, 2017
@coffeepac (Contributor)

@it-svit Awesome! If it works on the minimal containerOS, it should work everywhere. Great.

Thanks for the work, and off it goes.

@coffeepac (Contributor) left a comment

/lgtm

@crassirostris

@coffeepac I have this problem on my list

/lgtm
/approve

@k8s-ci-robot added the lgtm label ("Looks good to me"; indicates that a PR is ready to be merged) on May 25, 2017
@k8s-github-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: crassirostris, it-svit

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-github-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on May 25, 2017
@coffeepac (Contributor)

@crassirostris I can approve but not lgtm. Thanks for looking into it.

@k8s-github-robot

Automatic merge from submit-queue (batch tested with PRs 46124, 46434, 46089, 45589, 46045)

@k8s-github-robot merged commit 3439941 into kubernetes:master on May 26, 2017