
chectl server:deploy fails on fresh minikube installation #21862

Closed
scarint opened this issue Dec 4, 2022 · 7 comments
Labels
- area/chectl: Issues related to chectl, the CLI of Che
- kind/question: Questions that haven't been identified as being feature requests or bugs.
- lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
- status/analyzing: An issue has been proposed and it is currently being analyzed for effort and implementation approach

Comments

scarint commented Dec 4, 2022

Summary

I'm not entirely sure whether this is a Che issue, a minikube issue, or something in between.

I am following the instructions here: https://www.eclipse.org/che/docs/stable/administration-guide/installing-che-on-minikube/

After executing `chectl server:deploy --platform minikube` I get the following:

scarint@gameserver:~$ chectl server:deploy --platform minikube
› Current Kubernetes context: 'minikube'
  ✔ Verify Kubernetes API...[OK]
  ✔ Looking for an already existing Eclipse Che instance
    ✔ Verify if Eclipse Che is deployed into namespace "eclipse-che"...[Not Found]
  ↓ Check if OIDC Provider installed [skipped]
    → Dex will be automatically installed as OIDC Identity Provider
  ✔ ✈️  Minikube preflight checklist
    ✔ Verify if kubectl is installed
    ✔ Verify if minikube is installed
    ✔ Verify if minikube is running
    ↓ Start minikube [skipped]
      → Minikube is already running.
    ✔ Check Kubernetes version: [1.25]
    ✔ Verify if minikube ingress addon is enabled
    ↓ Enable minikube ingress addon [skipped]
      → Ingress addon is already enabled.
    ✔ Retrieving minikube IP and domain for ingress URLs...[192.168.49.2.nip.io]
    ✔ Checking minikube version...[1.28.0]
  ✔ Following Eclipse Che logs
    ✔ Start following logs...[OK]
  ❯ Cert Manager v1.8.2
    ⠙ Install Cert Manager
      Wait for Cert Manager
    Install Dev Workspace operator
    Create Namespace eclipse-che
    Deploy Dex
    Deploy Eclipse Che

node:internal/process/promises:246
          triggerUncaughtException(err, true /* fromPromise */);
          ^
AxiosError: connect ECONNREFUSED 0.0.0.0:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16) {
  port: 443,
  address: '0.0.0.0',
  syscall: 'connect',
  code: 'ECONNREFUSED',
  errno: -111,
  config: {
    transitional: {
      silentJSONParsing: true,
      forcedJSONParsing: true,
      clarifyTimeoutError: false
    },
    adapter: [Function: httpAdapter],
    transformRequest: [ [Function (anonymous)] ],
    transformResponse: [ [Function: transformResponse] ],
    timeout: 0,
    xsrfCookieName: 'XSRF-TOKEN',
    xsrfHeaderName: 'X-XSRF-TOKEN',
    maxContentLength: -1,
    maxBodyLength: -1,
    env: {
      FormData: [Function: FormData] {
        LINE_BREAK: '\r\n',
        DEFAULT_CONTENT_TYPE: 'application/octet-stream'
      }
    },
    validateStatus: [Function: validateStatus],
    headers: {
      Accept: 'application/json, text/plain, */*',
      'Content-Type': 'application/json',
      'user-agent': 'analytics-node/6.2.0',
      'Content-Length': 528
    },
    auth: { username: 'recqCyrVeDBAEdmzvfkHPcclYLc63TM6' },
    method: 'post',
    url: 'https://api.segment.io/v1/batch',
    data: '{"batch":[{"anonymousId":"52a32137-b9d5-425f-b4b3-0457231cec60","traits":{"timezone":"America/Los_Angeles","os_name":"linux","os_version":"5.10.0-19-amd64","os_distribution":"Debian","locale":"en-US"},"type":"identify","context":{"library":{"name":"analytics-node","version":"6.2.0"}},"_metadata":{"nodeVersion":"16.13.2"},"timestamp":"2022-12-04T22:43:00.607Z","messageId":"node-d072b9f094678048a116826aa78888f6-36f18818-3463-4b18-b9cf-939f77715dd0"}],"timestamp":"2022-12-04T22:43:00.608Z","sentAt":"2022-12-04T22:43:00.608Z"}',
    'axios-retry': { retryCount: 3, lastRequestTime: 1670193782170 }
  },
  request: <ref *3> Writable {
    _writableState: WritableState {
      objectMode: false,
      highWaterMark: 16384,
      finalCalled: false,
      needDrain: false,
      ending: false,
      ended: false,
      finished: false,
      destroyed: false,
      decodeStrings: true,
      defaultEncoding: 'utf8',
      length: 0,
      writing: false,
      corked: 0,
      sync: true,
      bufferProcessing: false,
      onwrite: [Function: bound onwrite],
      writecb: null,
      writelen: 0,
      afterWriteTickInfo: null,
      buffered: [],
      bufferedIndex: 0,
      allBuffers: true,
      allNoop: true,
      pendingcb: 0,
      constructed: true,
      prefinished: false,
      errorEmitted: false,
      emitClose: true,
      autoDestroy: true,
      errored: null,
      closed: false,
      closeEmitted: false,
      [Symbol(kOnFinished)]: []
    },
    _events: [Object: null prototype] {
      response: [Function: handleResponse],
      error: [Function: handleRequestError],
      socket: [Function: handleRequestSocket]
    },
    _eventsCount: 3,
    _maxListeners: undefined,
    _options: {
      maxRedirects: 21,
      maxBodyLength: 10485760,
      protocol: 'https:',
      path: '/v1/batch',
      method: 'POST',
      headers: {
        Accept: 'application/json, text/plain, */*',
        'Content-Type': 'application/json',
        'user-agent': 'analytics-node/6.2.0',
        'Content-Length': 528
      },
      agent: undefined,
      agents: { http: undefined, https: undefined },
      auth: 'recqCyrVeDBAEdmzvfkHPcclYLc63TM6:',
      hostname: 'api.segment.io',
      port: null,
      nativeProtocols: {
        'http:': {
          _connectionListener: [Function: connectionListener],
          METHODS: [
            'ACL',         'BIND',       'CHECKOUT',
            'CONNECT',     'COPY',       'DELETE',
            'GET',         'HEAD',       'LINK',
            'LOCK',        'M-SEARCH',   'MERGE',
            'MKACTIVITY',  'MKCALENDAR', 'MKCOL',
            'MOVE',        'NOTIFY',     'OPTIONS',
            'PATCH',       'POST',       'PROPFIND',
            'PROPPATCH',   'PURGE',      'PUT',
            'REBIND',      'REPORT',     'SEARCH',
            'SOURCE',      'SUBSCRIBE',  'TRACE',
            'UNBIND',      'UNLINK',     'UNLOCK',
            'UNSUBSCRIBE'
          ],
          STATUS_CODES: {
            '100': 'Continue',
            '101': 'Switching Protocols',
            '102': 'Processing',
            '103': 'Early Hints',
            '200': 'OK',
            '201': 'Created',
            '202': 'Accepted',
            '203': 'Non-Authoritative Information',
            '204': 'No Content',
            '205': 'Reset Content',
            '206': 'Partial Content',
            '207': 'Multi-Status',
            '208': 'Already Reported',
            '226': 'IM Used',
            '300': 'Multiple Choices',
            '301': 'Moved Permanently',
            '302': 'Found',
            '303': 'See Other',
            '304': 'Not Modified',
            '305': 'Use Proxy',
            '307': 'Temporary Redirect',
            '308': 'Permanent Redirect',
            '400': 'Bad Request',
            '401': 'Unauthorized',
            '402': 'Payment Required',
            '403': 'Forbidden',
            '404': 'Not Found',
            '405': 'Method Not Allowed',
            '406': 'Not Acceptable',
            '407': 'Proxy Authentication Required',
            '408': 'Request Timeout',
            '409': 'Conflict',
            '410': 'Gone',
            '411': 'Length Required',
            '412': 'Precondition Failed',
            '413': 'Payload Too Large',
            '414': 'URI Too Long',
            '415': 'Unsupported Media Type',
            '416': 'Range Not Satisfiable',
            '417': 'Expectation Failed',
            '418': "I'm a Teapot",
            '421': 'Misdirected Request',
            '422': 'Unprocessable Entity',
            '423': 'Locked',
            '424': 'Failed Dependency',
            '425': 'Too Early',
            '426': 'Upgrade Required',
            '428': 'Precondition Required',
            '429': 'Too Many Requests',
            '431': 'Request Header Fields Too Large',
            '451': 'Unavailable For Legal Reasons',
            '500': 'Internal Server Error',
            '501': 'Not Implemented',
            '502': 'Bad Gateway',
            '503': 'Service Unavailable',
            '504': 'Gateway Timeout',
            '505': 'HTTP Version Not Supported',
            '506': 'Variant Also Negotiates',
            '507': 'Insufficient Storage',
            '508': 'Loop Detected',
            '509': 'Bandwidth Limit Exceeded',
            '510': 'Not Extended',
            '511': 'Network Authentication Required'
          },
          Agent: [Function: Agent] { defaultMaxSockets: Infinity },
          ClientRequest: [Function: ClientRequest],
          IncomingMessage: [Function: IncomingMessage],
          OutgoingMessage: [Function: OutgoingMessage],
          Server: [Function: Server],
          ServerResponse: [Function: ServerResponse],
          createServer: [Function: createServer],
          validateHeaderName: [Function: __node_internal_],
          validateHeaderValue: [Function: __node_internal_],
          get: [Function: get],
          request: [Function: request],
          maxHeaderSize: [Getter],
          globalAgent: [Getter/Setter]
        },
        'https:': {
          Agent: [Function: Agent],
          globalAgent: Agent {
            _events: [Object: null prototype],
            _eventsCount: 2,
            _maxListeners: undefined,
            defaultPort: 443,
            protocol: 'https:',
            options: [Object: null prototype],
            requests: [Object: null prototype] {},
            sockets: [Object: null prototype],
            freeSockets: [Object: null prototype] {},
            keepAliveMsecs: 1000,
            keepAlive: false,
            maxSockets: Infinity,
            maxFreeSockets: 256,
            scheduling: 'lifo',
            maxTotalSockets: Infinity,
            totalSocketCount: 1,
            maxCachedSessions: 100,
            _sessionCache: [Object],
            [Symbol(kCapture)]: false
          },
          Server: [Function: Server],
          createServer: [Function: createServer],
          get: [Function: get],
          request: [Function: request]
        }
      },
      pathname: '/v1/batch'
    },
    _ended: false,
    _ending: true,
    _redirectCount: 0,
    _redirects: [],
    _requestBodyLength: 528,
    _requestBodyBuffers: [
      {
        data: Buffer(528) [Uint8Array] [
          123,  34,  98,  97, 116,  99, 104,  34,  58,  91, 123,  34,
           97, 110, 111, 110, 121, 109, 111, 117, 115,  73, 100,  34,
           58,  34,  53,  50,  97,  51,  50,  49,  51,  55,  45,  98,
           57, 100,  53,  45,  52,  50,  53, 102,  45,  98,  52,  98,
           51,  45,  48,  52,  53,  55,  50,  51,  49,  99, 101,  99,
           54,  48,  34,  44,  34, 116, 114,  97, 105, 116, 115,  34,
           58, 123,  34, 116, 105, 109, 101, 122, 111, 110, 101,  34,
           58,  34,  65, 109, 101, 114, 105,  99,  97,  47,  76, 111,
          115,  95,  65, 110,
          ... 428 more items
        ],
        encoding: undefined
      }
    ],
    _onNativeResponse: [Function (anonymous)],
    _currentRequest: <ref *1> ClientRequest {
      _events: [Object: null prototype] {
        response: [Function: bound onceWrapper] {
          listener: [Function (anonymous)]
        },
        abort: [Function (anonymous)],
        aborted: [Function (anonymous)],
        connect: [Function (anonymous)],
        error: [Function (anonymous)],
        socket: [Function (anonymous)],
        timeout: [Function (anonymous)]
      },
      _eventsCount: 7,
      _maxListeners: undefined,
      outputData: [],
      outputSize: 0,
      writable: true,
      destroyed: false,
      _last: true,
      chunkedEncoding: false,
      shouldKeepAlive: false,
      maxRequestsOnConnectionReached: false,
      _defaultKeepAlive: true,
      useChunkedEncodingByDefault: true,
      sendDate: false,
      _removedConnection: false,
      _removedContLen: false,
      _removedTE: false,
      _contentLength: null,
      _hasBody: true,
      _trailer: '',
      finished: false,
      _headerSent: true,
      _closed: false,
      socket: <ref *2> TLSSocket {
        _tlsOptions: {
          allowHalfOpen: undefined,
          pipe: false,
          secureContext: SecureContext { context: SecureContext {} },
          isServer: false,
          requestCert: true,
          rejectUnauthorized: true,
          session: undefined,
          ALPNProtocols: undefined,
          requestOCSP: undefined,
          enableTrace: undefined,
          pskCallback: undefined,
          highWaterMark: undefined,
          onread: undefined,
          signal: undefined
        },
        _secureEstablished: false,
        _securePending: false,
        _newSessionPending: false,
        _controlReleased: true,
        secureConnecting: true,
        _SNICallback: null,
        servername: null,
        alpnProtocol: null,
        authorized: false,
        authorizationError: null,
        encrypted: true,
        _events: [Object: null prototype] {
          close: [
            [Function: onSocketCloseDestroySSL],
            [Function],
            [Function: onClose],
            [Function: socketCloseListener]
          ],
          end: [ [Function: onConnectEnd], [Function: onReadableStreamEnd] ],
          newListener: [Function: keylogNewListener],
          connect: [ [Function], [Function], [Function] ],
          secure: [Function: onConnectSecure],
          session: [Function (anonymous)],
          free: [Function: onFree],
          timeout: [Function: onTimeout],
          agentRemove: [Function: onRemove],
          error: [Function: socketErrorListener],
          drain: [Function: ondrain]
        },
        _eventsCount: 11,
        connecting: false,
        _hadError: true,
        _parent: null,
        _host: 'api.segment.io',
        _readableState: ReadableState {
          objectMode: false,
          highWaterMark: 16384,
          buffer: BufferList { head: null, tail: null, length: 0 },
          length: 0,
          pipes: [],
          flowing: true,
          ended: false,
          endEmitted: false,
          reading: true,
          constructed: true,
          sync: false,
          needReadable: true,
          emittedReadable: false,
          readableListening: false,
          resumeScheduled: false,
          errorEmitted: true,
          emitClose: false,
          autoDestroy: true,
          destroyed: true,
          errored: Error: connect ECONNREFUSED 0.0.0.0:443
              at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16) {
            errno: -111,
            code: 'ECONNREFUSED',
            syscall: 'connect',
            address: '0.0.0.0',
            port: 443
          },
          closed: true,
          closeEmitted: true,
          defaultEncoding: 'utf8',
          awaitDrainWriters: null,
          multiAwaitDrain: false,
          readingMore: false,
          dataEmitted: false,
          decoder: null,
          encoding: null,
          [Symbol(kPaused)]: false
        },
        _maxListeners: undefined,
        _writableState: WritableState {
          objectMode: false,
          highWaterMark: 16384,
          finalCalled: false,
          needDrain: false,
          ending: false,
          ended: false,
          finished: false,
          destroyed: true,
          decodeStrings: false,
          defaultEncoding: 'utf8',
          length: 793,
          writing: true,
          corked: 0,
          sync: false,
          bufferProcessing: false,
          onwrite: [Function: bound onwrite],
          writecb: [Function (anonymous)],
          writelen: 793,
          afterWriteTickInfo: null,
          buffered: [],
          bufferedIndex: 0,
          allBuffers: true,
          allNoop: true,
          pendingcb: 1,
          constructed: true,
          prefinished: false,
          errorEmitted: true,
          emitClose: false,
          autoDestroy: true,
          errored: Error: connect ECONNREFUSED 0.0.0.0:443
              at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16) {
            errno: -111,
            code: 'ECONNREFUSED',
            syscall: 'connect',
            address: '0.0.0.0',
            port: 443
          },
          closed: true,
          closeEmitted: true,
          [Symbol(kOnFinished)]: []
        },
        allowHalfOpen: false,
        _sockname: null,
        _pendingData: [
          {
            chunk: 'POST /v1/batch HTTP/1.1\r\n' +
              'Accept: application/json, text/plain, */*\r\n' +
              'Content-Type: application/json\r\n' +
              'user-agent: analytics-node/6.2.0\r\n' +
              'Content-Length: 528\r\n' +
              'Host: api.segment.io\r\n' +
              'Authorization: Basic cmVjcUN5clZlREJBRWRtenZma0hQY2NsWUxjNjNUTTY6\r\n' +
              'Connection: close\r\n' +
              '\r\n',
            encoding: 'latin1',
            callback: [Function: nop]
          },
          {
            chunk: [Buffer [Uint8Array]],
            encoding: 'buffer',
            callback: [Function (anonymous)]
          },
          allBuffers: false
        ],
        _pendingEncoding: '',
        server: undefined,
        _server: null,
        ssl: null,
        _requestCert: true,
        _rejectUnauthorized: true,
        parser: null,
        _httpMessage: [Circular *1],
        [Symbol(res)]: TLSWrap {
          _parent: TCP {
            reading: [Getter/Setter],
            onconnection: null,
            [Symbol(owner_symbol)]: [Circular *2],
            [Symbol(handle_onclose)]: [Function: done]
          },
          _parentWrap: undefined,
          _secureContext: SecureContext { context: SecureContext {} },
          reading: false,
          onkeylog: [Function: onkeylog],
          onhandshakestart: {},
          onhandshakedone: [Function (anonymous)],
          onocspresponse: [Function: onocspresponse],
          onnewsession: [Function: onnewsessionclient],
          onerror: [Function: onerror],
          [Symbol(owner_symbol)]: [Circular *2]
        },
        [Symbol(verified)]: false,
        [Symbol(pendingSession)]: null,
        [Symbol(async_id_symbol)]: 531,
        [Symbol(kHandle)]: null,
        [Symbol(kSetNoDelay)]: false,
        [Symbol(lastWriteQueueSize)]: 0,
        [Symbol(timeout)]: null,
        [Symbol(kBuffer)]: null,
        [Symbol(kBufferCb)]: null,
        [Symbol(kBufferGen)]: null,
        [Symbol(kCapture)]: false,
        [Symbol(kBytesRead)]: 0,
        [Symbol(kBytesWritten)]: 0,
        [Symbol(connect-options)]: {
          rejectUnauthorized: true,
          ciphers: 'TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA256:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!SRP:!CAMELLIA',
          checkServerIdentity: [Function: checkServerIdentity],
          minDHSize: 1024,
          maxRedirects: 21,
          maxBodyLength: 10485760,
          protocol: 'https:',
          path: null,
          method: 'POST',
          headers: {
            Accept: 'application/json, text/plain, */*',
            'Content-Type': 'application/json',
            'user-agent': 'analytics-node/6.2.0',
            'Content-Length': 528
          },
          agent: undefined,
          agents: { http: undefined, https: undefined },
          auth: 'recqCyrVeDBAEdmzvfkHPcclYLc63TM6:',
          hostname: 'api.segment.io',
          port: 443,
          nativeProtocols: { 'http:': [Object], 'https:': [Object] },
          pathname: '/v1/batch',
          _defaultAgent: Agent {
            _events: [Object: null prototype],
            _eventsCount: 2,
            _maxListeners: undefined,
            defaultPort: 443,
            protocol: 'https:',
            options: [Object: null prototype],
            requests: [Object: null prototype] {},
            sockets: [Object: null prototype],
            freeSockets: [Object: null prototype] {},
            keepAliveMsecs: 1000,
            keepAlive: false,
            maxSockets: Infinity,
            maxFreeSockets: 256,
            scheduling: 'lifo',
            maxTotalSockets: Infinity,
            totalSocketCount: 1,
            maxCachedSessions: 100,
            _sessionCache: [Object],
            [Symbol(kCapture)]: false
          },
          host: 'api.segment.io',
          servername: 'api.segment.io',
          _agentKey: 'api.segment.io:443:::::::::::::::::::::',
          encoding: null,
          singleUse: true
        }
      },
      _header: 'POST /v1/batch HTTP/1.1\r\n' +
        'Accept: application/json, text/plain, */*\r\n' +
        'Content-Type: application/json\r\n' +
        'user-agent: analytics-node/6.2.0\r\n' +
        'Content-Length: 528\r\n' +
        'Host: api.segment.io\r\n' +
        'Authorization: Basic cmVjcUN5clZlREJBRWRtenZma0hQY2NsWUxjNjNUTTY6\r\n' +
        'Connection: close\r\n' +
        '\r\n',
      _keepAliveTimeout: 0,
      _onPendingData: [Function: nop],
      agent: Agent {
        _events: [Object: null prototype] {
          free: [Function (anonymous)],
          newListener: [Function: maybeEnableKeylog]
        },
        _eventsCount: 2,
        _maxListeners: undefined,
        defaultPort: 443,
        protocol: 'https:',
        options: [Object: null prototype] { path: null },
        requests: [Object: null prototype] {},
        sockets: [Object: null prototype] {
          'api.segment.io:443:::::::::::::::::::::': [ [TLSSocket] ]
        },
        freeSockets: [Object: null prototype] {},
        keepAliveMsecs: 1000,
        keepAlive: false,
        maxSockets: Infinity,
        maxFreeSockets: 256,
        scheduling: 'lifo',
        maxTotalSockets: Infinity,
        totalSocketCount: 1,
        maxCachedSessions: 100,
        _sessionCache: { map: {}, list: [] },
        [Symbol(kCapture)]: false
      },
      socketPath: undefined,
      method: 'POST',
      maxHeaderSize: undefined,
      insecureHTTPParser: undefined,
      path: '/v1/batch',
      _ended: false,
      res: null,
      aborted: false,
      timeoutCb: null,
      upgradeOrConnect: false,
      parser: null,
      maxHeadersCount: null,
      reusedSocket: false,
      host: 'api.segment.io',
      protocol: 'https:',
      _redirectable: [Circular *3],
      [Symbol(kCapture)]: false,
      [Symbol(kNeedDrain)]: false,
      [Symbol(corked)]: 0,
      [Symbol(kOutHeaders)]: [Object: null prototype] {
        accept: [ 'Accept', 'application/json, text/plain, */*' ],
        'content-type': [ 'Content-Type', 'application/json' ],
        'user-agent': [ 'user-agent', 'analytics-node/6.2.0' ],
        'content-length': [ 'Content-Length', 528 ],
        host: [ 'Host', 'api.segment.io' ],
        authorization: [
          'Authorization',
          'Basic cmVjcUN5clZlREJBRWRtenZma0hQY2NsWUxjNjNUTTY6'
        ]
      }
    },
    _currentUrl: 'https://recqCyrVeDBAEdmzvfkHPcclYLc63TM6:@api.segment.io/v1/batch',
    [Symbol(kCapture)]: false
  }
}
scarint@gameserver:~$

Relevant information

Running on bare metal: Dell PowerEdge R620, dual Xeon E5-2637, 64 GB RAM, Debian 11.

I am brand new to Kubernetes and am trying everything I can think of to get Che running. I've hit this same error on Fedora in a VM, Ubuntu in a VM, and Windows 10 (using Docker for the Linux flavors and Hyper-V on Windows).

scarint@gameserver:~$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

scarint@gameserver:~$ minikube addons list
|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | 3rd party (Ambassador)         |
| auto-pause                  | minikube | disabled     | Google                         |
| cloud-spanner               | minikube | disabled     | Google                         |
| csi-hostpath-driver         | minikube | disabled     | Kubernetes                     |
| dashboard                   | minikube | enabled ✅   | Kubernetes                     |
| default-storageclass        | minikube | enabled ✅   | Kubernetes                     |
| efk                         | minikube | disabled     | 3rd party (Elastic)            |
| freshpod                    | minikube | disabled     | Google                         |
| gcp-auth                    | minikube | disabled     | Google                         |
| gvisor                      | minikube | disabled     | Google                         |
| headlamp                    | minikube | disabled     | 3rd party (kinvolk.io)         |
| helm-tiller                 | minikube | disabled     | 3rd party (Helm)               |
| inaccel                     | minikube | disabled     | 3rd party (InAccel             |
|                             |          |              | [[email protected]])            |
| ingress                     | minikube | enabled ✅   | Kubernetes                     |
| ingress-dns                 | minikube | disabled     | Google                         |
| istio                       | minikube | disabled     | 3rd party (Istio)              |
| istio-provisioner           | minikube | disabled     | 3rd party (Istio)              |
| kong                        | minikube | disabled     | 3rd party (Kong HQ)            |
| kubevirt                    | minikube | disabled     | 3rd party (KubeVirt)           |
| logviewer                   | minikube | disabled     | 3rd party (unknown)            |
| metallb                     | minikube | disabled     | 3rd party (MetalLB)            |
| metrics-server              | minikube | enabled ✅   | Kubernetes                     |
| nvidia-driver-installer     | minikube | disabled     | Google                         |
| nvidia-gpu-device-plugin    | minikube | disabled     | 3rd party (Nvidia)             |
| olm                         | minikube | disabled     | 3rd party (Operator Framework) |
| pod-security-policy         | minikube | disabled     | 3rd party (unknown)            |
| portainer                   | minikube | disabled     | 3rd party (Portainer.io)       |
| registry                    | minikube | disabled     | Google                         |
| registry-aliases            | minikube | disabled     | 3rd party (unknown)            |
| registry-creds              | minikube | disabled     | 3rd party (UPMC Enterprises)   |
| storage-provisioner         | minikube | enabled ✅   | Google                         |
| storage-provisioner-gluster | minikube | disabled     | 3rd party (Gluster)            |
| volumesnapshots             | minikube | disabled     | Kubernetes                     |
|-----------------------------|----------|--------------|--------------------------------|

Windows 10 yields a slightly different failure, but the rest seems the same.

PS C:\WINDOWS\system32> chectl server:deploy -p minikube
› Current Kubernetes context: 'minikube'
  √ Verify Kubernetes API...[1.25]
  > Minikube preflight checklist
    √ Verify if kubectl is installed...[OK]
    √ Verify if minikube is installed...[OK]
    | Verify if minikube is running
      Enable minikube ingress addon
      Retrieving minikube IP and domain for ingress URLs
      Checking minikube version

node:internal/process/promises:246
          triggerUncaughtException(err, true /* fromPromise */);
          ^
AxiosError: getaddrinfo ENOENT api.segment.io
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26) {
  hostname: 'api.segment.io',
  syscall: 'getaddrinfo',
  code: 'ENOENT',
  errno: -4058,
  config: {
    transitional: {
      silentJSONParsing: true,
      forcedJSONParsing: true,
      clarifyTimeoutError: false
    },
scarint added the kind/question label Dec 4, 2022
che-bot added the status/need-triage label Dec 4, 2022
tolusha (Contributor) commented Dec 5, 2022

I can suggest the following (see the sketch after this list):

  1. Try deploying cert-manager manually by following the guide: https://cert-manager.io/docs/installation/
  2. Deploy Eclipse Che with the --skip-cert-manager flag
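
A minimal sketch of that workaround, combining the cert-manager v1.10.1 manifest that is applied later in this thread with the --skip-cert-manager flag; the version pin and the wait step are assumptions, not part of the original suggestion:

```bash
# Install cert-manager manually (same manifest the reporter applies later in this thread).
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml

# Wait until the cert-manager deployments report Available before deploying Che (assumed extra step).
kubectl -n cert-manager wait deployment --all --for=condition=Available --timeout=300s

# Deploy Eclipse Che, telling chectl not to install cert-manager itself.
chectl server:deploy --platform minikube --skip-cert-manager
```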

scarint (Author) commented Dec 5, 2022

scarint@gameserver:~$ chectl server:deploy --platform minikube --skip-cert-manager
› Current Kubernetes context: 'minikube'
  ✔ Verify Kubernetes API...[OK]
  ✔ Looking for an already existing Eclipse Che instance
    ✔ Verify if Eclipse Che is deployed into namespace "eclipse-che"...[Not Found]
  ↓ Check if OIDC Provider installed [skipped]
    → Dex will be automatically installed as OIDC Identity Provider
  ✔ ✈️  Minikube preflight checklist
    ✔ Verify if kubectl is installed
    ✔ Verify if minikube is installed
    ✔ Verify if minikube is running
    ↓ Start minikube [skipped]
      → Minikube is already running.
    ✔ Check Kubernetes version: [1.25]
    ✔ Verify if minikube ingress addon is enabled
    ↓ Enable minikube ingress addon [skipped]
      → Ingress addon is already enabled.
    ✔ Retrieving minikube IP and domain for ingress URLs...[192.168.49.2.nip.io]
    ✔ Checking minikube version...[1.28.0]
  ✔ Following Eclipse Che logs
    ✔ Start following logs...[OK]
  ↓ Cert Manager v1.8.2 [skipped]
  ❯ Install Dev Workspace operator
    ✔ Create Namespace devworkspace-controller...[OK]
    ⠙ Create Dev Workspace operator resources
      Wait for Dev Workspace operator
    Create Namespace eclipse-che
    Deploy Dex
    Deploy Eclipse Che

node:internal/process/promises:246
          triggerUncaughtException(err, true /* fromPromise */);
          ^
AxiosError: connect ECONNREFUSED 0.0.0.0:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16) {
  port: 443,
  address: '0.0.0.0',
  syscall: 'connect',
  code: 'ECONNREFUSED',
  errno: -111,
scarint@gameserver:~$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml
namespace/cert-manager unchanged
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
scarint@gameserver:~$ chectl server:deploy --platform minikube
› Current Kubernetes context: 'minikube'
  ✔ Verify Kubernetes API...[OK]
  ✔ Looking for an already existing Eclipse Che instance
    ✔ Verify if Eclipse Che is deployed into namespace "eclipse-che"...[Not Found]
  ↓ Check if OIDC Provider installed [skipped]
    → Dex will be automatically installed as OIDC Identity Provider
  ❯ ✈️  Minikube preflight checklist
    ✔ Verify if kubectl is installed
    ✔ Verify if minikube is installed
    ✔ Verify if minikube is running
    ↓ Start minikube [skipped]
      → Minikube is already running.
    ✔ Check Kubernetes version: [1.25]
    ✔ Verify if minikube ingress addon is enabled
    ↓ Enable minikube ingress addon [skipped]
      → Ingress addon is already enabled.
    ⠙ Retrieving minikube IP and domain for ingress URLs
      Checking minikube version

node:internal/process/promises:246
          triggerUncaughtException(err, true /* fromPromise */);
          ^
AxiosError: connect ECONNREFUSED 0.0.0.0:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16) {
  port: 443,
  address: '0.0.0.0',
  syscall: 'connect',
  code: 'ECONNREFUSED',
  errno: -111,

And again with --skip-cert-manager:

scarint@gameserver:~$ chectl server:deploy --platform minikube --skip-cert-manager
› Current Kubernetes context: 'minikube'
  ✔ Verify Kubernetes API...[OK]
  ✔ Looking for an already existing Eclipse Che instance
    ✔ Verify if Eclipse Che is deployed into namespace "eclipse-che"...[Not Found]
  ↓ Check if OIDC Provider installed [skipped]
    → Dex will be automatically installed as OIDC Identity Provider
  ✔ ✈️  Minikube preflight checklist
    ✔ Verify if kubectl is installed
    ✔ Verify if minikube is installed
    ✔ Verify if minikube is running
    ↓ Start minikube [skipped]
      → Minikube is already running.
    ✔ Check Kubernetes version: [1.25]
    ✔ Verify if minikube ingress addon is enabled
    ↓ Enable minikube ingress addon [skipped]
      → Ingress addon is already enabled.
    ✔ Retrieving minikube IP and domain for ingress URLs...[192.168.49.2.nip.io]
    ✔ Checking minikube version...[1.28.0]
  ✔ Following Eclipse Che logs
    ✔ Start following logs...[OK]
  ↓ Cert Manager v1.8.2 [skipped]
  ❯ Install Dev Workspace operator
    ✔ Create Namespace devworkspace-controller...[Exists]
    ⠸ Create Dev Workspace operator resources
      Wait for Dev Workspace operator
    Create Namespace eclipse-che
    Deploy Dex
    Deploy Eclipse Che

node:internal/process/promises:246
          triggerUncaughtException(err, true /* fromPromise */);
          ^
AxiosError: connect ECONNREFUSED 0.0.0.0:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16) {
  port: 443,
  address: '0.0.0.0',
  syscall: 'connect',
  code: 'ECONNREFUSED',
  errno: -111,

tolusha (Contributor) commented Dec 5, 2022

I see; it fails again on the step where it tries to reach the github.com server. Is github.com reachable?
In any case, it is possible to work around that step as well (see the sketch after this list):

  1. kubectl apply -f https://raw.githubusercontent.com/devfile/devworkspace-operator/main/deploy/deployment/kubernetes/combined.yaml
  2. add --skip-devworkspace-operator to the chectl command
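
A minimal sketch of that workaround, using only the manifest URL and the flag named above (operator first, then chectl):

```bash
# Install the Dev Workspace operator manually from its published Kubernetes manifest.
kubectl apply -f https://raw.githubusercontent.com/devfile/devworkspace-operator/main/deploy/deployment/kubernetes/combined.yaml

# Deploy Eclipse Che, skipping chectl's own Dev Workspace operator installation.
chectl server:deploy --platform minikube --skip-devworkspace-operator
```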

scarint (Author) commented Dec 5, 2022

It gets a little farther, it seems. Now it's failing at creating the Dex certificate.

scarint@gameserver:~$ ping github.com
PING github.com (140.82.113.4) 56(84) bytes of data.
64 bytes from lb-140-82-113-4-iad.github.com (140.82.113.4): icmp_seq=1 ttl=46 time=96.0 ms
64 bytes from lb-140-82-113-4-iad.github.com (140.82.113.4): icmp_seq=2 ttl=46 time=72.8 ms
64 bytes from lb-140-82-113-4-iad.github.com (140.82.113.4): icmp_seq=3 ttl=46 time=77.9 ms
64 bytes from lb-140-82-113-4-iad.github.com (140.82.113.4): icmp_seq=4 ttl=46 time=80.5 ms
^C
--- github.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 72.794/81.771/95.955/8.641 ms
scarint@gameserver:~$ chectl server:deploy -p minikube --skip-devworkspace-operator
› Current Kubernetes context: 'minikube'
  ✔ Verify Kubernetes API...[OK]
  ✔ Looking for an already existing Eclipse Che instance
    ✔ Verify if Eclipse Che is deployed into namespace "eclipse-che"...[Not Found]
  ↓ Check if OIDC Provider installed [skipped]
    → Dex will be automatically installed as OIDC Identity Provider
  ✔ ✈️  Minikube preflight checklist
    ✔ Verify if kubectl is installed
    ✔ Verify if minikube is installed
    ✔ Verify if minikube is running
    ↓ Start minikube [skipped]
      → Minikube is already running.
    ✔ Check Kubernetes version: [1.25]
    ✔ Verify if minikube ingress addon is enabled
    ↓ Enable minikube ingress addon [skipped]
      → Ingress addon is already enabled.
    ✔ Retrieving minikube IP and domain for ingress URLs...[192.168.49.2.nip.io]
    ✔ Checking minikube version...[1.28.0]
  ✔ Following Eclipse Che logs
    ✔ Start following logs...[OK]
  ✔ Cert Manager v1.8.2
    ✔ Install Cert Manager...[Exists]
    ✔ Wait for Cert Manager...[OK]
  ↓ Install Dev Workspace operator [skipped]
  ✔ Create Namespace eclipse-che...[OK]
  ❯ Deploy Dex
    ⠸ Create namespace: dex
      Create issuer dex-selfsigned
      Create certificate: dex-selfsigned
      Create issuer dex
      Create certificate: dex
      Save Dex certificate
      Add Dex certificate to Eclipse Che certificates bundle
      Create Dex service account
      Create Dex cluster role
      Create Dex cluster role binding
      Create Dex service
      Create Dex ingress
      Generate Dex username and password
      Create Dex configmap
      Create Dex deployment
      Wait for Dex is ready
      Configure API server
    Deploy Eclipse Che

node:internal/process/promises:246
          triggerUncaughtException(err, true /* fromPromise */);
          ^
AxiosError: connect ECONNREFUSED 0.0.0.0:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16) {
  port: 443,
  address: '0.0.0.0',
  syscall: 'connect',
  code: 'ECONNREFUSED',
  errno: -111,

After step 1 above:

scarint@gameserver:~$ chectl server:deploy -p minikube --skip-devworkspace-operator
› Current Kubernetes context: 'minikube'
  ✔ Verify Kubernetes API...[OK]
  ✔ Looking for an already existing Eclipse Che instance
    ✔ Verify if Eclipse Che is deployed into namespace "eclipse-che"...[Not Found]
  ↓ Check if OIDC Provider installed [skipped]
    → Dex will be automatically installed as OIDC Identity Provider
  ✔ ✈️  Minikube preflight checklist
    ✔ Verify if kubectl is installed
    ✔ Verify if minikube is installed
    ✔ Verify if minikube is running
    ↓ Start minikube [skipped]
      → Minikube is already running.
    ✔ Check Kubernetes version: [1.25]
    ✔ Verify if minikube ingress addon is enabled
    ↓ Enable minikube ingress addon [skipped]
      → Ingress addon is already enabled.
    ✔ Retrieving minikube IP and domain for ingress URLs...[192.168.49.2.nip.io]
    ✔ Checking minikube version...[1.28.0]
  ✔ Following Eclipse Che logs
    ✔ Start following logs...[OK]
  ✔ Cert Manager v1.8.2
    ✔ Install Cert Manager...[Exists]
    ✔ Wait for Cert Manager...[OK]
  ↓ Install Dev Workspace operator [skipped]
  ✔ Create Namespace eclipse-che...[Exists]
  ❯ Deploy Dex
    ✔ Create namespace: dex...[Exists]
    ✔ Create issuer dex-selfsigned...[Exists]
    ⠹ Create certificate: dex-selfsigned
      Create issuer dex
      Create certificate: dex
      Save Dex certificate
      Add Dex certificate to Eclipse Che certificates bundle
      Create Dex service account
      Create Dex cluster role
      Create Dex cluster role binding
      Create Dex service
      Create Dex ingress
      Generate Dex username and password
      Create Dex configmap
      Create Dex deployment
      Wait for Dex is ready
      Configure API server
    Deploy Eclipse Che

node:internal/process/promises:246
          triggerUncaughtException(err, true /* fromPromise */);
          ^
AxiosError: connect ECONNREFUSED 0.0.0.0:443
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16) {
  port: 443,
  address: '0.0.0.0',
  syscall: 'connect',
  code: 'ECONNREFUSED',
  errno: -111,

l0rd added the area/chectl, area/install, and status/analyzing labels and removed the status/need-triage label Dec 6, 2022
tolusha removed the area/install label Dec 6, 2022
tolusha (Contributor) commented Dec 7, 2022

To be honest, I have no idea what is going on.
chectl simply applies resources to the k8s cluster, and it fails for some reason.
Could you try creating a namespace via kubectl, for instance? (kubectl create namespace test)
Also, please check all pods; maybe the API server is being restarted continuously: kubectl get pods -A

scarint (Author) commented Dec 7, 2022

The "error" I'm getting certainly doesn't make any sense to me either; it appears to just vomit out a config file of sorts...

Running your suggestions. Doesn't look like anything is restarting unexpectedly.

scarint@gameserver:~$ kubectl create namespace test
namespace/test created
scarint@gameserver:~$ kubectl get pods -A
NAMESPACE                 NAME                                               READY   STATUS      RESTARTS        AGE
cert-manager              cert-manager-74d949c895-2r849                      1/1     Running     0               2d3h
cert-manager              cert-manager-cainjector-d9bc5979d-mh8rw            1/1     Running     0               2d3h
cert-manager              cert-manager-webhook-84b7ddd796-wtqvc              1/1     Running     0               2d3h
devworkspace-controller   devworkspace-controller-manager-55fc7f998f-fhhr4   2/2     Running     0               2d
devworkspace-controller   devworkspace-webhook-server-7fb4f755db-52wsc       2/2     Running     0               2d
ingress-nginx             ingress-nginx-admission-create-dxbt6               0/1     Completed   0               2d17h
ingress-nginx             ingress-nginx-admission-patch-kk8b4                0/1     Completed   1               2d17h
ingress-nginx             ingress-nginx-controller-5959f988fd-87d7d          1/1     Running     1 (2d11h ago)   2d17h
kube-system               coredns-565d847f94-pd6jv                           1/1     Running     1 (2d11h ago)   2d17h
kube-system               etcd-minikube                                      1/1     Running     1 (2d11h ago)   2d17h
kube-system               kube-apiserver-minikube                            1/1     Running     1 (2d11h ago)   2d17h
kube-system               kube-controller-manager-minikube                   1/1     Running     1 (2d11h ago)   2d17h
kube-system               kube-proxy-7t6gp                                   1/1     Running     1 (2d11h ago)   2d17h
kube-system               kube-scheduler-minikube                            1/1     Running     1 (2d11h ago)   2d17h
kube-system               metrics-server-769cd898cd-qsh9d                    1/1     Running     2 (2d11h ago)   2d17h
kube-system               storage-provisioner                                1/1     Running     3 (2d11h ago)   2d17h
kubernetes-dashboard      dashboard-metrics-scraper-b74747df5-kmwl5          1/1     Running     0               2d11h
kubernetes-dashboard      kubernetes-dashboard-57bbdc5f89-54fxr              1/1     Running     0               2d11h
scarint@gameserver:~$ kubectl create namespace test
namespace/test created

Then I ran chectl server:deploy -p minikube again. Same result. Checked pods again:

scarint@gameserver:~$ kubectl get pods -A
NAMESPACE                 NAME                                               READY   STATUS      RESTARTS        AGE
cert-manager              cert-manager-74d949c895-2r849                      1/1     Running     0               2d3h
cert-manager              cert-manager-cainjector-d9bc5979d-mh8rw            1/1     Running     0               2d3h
cert-manager              cert-manager-webhook-84b7ddd796-wtqvc              1/1     Running     0               2d3h
devworkspace-controller   devworkspace-controller-manager-55fc7f998f-fhhr4   2/2     Running     0               2d
devworkspace-controller   devworkspace-webhook-server-7fb4f755db-52wsc       2/2     Running     0               2d
ingress-nginx             ingress-nginx-admission-create-dxbt6               0/1     Completed   0               2d17h
ingress-nginx             ingress-nginx-admission-patch-kk8b4                0/1     Completed   1               2d17h
ingress-nginx             ingress-nginx-controller-5959f988fd-87d7d          1/1     Running     1 (2d11h ago)   2d17h
kube-system               coredns-565d847f94-pd6jv                           1/1     Running     1 (2d11h ago)   2d17h
kube-system               etcd-minikube                                      1/1     Running     1 (2d11h ago)   2d17h
kube-system               kube-apiserver-minikube                            1/1     Running     1 (2d11h ago)   2d17h
kube-system               kube-controller-manager-minikube                   1/1     Running     1 (2d11h ago)   2d17h
kube-system               kube-proxy-7t6gp                                   1/1     Running     1 (2d11h ago)   2d17h
kube-system               kube-scheduler-minikube                            1/1     Running     1 (2d11h ago)   2d17h
kube-system               metrics-server-769cd898cd-qsh9d                    1/1     Running     2 (2d11h ago)   2d17h
kube-system               storage-provisioner                                1/1     Running     3 (2d11h ago)   2d17h
kubernetes-dashboard      dashboard-metrics-scraper-b74747df5-kmwl5          1/1     Running     0               2d11h
kubernetes-dashboard      kubernetes-dashboard-57bbdc5f89-54fxr              1/1     Running     0               2d11h

Further details, in case anything else matters. I'm focusing on my Linux box, but I can do whatever is needed on Windows too; whatever works best for getting the information to troubleshoot.

This was all done to install Che. This server has only been running AMP, and only what was required for the game servers. No other services, so it is mostly sitting idle. No local firewall. Full internet connectivity, with a DNS server hosted on the LAN. The TCP errors make me want to think it's network-related... but I realize that could just be what shows up, not the actual error.
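
A possible check that fits the errors above: the request that fails is chectl's Segment telemetry call (analytics-node posting to https://api.segment.io/v1/batch), and ECONNREFUSED 0.0.0.0:443 is the pattern you get when a DNS server or ad-blocker resolves a hostname to 0.0.0.0. That is only a guess, not something confirmed in this thread; a quick diagnostic sketch, assuming standard Linux tools:

```bash
# How does the host running chectl resolve the telemetry endpoint?
# An answer of 0.0.0.0 would explain "connect ECONNREFUSED 0.0.0.0:443".
getent hosts api.segment.io

# Probe the endpoint directly; any HTTP status back (even 4xx) means it is reachable.
curl -sS -o /dev/null -w '%{http_code}\n' https://api.segment.io/v1/batch
```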

Docker was installed via apt: `apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin`, following the instructions here: https://docs.docker.com/engine/install/debian/

Exact minikube start command: `minikube start --addons=ingress --vm=true --memory=16384 --cpus=4 --driver=docker`

scarint@gameserver:~$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

scarint@gameserver:~$ minikube update-check
CurrentVersion: v1.28.0
LatestVersion: v1.28.0
chectl: Updating CLI... already on latest version: 7.57.0

Just for kicks, I ran `chectl server:deploy -p minikube --debug` and got the same result, which I expected.

I had an error before that I tracked down to #4172, so I have also tried Kubernetes 1.23.12 and 1.23.14 in a VM with the same results, though I don't recall which OS in those cases.

A couple of times, apparently at random, Che WOULD deploy; I had first attributed this error to a pure connection issue and thought that if I waited for all the minikube/Kubernetes services to come up and stabilize, it would be fine. That doesn't appear to be the case, as you can see from the outputs above.

I checked apt, and there was an update to containerd.io available (1.6.11-1; I had 1.6.10-1). I applied that and restarted minikube (same command as above). No change.

che-bot (Contributor) commented Jun 5, 2023

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

che-bot added the lifecycle/stale label Jun 5, 2023
che-bot closed this as completed Jun 12, 2023