
multi-plexing single browser websocket to multiple backend websockets #1802

Open
illume opened this issue Mar 13, 2024 · 6 comments · Fixed by #2459
Assignees
Labels
backend Issues related to the backend frontend Issues related to the frontend multi Multi cluster aggregated view

Comments

@illume
Collaborator

illume commented Mar 13, 2024

In #1373 we investigated the per-page limit browsers place on websocket connections, which we quickly hit.

One solution to this problem is a backend service that accepts a single websocket from the browser and opens multiple websocket connections from the backend to the Kubernetes API servers.

We already use the backend server to proxy K8s API requests.

Architecture diagram:

graph LR
    A[Browser] <-->|WebSocket| B[headlamp_server]
    B <-->|WebSocket| C[Kubernetes API Server]
    B <-->|WebSocket| D[Kubernetes API Server]
    B <-->|WebSocket| E[Kubernetes API Server]
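The multiplexing idea in the diagram can be sketched in Go. This is an illustrative sketch, not Headlamp's implementation: `Conn` stands in for gorilla/websocket's `*websocket.Conn`, and the `clusterID:` frame-tagging scheme is made up here to show how one browser socket can carry traffic for many clusters.

```go
package main

import (
	"fmt"
	"sync"
)

// Conn stands in for *websocket.Conn; a real server would wrap actual
// gorilla/websocket connections. This whole type is illustrative.
type Conn interface {
	WriteMessage(data []byte) error
}

// Multiplexer fans frames from many backend (K8s API) websockets onto
// one browser websocket, prefixing each frame with its cluster ID so
// the frontend can demultiplex them.
type Multiplexer struct {
	mu      sync.Mutex // serializes writes to the single browser socket
	browser Conn
}

// Forward tags a backend frame with its cluster ID and writes it to
// the browser socket. The mutex matters because gorilla's Conn does
// not support concurrent writers.
func (m *Multiplexer) Forward(clusterID string, frame []byte) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	tagged := append([]byte(clusterID+":"), frame...)
	return m.browser.WriteMessage(tagged)
}

// memConn is an in-memory Conn used here only to demonstrate the flow.
type memConn struct{ msgs []string }

func (c *memConn) WriteMessage(b []byte) error {
	c.msgs = append(c.msgs, string(b))
	return nil
}

func main() {
	browser := &memConn{}
	mux := &Multiplexer{browser: browser}
	mux.Forward("cluster-a", []byte(`{"kind":"Pod"}`))
	mux.Forward("cluster-b", []byte(`{"kind":"Node"}`))
	fmt.Println(browser.msgs[0]) // cluster-a:{"kind":"Pod"}
	fmt.Println(browser.msgs[1]) // cluster-b:{"kind":"Node"}
}
```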

Some related links on websocket multiplexing and proxying:

@illume illume added backend Issues related to the backend frontend Issues related to the frontend labels Mar 13, 2024
@knrt10
Contributor

knrt10 commented Apr 2, 2024

These are my findings on the issue so far.

  1. Create a Function to Open a New Websocket Connection to the Cluster from the Backend:
    We can use the gorilla/websocket package to handle websocket connections. We can create a function like openWebsocketConnectionToCluster that uses websocket.DefaultDialer.Dial to establish a client websocket connection to the Kubernetes cluster. (Note that Upgrade is for accepting server-side connections; dialing out to the cluster uses the Dialer.)

    import "github.com/gorilla/websocket"
    
    func openWebsocketConnectionToCluster(clusterURL string) (*websocket.Conn, error) {
        // Dial websocket connection to the cluster URL
        conn, _, err := websocket.DefaultDialer.Dial(clusterURL, nil)
        if err != nil {
            return nil, err
        }
        return conn, nil
    }
  2. Create a Map to Store Active Websocket Connections:
    We can use a map to store active websocket connections, where the keys are something unique (e.g. userID + clusterName + URL) and the values are the websocket connections. Since HTTP handlers run concurrently, access to this map must be guarded by a mutex.

    var activeConnections = make(map[string]*websocket.Conn)
    var activeConnectionsMu sync.Mutex // guards concurrent access from handlers
  3. Create a New Websocket Endpoint in the Backend (/websocket):
    We would create a new route handler for the /websocket endpoint. This handler would upgrade incoming HTTP requests to websocket connections.

    var upgrader = websocket.Upgrader{ReadBufferSize: 1024, WriteBufferSize: 1024}
    
    func websocketHandler(w http.ResponseWriter, r *http.Request) {
        // Upgrade the HTTP connection to a websocket
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            // Upgrade has already replied with an HTTP error
            return
        }
        defer conn.Close()
    
        // Handle the websocket connection
        // (code for handling the connection would go here)
    }
    
    func main() {
        http.HandleFunc("/websocket", websocketHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }
  4. Frontend Makes a Request to the Backend Websocket Endpoint:
    From the frontend, we would make an HTTP request to the /websocket endpoint of the backend server.

  5. Upgrade the Request:
    Upon receiving the HTTP request at the /websocket endpoint, the backend server upgrades the connection to a websocket using the Upgrader provided by the gorilla/websocket package.

  6. Frontend Sends Cluster Name and URL:
    As part of the HTTP request payload or query parameters, the frontend includes the cluster name and URL to which it wants to connect.

  7. Backend Checks Map for Existing Websocket Connection to That URL:
    The backend server retrieves the cluster URL from the request and checks the activeConnections map to determine if there is already an established connection to that URL.

  8. If Connection Exists, Use It; Otherwise, Create and Store It in the Map:
    If a websocket connection to the specified URL already exists in the activeConnections map, the backend server reuses that connection. Otherwise, it invokes the openWebsocketConnectionToCluster function to establish a new connection and stores it in the activeConnections map, associating it with the provided cluster URL.

  9. Once the Connection from Backend to Cluster Is Made, Listen Continuously and Send All the Responses to the Frontend:
    After establishing the websocket connection to the Kubernetes cluster from the backend, the backend server enters a loop where it continuously listens for incoming messages or events from the cluster. As messages arrive, the backend server forwards them to the frontend through the established websocket connection.
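Steps 2, 7, and 8 above can be sketched together as a get-or-create registry. All names here are illustrative, and a plain struct stands in for *websocket.Conn; note the mutex, since the map is shared across handler goroutines.

```go
package main

import (
	"fmt"
	"sync"
)

// backendConn stands in for *websocket.Conn from gorilla/websocket.
type backendConn struct{ url string }

// registry caches backend websocket connections by a composite key
// (e.g. userID + clusterName + URL). A mutex guards the map because
// HTTP handlers run concurrently.
type registry struct {
	mu    sync.Mutex
	conns map[string]*backendConn
	dials int // counts real dials, to show reuse below
}

func (r *registry) getOrCreate(key, url string) (*backendConn, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if c, ok := r.conns[key]; ok {
		return c, nil // reuse the existing connection
	}
	// In the real backend this would call
	// websocket.DefaultDialer.Dial(url, nil).
	r.dials++
	c := &backendConn{url: url}
	r.conns[key] = c
	return c, nil
}

func main() {
	r := &registry{conns: map[string]*backendConn{}}
	a, _ := r.getOrCreate("alice|prod|wss://k8s/api", "wss://k8s/api")
	b, _ := r.getOrCreate("alice|prod|wss://k8s/api", "wss://k8s/api")
	fmt.Println(a == b, r.dials) // true 1
}
```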

I think this lets us efficiently manage multiple websocket connections to Kubernetes clusters, ensuring seamless communication between the frontend and the clusters while minimizing resource usage and overhead.

WDYT? cc @illume

@illume
Collaborator Author

illume commented Apr 2, 2024

For 2,

there is going to be one websocket connection, right? In that case it doesn't make sense to me to use the cluster name as a key.

@illume
Collaborator Author

illume commented Apr 2, 2024

  1. Reuse will need to make sure all connection data matches first. What happens if a request without a token reuses a connection opened with one? So there need to be checks that all parameters for the connection are identical before reusing it.
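One way to satisfy this (a sketch, not Headlamp's code; the function and field choices are hypothetical) is to fold every parameter that affects the connection's authority into the cache key, including a hash of the bearer token. Then a request without a token can never match a connection opened with one.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// connKey derives a cache key from all connection parameters,
// including a hash of the token, so connections with different
// credentials never share a key. Names are illustrative.
func connKey(userID, cluster, url, token string) string {
	sum := sha256.Sum256([]byte(token))
	return fmt.Sprintf("%s|%s|%s|%x", userID, cluster, url, sum[:8])
}

func main() {
	withToken := connKey("alice", "prod", "wss://k8s/api", "secret")
	noToken := connKey("alice", "prod", "wss://k8s/api", "")
	fmt.Println(withToken != noToken) // true
}
```

Hashing the token (rather than embedding it) keeps credentials out of log lines that might print the key.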

@illume
Collaborator Author

illume commented Apr 2, 2024

Other than those two points I'm not sure of, it looks good to me.

@knrt10
Contributor

knrt10 commented Apr 2, 2024

For 2,

there is going to be one websocket connection, right? In that case it doesn't make sense to me to use the cluster name as a key.

Yes, there would be only one browser connection. We will use a combination of something unique like userID + clusterName + URL as the map key.

  1. Reuse will need to make sure all connection data matches first. What happens if a request without a token reuses a connection opened with one? So there need to be checks that all parameters for the connection are identical before reusing it.

Yes, good catch. I'll think of something for this too.

knrt10 added a commit that referenced this issue Oct 22, 2024
This adds a websocket multiplexer to the backend. The frontend now makes a
single websocket call to the backend. Once that connection is established,
the frontend sends messages to the backend with the appropriate data. The
backend opens multiple websockets and acts as a proxy for the frontend:
it makes requests to the k8s server and returns the data to the frontend.

This also adds retry logic if the connection is broken between frontend
and backend.

Fixes: #1802

Signed-off-by: Kautilya Tripathi <[email protected]>
knrt10 added a commit that referenced this issue Oct 23, 2024
@illume illume closed this as completed in bf2ec6d Oct 24, 2024
guydomb pushed a commit to Hello-Heart/headlamp that referenced this issue Oct 27, 2024
vyncent-t pushed a commit that referenced this issue Nov 4, 2024
@illume illume reopened this Jan 17, 2025
@illume illume added the multi Multi cluster aggregated view label Jan 17, 2025
@illume
Collaborator Author

illume commented Jan 17, 2025

We started to draft a test plan: for this release, using the multiplexer with single clusters; for the next release, using it with multiple clusters.

  • What testing should we complete before releasing the multiplexer in this release for single-cluster usage?
  • What testing should be completed before releasing the multiplexer for multi-cluster usage?

Testing plan to decide when we release

1) Detailed manual test of Headlamp

For example: create multiple clusters on different cloud providers; enable all Headlamp flags (dynamic clusters, plugins, etc.); also create an ingress and test with it.

We think the normal pre-release manual testing, plus roughly three weeks of having the multiplexer enabled in the main branch for developers to use day to day, will be a detailed enough test.

So far two developers have reported and fixed bugs related to the multiplexer, but no new bugs have been reported in the last week.

2) Investigate websocket errors/warnings in frontend/ tests

@knrt10 is looking into this. #2753

3) Fix annoying log bug.

It should not block the release, and it should be a small fix. Once I fix these log errors (i.e. issue #2753) I will create a PR for it.
(screenshot: web console logs)

4) e2e tests for multi clusters

#2460

There are already tests which cover the real-time use of Headlamp with websockets, but not for multiple clusters.

  • For this release, this should not block using the multiplexer with single clusters.
  • If not complete, it should block the next release's use of the multiplexer with multiple clusters.
