
Send only data that is needed on the dashboard #173

Closed · wants to merge 6 commits

Conversation

@toy (Contributor) commented Sep 2, 2020

Get the list of data ids after all widgets are created and send it as a parameter to the event stream endpoint, so that it sends only the requested data.
This is especially useful when one app has multiple dashboards which consume different data.
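A minimal sketch of the server-side filtering this PR proposes (plain Ruby; `history` and `filter_events` are illustrative names, not the actual Smashing source): the client gathers the data-ids of all widgets and requests `/events?ids=a,b,c`, and the event stream then sends only matching events.

```ruby
# Hypothetical sketch of the filtering idea in this PR; names are
# illustrative, not Smashing internals.
def filter_events(history, ids_param)
  return history.values if ids_param.nil? # no ids param: send everything
  wanted = ids_param.split(',')
  history.select { |id, _| wanted.include?(id) }.values
end

history = {
  'synergy'   => %(data: {"id":"synergy","value":3}\n\n),
  'valuation' => %(data: {"id":"valuation","value":"a"}\n\n),
}

filter_events(history, 'synergy').size # => 1
filter_events(history, nil).size       # => 2
```

With no `ids` parameter the behaviour is unchanged; with one, widgets on other dashboards no longer cost bandwidth.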

@kinow (Member) commented Feb 22, 2021

Hi @toy, sorry for the delay in looking into this one. I am planning a dev cycle for smashing and intend to include this PR in the next release. Just need to test/review it.

@kinow kinow self-assigned this Feb 22, 2021
@toy (Contributor, Author) commented Feb 22, 2021

Merged, so there are no conflicts in the way.

@kinow (Member) commented Feb 23, 2021

> Merged, so there are no conflicts in the way.

Thanks! Although the GH UI is still showing "This branch cannot be rebased due to conflicts"?

@toy (Contributor, Author) commented Feb 24, 2021

Resolving conflicts in a merge commit doesn't necessarily help git resolve the same conflicts when rebasing.

@kinow (Member) left a comment

Current tests appear to point to a nil list of ids.

[screenshot of the failing test]

@toy (Contributor, Author) commented Feb 24, 2021

Found out that Sinatra's special handling of Hash setting values was the culprit:
https://github.com/sinatra/sinatra/blob/d39b135c67b134bd3c8fea636f244d77422e18bc/lib/sinatra/base.rb#L1283
I'm not sure what the right way to handle dashboard events would be: to allow all of them to pass through (as I did), or to add the id of the dashboard to the ids to consume.
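For context, Sinatra's `set` expands a Hash argument into one setting per key/value pair rather than storing the Hash as a single value. A simplified plain-Ruby sketch of that behaviour (illustrative only, not Sinatra's actual implementation):

```ruby
# Simplified sketch of Sinatra's Hash handling in `set`: a Hash
# argument is expanded into one setting per key/value pair instead of
# being stored as a single Hash-valued setting.
def set(store, option, value = nil)
  if option.respond_to?(:each_pair)
    option.each_pair { |key, val| set(store, key, val) }
  else
    store[option] = value
  end
  store
end

store = {}
set(store, { auth_token: 'abc', default_dashboard: 'main' })
# store now contains two separate settings, not one Hash-valued setting
```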

@kinow (Member) left a comment

Code looks good! I'm now testing it by installing the gem locally and trying it with existing dashboards.

str << body
end
end

Member left a comment:

Just confirming: I think this isn't strictly needed for the change, but since it's not used we're good to remove it, right?

Contributor Author left a comment:

Yes, it is not used. Note that the way I "inlined" it changes the behaviour: before, all events were sent as one blob; with the change, events are sent one by one.

Member left a comment:

Gotcha. Had to read the code again to see which part you were talking about.

If I understand it right, before we started the event stream and pushed all changes at once. Now we iterate over the events, filtering the ones we want and pushing them one by one.

I can't think of a better way right now, and at least when testing locally I didn't notice any performance issues.
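The behavioural difference being discussed can be sketched like this (illustrative names only, not the actual Smashing source): previously all history entries were concatenated into one blob and written once; now each matching event is written to the stream individually.

```ruby
# "Before": concatenate all stored event bodies into one blob and
# write it to the stream once.
def push_as_blob(out, history)
  blob = history.values.inject('') { |str, body| str + body }
  out << blob
end

# "After": filter by id and push matching events one by one.
def push_one_by_one(out, history, wanted_ids)
  history.each do |id, body|
    out << body if wanted_ids.include?(id)
  end
end

before = []
push_as_blob(before, { 'a' => 'A', 'b' => 'B' })          # before == ['AB']

after = []
push_one_by_one(after, { 'a' => 'A', 'b' => 'B' }, ['a']) # after == ['A']
```

Either way the same bytes reach the client for matching events, which is consistent with not seeing a performance difference locally.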

@kinow (Member) left a comment

Tested the endpoint and it appears to be working as expected.

Before:

[Screenshot from 2021-02-25 17-02-47]

Now:

[Screenshot from 2021-02-25 17-00-44]

Note that the dashboard has only 2 widgets (only synergy receives data). Before this PR we would still get all the data in the UI and have to sift through what was needed or not.

Now, after @toy's change, the UI should get only what's necessary to display the widgets. That should be a noticeable improvement, especially when users have tens or hundreds of metrics.

Just trying to figure out if we can improve the tests to verify the new functionality; maybe something like:

  • test that /events?ids=synergy brings events only for that ID
  • test that /events?ids=synergy%2Cbuzzwords includes events for 2 IDs
  • test that /events?ids=unknown doesn't retrieve any events

Or something similar. Just to avoid regressions.

@kinow (Member) left a comment

@toy WDYT, something like this?

diff --git a/test/app_test.rb b/test/app_test.rb
index d5a809d..31947c3 100644
--- a/test/app_test.rb
+++ b/test/app_test.rb
@@ -79,6 +79,38 @@ class AppTest < Dashing::Test
     assert_equal 8, parse_data(@connection[0])['value']
   end
 
+  # rubocop:disable Metrics/AbcSize
+  # rubocop:disable Metrics/MethodLength
+  def test_get_events_by_id
+    with_generated_project do
+      post '/widgets/synergy', JSON.generate({ value: 3 })
+      assert_equal 204, last_response.status
+
+      post '/widgets/valuation', JSON.generate({ value: 'a' })
+      assert_equal 204, last_response.status
+
+      get '/events?ids=synergy%2Cvaluation'
+      assert_equal 200, last_response.status
+
+      events = last_response.body.split(/\n+/) # two events, printed with blank lines separating them
+      synergy_event = parse_data(events[0])
+      valuation_event = parse_data(events[1])
+      assert_equal ['synergy', 3], [synergy_event['id'], synergy_event['value']]
+      assert_equal %w[valuation a], [valuation_event['id'], valuation_event['value']]
+    end
+  end
+
+  def test_get_events_by_unknown_id
+    with_generated_project do
+      post '/widgets/synergy', JSON.generate({ value: 3 })
+      assert_equal 204, last_response.status
+
+      get '/events?ids=unknown'
+      assert_equal 200, last_response.status
+      assert_equal '', last_response.body
+    end
+  end
+
   def test_dashboard_events
     post '/dashboards/my_super_sweet_dashboard', JSON.generate({event: 'reload'})
     assert_equal 204, last_response.status

If that looks OK I will just include a commit with this test, and one more for the changelog. Then we should be ready to release. Or if you have a better suggestion, feel free to commit directly 👍

And big thank you for the PR! 🥇

@toy (Contributor, Author) commented Feb 26, 2021

LGTM

@kinow (Member) commented Feb 26, 2021

> LGTM

Merging in a few minutes with the new commits. I'll leave a message in our Gitter/Matrix chat room so others are aware of this change, and ask them to report in case they have any issues due to this new feature (don't think that will happen 😃).

Thanks heaps @toy !

@kinow kinow closed this in 1739bf0 Feb 26, 2021
@kinow (Member) commented Feb 26, 2021

Rebased, added 2 commits locally, merged the branch with master, and pushed all commits there too 👍 merged! Release will be out in 15 minutes, just going through the changelog & checklist.

@kinow kinow added this to the 1.3.2 milestone Feb 26, 2021
@mikewalkerblackpatch commented:

@kinow @toy
Sorry, not sure if this is the correct place to raise an issue for this.
Since this change, some of my (very large) dashboards are not returning all the data. I think the GET /events?ids request is too big? I hadn't seen this issue in v1.3.0, so I came looking here to see what changed.

I can see the request output when running locally (included below); it's less than half of all the ids I need for that dashboard.
Can you help?

127.0.0.1 - - [02/Mar/2021:13:11:22 +0000] "GET /events?ids=applivery%2Capplivery.sit%2Capplivery.uat%2Capplivery.preprod%2Capplivery.live%2Cclient-api%2Cclient-api.sit%2Cclient-api.uat%2Cclient-api.preprod%2Cclient-api.live%2Ccollection-streams-processor%2Ccollection-streams-processor.sit%2Ccollection-streams-processor.uat%2Ccollection-streams-processor.preprod%2Ccollection-streams-processor.live%2Ce2e-parcel-events-connector%2Ce2e-parcel-events-connector.sit%2Ce2e-parcel-events-connector.uat%2Ce2e-parcel-events-connector.preprod%2Ce2e-parcel-events-connector.live%2Ce2e-tour-reporting-sink-connector%2Ce2e-tour-reporting-sink-connector.sit%2Ce2e-tour-reporting-sink-connector.uat%2Ce2e-tour-reporting-sink-connector.preprod%2Ce2e-tour-reporting-sink-connector.live%2Cemail-sender%2Cemail-sender.sit%2Cemail-sender.uat%2Cemail-sender.preprod%2Cemail-sender.live%2Chht-auth-service%2Chht-auth-service.sit%2Chht-auth-service.uat%2Chht-auth-service.preprod%2Chht-auth-service.live%2Chht-config-api%2Chht-config-api.sit%2Chht-config-api.uat%2Chht-config-api.preprod%2Chht-config-api.live%2Chht-event-feed%2Chht-event-feed.sit%2Chht-event-feed.uat%2Chht-event-feed.preprod%2Chht-event-feed.live%2Chht-parcel-api%2Chht-parcel-api.sit%2Chht-parcel-api.uat%2Chht-parcel-api.preprod%2Chht-parcel-api.live%2Chht-routing-api%2Chht-routing-api.sit%2Chht-routing-api.uat%2Chht-routing-api.preprod%2Chht-routing-api.live%2Chht-tour-api%2Chht-tour-api.sit%2Chht-tour-api.uat%2Chht-tour-api.preprod%2Chht-tour-api.live%2Ckafka-delayed-topics%2Ckafka-delayed-topics.sit%2Ckafka-delayed-topics.uat%2Ckafka-delayed-topics.preprod%2Ckafka-delayed-topics.live%2Cmars-et-services%2Cmars-et-services.sit%2Cmars-et-services.uat%2Cmars-et-services.preprod%2Cmars-et-services.live%2Cmars-opunits-connector%2Cmars-opunits-connector.sit%2Cmars-opunits-connector.uat%2Cmars-opunits-connector.preprod%2Cmars-opunits-connector.live%2Cmars-parcel-feed%2Cmars-parcel-feed.sit%2Cmars-parcel-feed.uat%2Cmars-parcel-feed.preprod%
2Cmars-parcel-feed.live%2Cmars-parcel-feed-jdbc-connectors%2Cmars-parcel-feed-jdbc-connectors.sit%2Cmars-parcel-feed-jdbc-connectors.uat%2Cmars-parcel-feed-jdbc-connectors.preprod%2Cmars-parcel-feed-jdbc-connectors.live%2Cmars-parcel-target-hist-connector%2Cmars-parcel-target-hist-connector.sit%2Cmars-parcel-target-hist-connector.uat%2Cmars-parcel-target-hist-connector.preprod%2Cmars-parcel-target-hist-connector.live%2Cparcel-date-processor%2Cparcel-date-processor.sit%2Cparcel-date-processor.uat%2Cparcel-date-processor.preprod%2Cparcel-date-processor.live%2Croute-analysis-connector%2Croute-analysis-connector.sit%2Croute-analysis-connector.uat%2Croute-analysis-connector.preprod%2Croute-analysis-connector.live%2Croute-analysis-tour-data-connector%2Croute-analysis-tour-data-connector.sit%2Croute-analysis-tour-data-connector.uat%2Croute-analysis-tour-data-connector.preprod HTTP/1.1" 200 - 90.1307

@kinow (Member) commented Mar 2, 2021

@mikewalkerblackpatch hmm, should have considered that possibility. I think we might have to revert the change, cut a release, and open an issue to investigate making ids=a,b,c an optional feature. Any thoughts @toy?

@toy (Contributor, Author) commented Mar 3, 2021

@mikewalkerblackpatch I think normally it is better to create an issue (there is one 🙂), but up to @kinow.

About the issue: first, I'm impressed; there are 104 ids in the url and I didn't consider so many widgets.
I'm wondering what exactly happens for those dashboards, as they shouldn't receive any data at all: URLs that are too long should cause a 414 status from the server.
The maximum URL length is not defined by spec; from a bit of searching it seems the safe bet is 2000 chars. But thin, used by default by smashing, has a limit of 1024 * 10 for the query string, so maybe a solution is to check and tweak the configs for the servers/proxy servers on the way to smashing.

@kinow It is still a good idea to make it possible to receive all data, probably via some option like Dashing.filterEvents = false which causes the request to not have the ids parameter, which the backend understands as a signal to send all events. For now there is only one known issue, but up to you whether to revert the change.
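A quick back-of-the-envelope check of how long such a query string gets (the ~2000-char "safe bet" and thin's 1024 * 10 query-string limit are the figures from the comment above; the widget names here are made up):

```ruby
require 'cgi'

# Hypothetical ids mimicking the report above: 104 ids of similar
# length. Each comma is percent-encoded to %2C, inflating the query.
ids = (1..104).map { |i| "widget-#{i}.preprod" }
query = "ids=#{CGI.escape(ids.join(','))}"

query.length > 2000       # already past the ~2000-char "safe bet"
query.length > 1024 * 10  # whether thin's default query-string limit is hit
```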

@mikewalkerblackpatch commented:

@toy we actually have 168 data-ids on this particular dashboard. We amended the Jenkins Build widget to contain 4 data points (one for each environment); here is just one as an example.

[screenshots of the widget configuration]

With v1.3.2 the dashboard was still rendering OK, but there was no data for any widgets past the 104.
I'm going to try renaming the ids to be much shorter; maybe that would help if there is a 2000 character limit.
In the meantime, we've managed to convince the westwater/smashing docker container to pull down v1.3.0 rather than the latest when we start it up.

@kinow Now that I have worked out how, I am happy to raise this as an issue if you like.

@kinow (Member) commented Mar 4, 2021

Let's create a new issue. My plan is to work on a fix, as @toy suggested, that allows sending all events data.

Let's discuss it there and figure out whether we use a new setting, whether it's enabled by default, or any other approach. Thanks!

@mikewalkerblackpatch commented:

I have opened issue #181
Many thanks for your help.

@toy toy deleted the send-less-data branch March 8, 2021 13:36