Nailing Correlation Analysis #6474
Comments
Remove events in an actions funnel (the events that make up the actions used in the funnel).
Hey @clarkus - some technical updates that should unblock design:
Regarding timings for a PostHog-sized org: it takes about 1.5 seconds to run correlation analysis on all properties for a given event. A more ambitious analysis would run property correlations on all events and all properties; that takes ~5 seconds, so for now I'm dropping this functionality. We can revisit if there's a good enough use case for it.

About connecting to other insights: I was thinking of showing person modals on the success and drop-off counts, and linking to Paths & session recordings via the modal. Does this make sense? (Very open to ideas here: whether we should represent it differently, make it 1-click vs 2-click, etc.) The problem we want to solve is allowing users to dive deeper: given a correlation analysis result, I want to see what else these specific people are doing, i.e. their paths & session recordings.
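For a concrete sense of the computation being timed here, below is a minimal sketch of an odds-ratio style correlation over funnel cohorts. Everything in it (the `property_correlations` helper, the +1 smoothing, the ranking) is illustrative rather than the actual implementation; the real analysis would run as a database query, not in Python.

```python
def property_correlations(
    success_counts: dict,   # converted users per "property::value" pair
    failure_counts: dict,   # dropped-off users per "property::value" pair
    total_success: int,
    total_failure: int,
) -> list:
    """Rank property values by how strongly they correlate with conversion."""
    results = []
    for pair in set(success_counts) | set(failure_counts):
        s = success_counts.get(pair, 0)
        f = failure_counts.get(pair, 0)
        # +1 smoothing avoids division by zero for rare property values.
        odds_success = (s + 1) / (total_success - s + 1)
        odds_failure = (f + 1) / (total_failure - f + 1)
        results.append((pair, odds_success / odds_failure))
    # Strongest signals first, whether they point at success (>1) or drop-off (<1).
    return sorted(results, key=lambda r: max(r[1], 1 / r[1]), reverse=True)
```

At this scale, scoring every property of one event is cheap; the ~5 second figure comes from crossing every event with every property.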
Taking a short step back with some thoughts (happy to discuss sync too, might be easier).

Details/rationale for prioritization 👇
This aligns with our other uses of the person modal so far. I think that's a good start. We offer this in the core funnels visualization, so we might consider how we clarify the different origins of person lists.
I was just trying this with our data - the current functionality seems to be there. I think you could expand the drill-down section once the analysis finishes running - it's just a stronger signal that things are done and ready for a look. Regarding the scale of drill-down properties, maybe we just show the top N items and then allow another action to "show more". I think a lot of the scale issues might be resolved by creating some reasonable defaults for filtering out known noisy events / properties. Secondary to that, users could flag events that are most critical to their use. That would be a strong signal on what matters from a user perspective. I'm happy to jump in on design iterations once you think we're ready for that.
Sounds good @clarkus! I think the exclusions thing will be pretty helpful right now, and we can then explore something like "critical" properties once we have some more feedback/data here. The other thing (pending @neilkakkar's comments here) I'd love to explore design-wise is how we can better display the results.

The benchmark doc will provide a lot more context here.
Had a sync chat about this:
Growing list of default Property Exclusions:
All the above are irrelevant in the face of the rule for properties in `$autocapture`: only `elements_chain` seems useful so far. Same for `$pageview`? Nah, let `$pageview` be the default place for all these random autocaptured properties to show up.
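As a sketch of what those defaults could look like in code: the list and the `filter_noisy_properties` helper below are hypothetical illustrations, and the exact set of excluded properties would evolve with feedback.

```python
# Hypothetical defaults: commonly autocaptured properties that mostly add
# noise to correlation results rather than signal.
DEFAULT_EXCLUDED_PROPERTIES = {
    "$lib",
    "$lib_version",
    "$device_id",
    "$session_id",
    "$window_id",
    "$time",
}

def filter_noisy_properties(properties: dict) -> dict:
    """Drop known-noisy autocaptured properties before computing correlations."""
    return {
        key: value
        for key, value in properties.items()
        if key not in DEFAULT_EXCLUDED_PROPERTIES
    }
```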
This issue hasn't seen activity in two years! If you want to keep it open, post a comment or remove the stale label.

This issue was closed due to lack of activity. Feel free to reopen if it's still relevant.
As noted in #6360, our ambitious goal this coming sprint is to build the best possible quant analysis tool.
In prep for this, we did some user testing on the MVP, and an industry analysis + feature brainstorm. (thanks Paolo!)
I interpret "best" as a tool that does its job without getting in the way. And the job of correlation analysis is to help diagnose causes.
There are two things we need to accomplish to be successful here: surface meaningful & easy to understand results, and allow further actions on those results.
For example, I might see that "Video Played" is a signal for success, but I can't understand why: maybe specific videos convert especially well, or it's something else about them. So, once I see "Video Played", I expect to be able to answer the following question: what is it about Video Played that makes it a good signal?
Implicit in the two things above is not cluttering the UI with thousands of different options that might be useful: if it isn't clear how to answer your question, the tool isn't useful.
Things we ought to do (rough prioritisation for now; an evolving list). Ideally we'd do all of these, but given the two-week constraint we want to choose the maximum-impact things first. I think both lists run in parallel: we should pick up the first thing from either, continue, and see where we get by the end of the sprint.
Surface meaningful & easy to understand results
We're already in a place where results are useful to some people. Making these better involves allowing users to discard 'obvious' events, surfacing things users are familiar with (actions), and making the numbers (if any) easy to understand and free of discrepancies.
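On "making the numbers easy to understand": one hypothetical way to turn a raw odds ratio into plain English. The `describe_odds_ratio` helper and its wording are illustrative, not actual product copy.

```python
def describe_odds_ratio(event: str, odds_ratio: float) -> str:
    """Phrase an odds ratio as a sentence a user can act on."""
    if odds_ratio >= 1:
        return f"Users who did '{event}' were {odds_ratio:.1f}x more likely to convert."
    return f"Users who did '{event}' were {1 / odds_ratio:.1f}x more likely to drop off."

# describe_odds_ratio("Video Played", 4.5)
# -> "Users who did 'Video Played' were 4.5x more likely to convert."
```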
Allow further actions on surfaced results
Where we're lacking right now is what to do with the results. It's hard to go from "oh, this is a signal" to "what should I do about it?". Drilling down on signals via paths & breakdowns, and then getting qualitative feedback on this small set (like session recordings), helps figure out the problems.
For example, if a property breakdown like `browser_version=87` is a signal of failures, I'd love to watch some session recordings to see what went wrong.
- Discard 'obvious' events like `$autocapture` and `$pageview` by default
- Add this event signal to the funnel

I think executing the above things gets us to a place where we are integrated pretty well with other tools. It complements, and shortcuts, what users would be doing anyway to diagnose causes, thus achieving its goal of helping diagnose causes.
To judge success: Get 3 users to LOVE correlation analysis
cc: @clarkus @paolodamico @marcushyett-ph @hazzadous @liyiy @EDsCODE @macobo for any ideas on what we should / should not be doing.