Areas for Improvement When Using Robyn #1098

Open
ghltk opened this issue Oct 30, 2024 · 1 comment
ghltk commented Oct 30, 2024

Hello :)
I have been using Robyn quite effectively, but I have noticed a few areas for improvement that I would like to share.

  1. Issue with budget allocator one-pager results
  • The current values cover paid media only, which leaves a significant gap from actual sales (non-paid + paid). When collaborating with marketing practitioners, all I can say is: "The total response value is not the expected actual sales, because the impact of non-paid media is not included, so the real value may be higher or lower." This explanation is not intuitive and is difficult to justify.
  2. Baseline interpretation issue
  • When defining Sales = Paid Media + Non-Paid Media (= baseline), the result becomes hard to interpret when the baseline is negative. For example, when sales are 1,000,000, Robyn may express this as 2,500,000 (paid media) - 1,500,000 (non-paid media). It would be beneficial to ensure that non-paid media does not become negative.
  3. Lack of forecasting features
  • An intuitive feature such as "What will the expected sales be if this budget allocation strategy is used for 3 months?" is needed. I understand that Recast's MMM already includes this.
  4. Lack of validation functionality
  • To keep trusting the model, it is essential to track the error rate over time for previously used models. A time-series graph that visually compares the total response value from the budget allocator with the actual values after a simulation period would be helpful (see the sketch after this list). Additionally, validating non-paid media and paid media separately could be beneficial. I understand that predicting non-paid media is challenging, but even a rough estimate would make the results easier to communicate.
  5. Urgent need for refresh stability improvements
  • I want to refresh the model weekly to observe changes in channel contributions, but the refresh process is unstable. For instance, if the initial model shows a 50% contribution from campaign A, the refreshed model may drop it to 20% one week later. This inconsistency and instability make it challenging to use the model continuously, and simply selecting the model with the lowest decomp value as the final model does not resolve the issue (a simple drift check is also sketched below).
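To make the validation request in point 4 concrete, here is a minimal sketch (plain pandas, not Robyn functionality) of the weekly tracking I have in mind. The column names and all numbers are made up for illustration:

```python
import pandas as pd

# Dummy weekly values standing in for an export of "allocator-predicted total
# response" vs. realised sales; none of these numbers come from Robyn.
df = pd.DataFrame({
    "week": pd.date_range("2024-07-01", periods=6, freq="W"),
    "predicted_response": [900_000, 950_000, 910_000, 980_000, 1_020_000, 990_000],
    "actual_sales": [1_000_000, 940_000, 970_000, 1_010_000, 995_000, 1_050_000],
})

# Absolute percentage error per week and its running mean (MAPE-to-date),
# i.e. the "error rate over time" I would like to track.
df["ape"] = (df["predicted_response"] - df["actual_sales"]).abs() / df["actual_sales"]
df["mape_to_date"] = df["ape"].expanding().mean()

print(df[["week", "ape", "mape_to_date"]])
# df.plot(x="week", y=["predicted_response", "actual_sales"])  # time-series comparison
```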
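And for point 5, a rough sketch of the kind of stability check I mean: comparing channel contribution shares between the initial and the refreshed model and flagging large jumps. The shares and the 10-point threshold are purely illustrative:

```python
# Channel contribution shares from two model runs; the numbers are illustrative.
initial = {"campaign_a": 0.50, "campaign_b": 0.30, "campaign_c": 0.20}
refreshed = {"campaign_a": 0.20, "campaign_b": 0.45, "campaign_c": 0.35}

THRESHOLD = 0.10  # flag week-over-week shifts larger than 10 percentage points
for channel, before in initial.items():
    after = refreshed[channel]
    shift = after - before
    status = "UNSTABLE" if abs(shift) > THRESHOLD else "ok"
    print(f"{channel}: {before:.0%} -> {after:.0%} ({shift:+.0%}) {status}")
```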

I would like to know whether any of the issues mentioned above are currently being worked on, and what updates are planned. Please let me know if any part is unclear.
Thank you :)

@gufengzhou
Contributor

Very sorry about the delayed reply due to the many recent changes, and thank you for the feedback!

  1. Yes, the budget allocator can only allocate budget, so it only works for paid media; its purpose is maximising media impact. What you are asking for is rather a "forecaster", which would also require simulating the baseline variables, organic media, etc. Robyn does not provide forecasting at the moment.
  2. This is quite philosophical. If I assume your competitor's marketing has a huge negative impact on your sales, then it could well be the case that your marketing contributes 2.5M while the competitor contributes -1.5M. Technically speaking, it comes down to whether coefficients are allowed to be negative or not. The usual practice is to keep coefficients positive for all media, but it doesn't always make sense to force every coefficient to be positive. I recommend digging deeper into model selection, beyond the Pareto-optimum candidates (a generic sketch of this sign constraint follows below).
  3. Yes, you're right :) We have this on the roadmap, but unfortunately can't promise any milestones.
  4. The budget allocator is actually a simulator that strictly follows your historical response curves; again, it is not a forecaster. For validation, we actually recommend running experiments and using those to calibrate the MMM.
  5. Appreciate this feedback! Yes, we're aware of the instability and will look into it next year. Sorry about the slow maintenance here; unfortunately we're short on resources.
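To illustrate point 2 in generic terms: the knob is whether the regression is allowed to fit negative coefficients. This is not Robyn's API, just a minimal scikit-learn sketch on made-up data, comparing an unconstrained fit with one where coefficients are forced to be non-negative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: 52 weeks of spend on 3 channels plus a constant baseline and noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(52, 3))
y = 500 + X @ np.array([2.0, 1.5, 0.5]) + rng.normal(0, 50, size=52)

unconstrained = LinearRegression().fit(X, y)              # no sign constraint
non_negative = LinearRegression(positive=True).fit(X, y)  # coefficients forced to be >= 0

print("unconstrained:", unconstrained.coef_)
print("non-negative: ", non_negative.coef_)
```

In an MMM this kind of constraint is typically chosen per variable (e.g. positive for paid media, unconstrained for context variables), which is exactly the modelling choice discussed above.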
