
ChunkLoadError: Loading chunk XX failed (during add-on installation) #13295

Closed
willdurand opened this issue Jul 5, 2019 · 26 comments
Labels
neverstale - Use this to tell stalebot to not touch this issue. Should be used infrequently.
repository:addons-frontend - Issue relating to addons-frontend
state:stale - Issues marked as stale. These can be re-opened should there be plans to fix them.

Comments

@willdurand
Member

There have been a lot of errors like the one below since the last push.

https://sentry.prod.mozaws.net/operations/addons-frontend-amo-prod/issues/6015921/

ChunkLoadError: Loading chunk 49 failed.
(error: https://addons-amo.cdn.mozilla.net/amo-i18n-pl-5895ff0e3199834fe71a.js)
  at location (./src/locale/fr/amo.js?0bc5:7:1)
  at __webpack_require__ (./src/amo/components/LanguagePicker/index.js:36:28)
  at e/< (./src/amo/components/LanguagePicker/index.js:36:13)
  at G (./src/amo/utils/errors.js:14:14)
  at a (./src/amo/components/LanguagePicker/index.js:57:14)
...
(16 additional frame(s) were not displayed)
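
For context, locale bundles such as the amo-i18n-pl chunk above are fetched on demand through webpack dynamic imports, and webpack's runtime rejects with a ChunkLoadError when the chunk script fails to download or evaluate. Here is a minimal sketch of that pattern; the path and chunk name are illustrative, not the actual addons-frontend code:

// Minimal sketch of lazy-loading a locale bundle via webpack's dynamic import().
// The path and chunk name are illustrative.
function loadLocale(locale) {
  return import(
    /* webpackChunkName: "amo-i18n-[request]" */ `../locale/${locale}/amo`
  )
    .then((mod) => mod.default)
    .catch((error) => {
      // A failed or blocked network request for the chunk surfaces here
      // as a ChunkLoadError.
      console.error(`Failed to load locale chunk for ${locale}`, error);
      throw error;
    });
}
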
@kumar303
Contributor

kumar303 commented Jul 5, 2019

Sentry suggests that these errors may be preventing add-on installation. Because of this, and because 6.8K users have been affected so far, I'm marking it as a P1.

[Screenshot 2019-07-05 14 11 06]

@kumar303 changed the title from "ChunkLoadError: Loading chunk XX failed." to "ChunkLoadError: Loading chunk XX failed (during add-on installation)" on Jul 5, 2019
@willdurand
Member Author

Sentry suggests that these errors may be preventing add-on installation. Because of this, and because 6.8K users have been affected so far, I'm marking it as a P1.

I tried clicking the install button but I cannot reproduce...

@willdurand
Member Author

willdurand commented Jul 5, 2019

We believe that the react-transition-group lib might be the problem. We merged an update in mozilla/addons-frontend#8226, and we found references to a SwitchTransition component in the Sentry logs. The lib was updated again in mozilla/addons-frontend#8241, after the tag, and that update mentions the SwitchTransition component.

We think that updating the lib might fix this problem.
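
For reference, SwitchTransition (added in react-transition-group v4) is used roughly like this; a minimal sketch based on the library's docs, not the actual addons-frontend code:

import React from 'react';
import { SwitchTransition, CSSTransition } from 'react-transition-group';

// Minimal sketch: cross-fades its content whenever `itemKey` changes.
// Everything except the library imports is illustrative.
const FadeSwitch = ({ itemKey, children }) => (
  <SwitchTransition mode="out-in">
    <CSSTransition key={itemKey} classNames="fade" timeout={200}>
      <div>{children}</div>
    </CSSTransition>
  </SwitchTransition>
);

export default FadeSwitch;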

@kumar303
Contributor

kumar303 commented Jul 5, 2019

We believe that the react-transition-group lib might be the problem.

Specifically, the stack traces begin with this:

[Screenshot 2019-07-05 14 21 55]

There might be additional clues on what was fixed in 4.2.1 here: reactjs/react-transition-group#516

If upgrading to 4.2.1 doesn't stop the errors, we will try downgrading mozilla/addons-frontend#8226

@kumar303
Contributor

kumar303 commented Jul 5, 2019

Tag prepped for a hotfix: mozilla/addons-frontend@2019.07.04...2019.07.04-1

@willdurand
Member Author

The hotfix has been deployed to production, but we're still seeing the same Sentry events 😞

@willdurand
Member Author

willdurand commented Jul 8, 2019

If upgrading to 4.2.1 doesn't stop the errors, we will try downgrading mozilla/addons-frontend#8226

I am going to try that in 2019.07.04-2, see: mozilla/addons-frontend@2019.07.04-1...2019.07.04-2

@ioanarusiczki

ioanarusiczki commented Jul 8, 2019

@willdurand I had some errors at first on a new profile (FF67 unbranded, Win10) on stage, which I can no longer reproduce now.

[Screenshot: errors]

@willdurand
Member Author

I am going to try that in 2019.07.04-2, see: 2019.07.04-1...2019.07.04-2

We're still seeing the same Sentry events after the second hotfix was deployed. I believe this has nothing to do with react-transition-group...

It could be caused by mozilla/addons-frontend@5c1cfa7, maybe.

@willdurand
Member Author

Sentry stats:

[Screen Shot 2019-07-08 at 15 37 47]
[Screen Shot 2019-07-08 at 15 38 07]

  • Device: 61% are smartphone
  • OS: 40% Windows 10, 24% Windows 7

Still, I cannot reproduce with a fresh Windows 10 + FF 67.

@AlexandraMoga

I've also tested on Android 8.0 with Fx67 and I could not reproduce the issue.

@willdurand
Member Author

Most (all?) stack traces have a reference to the lang picker. That's weird, because the lang picker does not use CSS transitions, and updating its value reloads the page.
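
For context, the pattern described, where changing the language triggers a full page load rather than a client-side transition, looks roughly like this; a purely hypothetical sketch, not the actual LanguagePicker code:

import React from 'react';

// Hypothetical sketch: selecting a new language navigates to the same path
// under the new locale prefix, which triggers a full page reload (so no
// CSS transitions, and any in-flight chunk request is abandoned).
const LanguagePickerSketch = ({ currentLocale, locales, path }) => (
  <select
    defaultValue={currentLocale}
    onChange={(event) => {
      window.location.href = `/${event.target.value}${path}`;
    }}
  >
    {locales.map((locale) => (
      <option key={locale} value={locale}>
        {locale}
      </option>
    ))}
  </select>
);

export default LanguagePickerSketch;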

@willdurand
Member Author

Also, re: polyfills: the polyfills we're using in addons-frontend are likely not needed with FF 67+, so I am not sure the updated polyfills are the cause.

Some Babel deps were also updated in last week's tag, so some behavior may have changed there as well.

@kumar303
Contributor

kumar303 commented Jul 8, 2019

This seems to affect a high number of users, but there are no actual reports from users, which is atypical considering our userbase. A likely explanation is that it's not breaking the site in any visible way, so I'm dropping the priority.

It remains a mystery; none of us can reproduce it. The webpack issues linked above suggest that it could be an intermittent problem caused by the user's network, or by a browser extension on the client machine that blocks webpack from loading resources.

It would still be nice to figure this out, simply because something in this deployment changed to cause it.

@muffinresearch
Contributor

I wonder if Sentry could be logging something on unload, e.g. after a lang picker interaction, during the start of the reload?
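
One way to test that theory would be to flag such events at capture time. A minimal sketch using the Sentry browser SDK's beforeSend hook; the unload flag and DSN are assumptions for illustration, and addons-frontend's actual Sentry client may differ:

import * as Sentry from '@sentry/browser';

// Remember whether the page is tearing down (e.g. right after the lang
// picker triggers a full reload), so errors captured then can be flagged.
let isUnloading = false;
window.addEventListener('beforeunload', () => {
  isUnloading = true;
});

Sentry.init({
  dsn: 'https://[email protected]/1', // placeholder DSN
  beforeSend(event, hint) {
    const error = hint && hint.originalException;
    if (error && error.name === 'ChunkLoadError' && isUnloading) {
      // Tag (or return null to drop) chunk errors that happen mid-reload.
      event.tags = { ...event.tags, during_unload: 'true' };
    }
    return event;
  },
});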

@willdurand
Member Author

I see that this error appeared after last week's push: https://sentry.prod.mozaws.net/operations/addons-frontend-amo-prod/issues/6015991/. It seems very similar to the error linked to this GitHub issue. There is a reference to core-js, which was updated in the last push following the migration guides: https://babeljs.io/docs/en/babel-polyfill, https://babeljs.io/blog/2019/03/19/7.4.0#core-js-3-7646-https-githubcom-babel-babel-pull-7646 and https://github.com/zloirock/core-js#babelpolyfill.

That being said, I only just noticed that the Babel polyfill requires core-js@2, so the best explanation I have right now is that there is a subtle backwards-compatibility break between core-js@2 and core-js@3.

Another issue to take into account: I believe the core-js update was necessary to be able to upgrade other dependencies. For example, Storybook uses its own babel/webpack setup and it requires core-js@3 (if I am not mistaken).
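
For reference, the core-js@2 to core-js@3 migration described in the linked guides boils down to roughly the following; this is a sketch of the documented setup, not necessarily what addons-frontend ended up with:

// babel.config.js, core-js@3 style (per the Babel 7.4 migration guide)
module.exports = {
  presets: [
    [
      '@babel/preset-env',
      {
        useBuiltIns: 'usage', // inject only the polyfills the code actually uses
        corejs: 3, // previously this pointed at core-js@2's polyfill set
      },
    ],
  ],
};

// With core-js@3, @babel/polyfill is deprecated; with useBuiltIns: 'entry'
// the entry point would import 'core-js/stable' and
// 'regenerator-runtime/runtime' instead.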

@willdurand
Member Author

NS_ERROR_FAILURE seems to be a general FF error according to https://developer.mozilla.org/en-US/docs/Mozilla/Errors:

NS_ERROR_FAILURE (0x80004005)
This is the most general of all the errors and occurs for all errors for which a more specific error code does not apply.

@kaushalinfosys

I am using an AsyncComponent HOC to lazy-load chunks, and was facing the same issue.
The workaround I used is to identify the error and do a hard reload once:

.catch(error => {
  if (error.toString().indexOf('ChunkLoadError') > -1) {
    console.log('[ChunkLoadError] Reloading due to error');
    window.location.reload(true);
  }
});

The full HOC file looks like this:

import React from 'react';

// HOC that renders a lazily-loaded component once its chunk resolves.
// `props.load` is a promise, e.g. the result of a dynamic import().
export default class Async extends React.Component {
  componentWillMount = () => {
    this.cancelUpdate = false;
    this.props.load
      .then(c => {
        this.C = c;
        if (!this.cancelUpdate) {
          this.forceUpdate();
        }
      })
      .catch(error => {
        // If the chunk failed to load (e.g. a stale hash after a deploy),
        // force a hard reload so the browser fetches the new manifest.
        if (error.toString().indexOf('ChunkLoadError') > -1) {
          console.log('[ChunkLoadError] Reloading due to error');
          window.location.reload(true);
        }
      });
  };

  componentWillUnmount = () => {
    this.cancelUpdate = true;
  };

  render = () => {
    const props = this.props;
    return this.C ? (
      this.C.default ? (
        <this.C.default {...props} />
      ) : (
        <this.C {...props} />
      )
    ) : null;
  };
}
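
For completeness, here is a usage sketch of the HOC above; the wrapped component name and import paths are made up:

import React from 'react';
import Async from './Async'; // hypothetical path to the HOC above

// Pass the dynamic import() promise as the `load` prop; if the chunk
// fails with a ChunkLoadError, the HOC hard-reloads the page once.
const LazySettingsPage = (props) => (
  <Async load={import('./SettingsPage')} {...props} />
);

export default LazySettingsPage;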

@willdurand
Member Author

Interesting, thanks @kaushalinfosys. Would you know the root cause, though?

@willdurand removed their assignment on Mar 16, 2020
@kaushalinfosys

kaushalinfosys commented Mar 16, 2020

Interesting, thanks @kaushalinfosys. Would you know the root cause, though?

Yes, the root cause is that webpack changes the chunk file name on every build when it considers the code to have changed; see this setting in the webpack.config.prod.js file:
chunkFilename: 'static/js/[name].[chunkhash:8].chunk.js',

If we keep the application tab open while a PROD deployment happens, and then try to navigate without reloading, this error occurs because the chunk references in main.[hash].js are stale: the lazy-loaded chunk file names have already changed on the server, so the main file cannot find those chunks.
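
To make the failure mode concrete, here is an illustrative excerpt of the kind of output configuration quoted above (not the exact file):

// webpack.config.prod.js (illustrative excerpt)
module.exports = {
  output: {
    // The already-loaded main bundle embeds a manifest that maps chunk ids
    // to these hashed file names at build time.
    filename: 'static/js/[name].[contenthash:8].js',
    chunkFilename: 'static/js/[name].[chunkhash:8].chunk.js',
  },
};
// After a new deploy, a tab still running the old main bundle requests
// static/js/49.<old-hash>.chunk.js, which no longer exists on the server,
// so the dynamic import rejects with a ChunkLoadError.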

@willdurand
Member Author

Mm, thanks. That's interesting, I hadn't thought about that. I am confused, though, because errors keep coming in to Sentry almost all the time, and we only deploy once a week 🤔

@hannadrehman

hannadrehman commented May 6, 2020

@kaushalinfosys I initially thought the same, but I have observed that even when no deployment has been done, I still get this ChunkLoadError several times a day.

@stale

stale bot commented Nov 3, 2020

This issue has been automatically marked as stale because it has not had recent activity. If you think this bug should stay open, please comment on the issue with further details. Thank you for your contributions.

The stale bot added the state:stale label on Nov 3, 2020
@bobsilverberg
Contributor

@willdurand Do you know where things stand with these errors? Are we still seeing a lot of them, or did the problem magically go away?

@willdurand
Member Author

@willdurand Do you know where things stand with these errors? Are we still seeing a lot of them, or did the problem magically go away?

still there AFAIK

The stale bot closed this as completed on Nov 19, 2020
@muffinresearch reopened this on Mar 1, 2021
@muffinresearch added the neverstale label on Mar 1, 2021
The stale bot closed this as completed on Mar 19, 2021
@KevinMind added the migration:no-jira and repository:addons-frontend labels on May 5, 2024
@KevinMind transferred this issue from mozilla/addons-frontend on May 5, 2024