
Easier configuration for CDN backends #37

Open
acdha opened this issue Oct 9, 2012 · 0 comments

acdha commented Oct 9, 2012

Background - my need:

  1. All files are managed & distributed locally.
  2. Servers use CachedFilesMixin to generate hash-busting URLs which are served from static.example.org, a simple alias for S3 with far-future Expires headers, CORS, etc. (see the settings sketch after this list).
  3. During deployment, collectstatic populates S3 as usual. This ensures that even during a rolling upgrade across multiple servers, there's no way for a response to reference a file before it has been collected to S3.
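
For reference, the settings this arrangement assumes boil down to something like the following sketch (the domain and bucket name are illustrative, and STATICFILES_BUCKET_NAME is my own setting, consumed by the storage subclass shown below):

STATIC_URL = 'https://static.example.org/'   # public alias for the S3 bucket
STATICFILES_BUCKET_NAME = 'example-static'   # custom setting read by the subclass below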

I have this implemented using django-storages' S3BotoStorage.

This required a couple of storage subclasses to prevent the internal S3 URLs from leaking out to the public:

from urlparse import urljoin

from django.conf import settings
from django.contrib.staticfiles.storage import CachedFilesMixin
from storages.backends.s3boto import S3BotoStorage


class PublicS3BotoStorage(S3BotoStorage):
    def url(self, name):
        # Serve everything from the public STATIC_URL alias rather than
        # the bucket's native S3 URL:
        name = self._normalize_name(self._clean_name(name))
        return urljoin(settings.STATIC_URL, name)


class CachedS3StaticFileStorage(CachedFilesMixin, PublicS3BotoStorage):
    def __init__(self, bucket=None, *args, **kwargs):
        # Always use the dedicated staticfiles bucket:
        kwargs.update({'bucket': settings.STATICFILES_BUCKET_NAME})
        super(CachedS3StaticFileStorage, self).__init__(*args, **kwargs)
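
Wiring that in is then just the usual one-liner (the module path here is hypothetical):

STATICFILES_STORAGE = 'mysite.storage.CachedS3StaticFileStorage'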

This feels a little baroque but it works. Unfortunately, it also means that every {% static %} call resolves hashed names against S3 rather than the local filesystem, which hurts performance and reintroduces exactly the problem I was hoping to avoid when multiple versions are running:

  1. version1 uploads both the hashed and unhashed files (e.g. common.css). Initially this works & the hashed-name translation is cached locally.
  2. version2 is deployed to a different cluster, creating a new hashed file and clobbering the unhashed common.css.
  3. The first cluster's cache expires and it regenerates the hash, getting version2's content rather than the version1 it has installed locally.

I've rearchitected a bit: I now always use CachedStaticFilesStorage and call s3cmd sync during deployment, but this feels like something which should be easier to do with staticfiles. After some thought, it seems this could be solved by either extending collectstatic to accept a target or building a storage backend which copies collected files to multiple backends. In either case it would also be extremely useful to either prevent uploading the unhashed files outright or support a regexp whitelist, to avoid mixing versioned and unversioned resources unless the user has a good reason.
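
To make the second idea concrete, here's a minimal sketch of a mirroring backend, not working code from my deployment: the class name, the mirror handling, and the 12-hex-digit hash pattern (matching the names CachedFilesMixin generates) are all illustrative.

import re

from django.core.files.storage import FileSystemStorage


class MirroredStaticStorage(FileSystemStorage):
    # Matches content-addressed names like common.abcdef123456.css, the
    # form CachedFilesMixin produces; adjust if your hashing differs.
    HASHED_RE = re.compile(r'\.[0-9a-f]{12}\.')

    def __init__(self, mirrors=(), *args, **kwargs):
        super(MirroredStaticStorage, self).__init__(*args, **kwargs)
        self.mirrors = mirrors  # e.g. (PublicS3BotoStorage(),)

    def _save(self, name, content):
        # Collect locally as usual...
        name = super(MirroredStaticStorage, self)._save(name, content)
        # ...then mirror only hashed names, so unversioned files like
        # common.css never reach the shared CDN bucket:
        if self.HASHED_RE.search(name):
            for mirror in self.mirrors:
                content.seek(0)
                if mirror.exists(name):
                    # Storage.save() would otherwise pick a new name on collision
                    mirror.delete(name)
                mirror.save(name, content)
        return name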
