RFC: Backstroke Migration #66
I have a paid instance of deployhq.com, so if you want I can "donate" the deploy tool and then you can stop worrying about that 😄
@m1guelpf Are you talking about the current deployment, or the new architecture I'm proposing? I'll do some research this weekend and see whether it would be helpful for the current state of affairs (I'm unfamiliar with the service), but I think for the new architecture I'd like to try out some sort of immutable deployment such as Docker (and at a cursory glance, DeployHQ doesn't seem immutable). Thanks!
@1egoman I was talking about the old one. I don't think it supports Docker or any other type of immutable deployment... :sad:
Cool. I'll do some research this weekend. 😄
@m1guelpf It doesn't look like DeployHQ works with Heroku, so thanks for the offer, but I don't think it'll be helpful for maintaining the current Backstroke version 🙁
@1egoman It sounds like https://zeit.co/now does what you want?
@eins78 Interesting, I'll do some research and see if now could fit Backstroke's needs.
I love the idea of now! However, https://zeit.co/rgausnet/server/xvoeisygwy shows that the container takes a very long time to start, though the logs seem to show that the container actually started previously. Once the container did start, I repeatedly get 502s: https://server-xvoeisygwy.now.sh/ @rauchg Any help you could provide with this would be appreciated. I've also sent a support email.
Update: I've secured the domain. Also, I've mostly finished work on the legacy backstroke service (now to be hosted at …).
I've spent the last week or so writing a deployment script for Backstroke. My current plan is to host all services on a DigitalOcean droplet, with each service running within Docker. In the near term, I plan to use docker-compose to spin up all services on server start, since I'm not too concerned with scaling right off the bat. (If I want to scale the service further, I might try Nomad.)

My goal was to run all these services on the smallest-size droplet (1 core, 512MB RAM), but it looks like that is going to be near impossible. Between haproxy, Docker, and two node processes, the instance runs out of memory within a couple of minutes. I'm now running on the 2nd-smallest droplet (2 cores, 1GB RAM), and I can run Docker, Redis, and three node processes.

I'd prefer to rely on a third-party service for hosting the database rather than do it myself, though depending on cost it may make sense for me to just figure it out on my own. Currently, I'm relying on a Heroku free-tier database with a 10,000-row limit, linking to it externally. Unfortunately, this means that I'm going to be spending a bit out of pocket for now - hopefully this new version will gather some more Gratipay donations and can be self-sufficient!

The deployment scripts aren't ready to open source yet (with secrets redacted, of course), but I'll post a link once they are ready. All the existing services in the diagrams above are also now hosted at backstroke.co. I'm hoping these updates provide transparency into how Backstroke's upcoming release is shaping up. Are these helpful?

Thanks for using Backstroke! ❤️
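For illustration, a docker-compose file for the kind of setup described above might look roughly like this. This is only a sketch under my own assumptions - the service names, images, and memory limits are guesses, not the actual deployment:

```yaml
# Hypothetical docker-compose sketch; names and images are illustrative,
# not Backstroke's real deployment configuration.
version: "2"
services:
  haproxy:
    image: haproxy:1.7
    ports:
      - "80:80"
    mem_limit: 64m
  redis:
    image: redis:3.2
    mem_limit: 64m
  api:
    image: backstrokeapp/api        # hypothetical image name
    environment:
      - REDIS_URL=redis://redis:6379
      - DATABASE_URL=postgres://...  # external Heroku Postgres
    mem_limit: 256m
  worker:
    image: backstrokeapp/worker     # hypothetical image name
    environment:
      - REDIS_URL=redis://redis:6379
    mem_limit: 128m
```

The `mem_limit` entries are the interesting part on a 1GB droplet: they cap each container so one runaway node process can't starve the rest.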
I finished up the deployment repository. https://github.com/backstrokeapp/deployment |
A number of helpful things have happened since the last update:
I'm nearly ready to release this thing. I'm a bit worried that once it's released, I will have forgotten to verify an edge case and I'll get an angry issue, but I think I just need to bite the bullet. My goal is to release this new stuff by next weekend.
Pre-deployment
Deployment checklist
Verify
Take down old stuff (do this once sure that the new stuff is stable)
The deployment happened at 3pm EDT on October 7th, 2017. The service was down from 3pm to 3:10pm. I'm glad that this new stuff is finally deployed. Over the next week or so, I expect a few issues to come in with scenarios that I didn't take into account when working on the new stuff, but all in all, I'm pretty satisfied with this release.

Dashboard: https://github.com/backstrokeapp/dashboard/releases/tag/v2.0.0

In a few weeks, I'll complete the migration by taking down all the old stuff on Heroku and close this issue.
I'm working on a rewrite of Backstroke. This has been a long time coming (over 6 months!), but I feel that it makes the system much more stable and predictable. In its current state, deploying updates to the live system is a challenge (and as a consequence, I haven't done it for months). This isn't something I'm all that good at, so I'd love for anyone more experienced than me to let me know what I'm doing right and what I'm doing wrong.
Current System Architecture
What currently exists is deployed on Heroku on a free dyno, using an mlab sandbox database.
Serious problems with the current approach
Rewrite plan
In general, I want to split the system into a number of smaller services. One of the biggest changes involves link updates: the current plan is to put all link update operations into a queue, with workers at the end that perform the actual updates. As a consequence, the response to `curl -X POST https://backstroke.us/_linkid` will return something like this: […]

Then, to get the status of the webhook operation, make a call to `https://api.backstroke.us/v1/operations/id-of-thing-in-queue-here`, which returns something like this: […]

The other large change is less of a reliance on webhooks. They are a side effect that is a pain to manage. Currently, links store two values: the last-updated timestamp and the last known SHA at the head of the upstream's branch. Every couple of minutes, a timer runs in the background that finds all links that haven't been updated in 10 minutes (in this way, link updates are staggered so only a subset of all links is updated in each pass). If a link hasn't been updated in 10 minutes, then the SHA of the upstream branch is checked, and if it differs from the stored SHA, an automatic link update is added to the queue. Currently, this functionality lives in the `api.backstroke.us` service below, but once that service has to be scaled past one instance, that functionality would probably be extracted into another service.

Services in green are ones that I have already set up, and services in red are ones that haven't been written yet:
NOTE: All green services are actually deployed. Check them out! :) Things may change though, so don't be surprised if I clear the database or something.
- `backstroke.surge.sh` - The new website. Code can be found here. I think it more accurately portrays Backstroke with its upcoming changes.
- `legacy.backstroke.us` - Many people are still using Backstroke Classic. To maintain backward compatibility, I need to run a service that emulates the old behavior. This still needs to be written.
- `backstroke.us` (nginx) - A reverse proxy to run at `backstroke.us`, directing all POST requests to `legacy.backstroke.us` and all GET requests to `backstroke.surge.sh`. Required to keep Backstroke Classic working.
- `app.backstroke.us` - The new dashboard. It simplifies the process of link management significantly. Screenshots and code are here.
- `api.backstroke.us` - Manages user authentication and link CRUD. This is the only service that is connected to the database, which means that it's the only stateful service. This is a massive win. This service also handles adding webhook operations to Redis for the worker, either on a timer or when a user pings a webhook URL.
- Backstroke Worker - The worker reads operations from the Redis queue, performs them, and puts the results back in Redis to be displayed by the `api.backstroke.us` service. The worker is stateless, small, and well tested.

How I'm planning on fixing the serious problems:
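To make the queue flow described above concrete, here is a minimal sketch of how the staggered SHA check and the worker could interact. This is my own illustration, not Backstroke's actual code: a plain array stands in for the Redis queue, and `fetchUpstreamSha` / `performOperation` are hypothetical stubs.

```javascript
// Hypothetical sketch of the queue flow: an array stands in for Redis,
// and the callbacks are stubs for GitHub API calls and merge work.
const TEN_MINUTES = 10 * 60 * 1000;

// Timer body: enqueue an update for any link that hasn't been checked in
// the last 10 minutes and whose upstream SHA has moved.
async function enqueueStaleLinkUpdates(links, fetchUpstreamSha, queue, now = Date.now()) {
  for (const link of links) {
    if (now - link.lastUpdatedAt < TEN_MINUTES) continue; // stagger updates
    const upstreamSha = await fetchUpstreamSha(link);
    if (upstreamSha !== link.lastKnownSha) {
      queue.push({ id: `op-${link.id}`, type: 'UPDATE_LINK', linkId: link.id });
    }
    link.lastUpdatedAt = now;
    link.lastKnownSha = upstreamSha;
  }
}

// Worker body: pop one operation, perform it, and record the result so
// the api service could answer GET /v1/operations/:id.
async function runWorkerOnce(queue, results, performOperation) {
  const op = queue.shift(); // with Redis this would be a (blocking) list pop
  if (!op) return false;
  try {
    results[op.id] = { status: 'OK', output: await performOperation(op) };
  } catch (err) {
    results[op.id] = { status: 'ERROR', error: String(err) };
  }
  return true;
}
```

Note that the scheduler and worker share only the queue and the results map, which is what keeps the worker stateless and easy to test in isolation.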
Deployment
Before, this service was deployed on Heroku. I'm currently pursuing a sponsorship from DigitalOcean (they've said they'll give Backstroke $350 in free credits, but that was a few months ago; I need to follow up with them).

If I'm unable to secure the DigitalOcean sponsorship (which is what it is looking like), then deployment is up in the air. I'm currently still deploying all the new services on Heroku as free dynos, utilizing Heroku Postgres and Heroku Redis for the stateful components of the system. Through Gratipay, we have about $4 a month available to put towards infrastructure. I think this could all be hosted on one DigitalOcean droplet of the smallest size, which is $5/mo. AWS, Google Cloud Platform, and other providers should be explored too; though I don't have as much experience with them, they could also work out.
Questions for others
❤️ A thanks to all users - Backstroke has been a fun project to grow over the past year and a half. I hope we can make it better together!
Ryan Gaus, @1egoman
A number of users who have reported issues or commented on issues that may have opinions on these changes: @evandrocoan @thtliife @gaearon @eins78 @radrad @jeremypoulter @johanneskoester @m1guelpf