0.9 Design Plan
We created an MVP (essentially) for the 0.1 octopi. It has worked all right, but the engineering around it is truly difficult. To say there is some technical debt would be an understatement, but such is expected with a project of this type (notably: project specifications were partially unknown during development, developers were new to both the project and the framework, and requirements changed during development).
Working on a redesign should be top priority. We can call it 0.9 so that once it is in decent shape, we can just do bugfix and launch it as 1.0.
- Thick models, thin controllers
- This is critically important because it makes unit testing much easier, and MUCH MUCH more efficient
- Testing models is 'easy', testing controllers is much harder
- Use POROs (plain old Ruby objects) when we need them.
- This goes with the same concept as thick models. This makes our code easier to use, and easier to test.
- Our testing right now sucks: most of it is integration testing, which breaks really easily, and we have WAY too many controller methods and the like
- Reuse CSS as much as possible
- Right now we have too many custom views, custom widths, etc etc. Let's just simplify everything, use much more bootstrap stuff, and try to be much more responsive.
- Look at other projects for inspiration whenever vaguely lost / unsure of where to put stuff (model vs controller)
- Code.org is almost entirely open source now. Their main project is dashboard, but their teacher interface is in pegasus, and written in angular.js. I haven't looked at the JSON endpoints for their teacher interface, but they are likely in dashboard.
- As much as possible, keep all permissions for cancancan within the RESTful actions.
- Don't do any of this stupid `can? :add_teacher` crap, or `authorize_resource!` on every method. Instead, use `check_authorization` at the top, to ensure that an authorization check happens for each action, and then just do `can? :edit, resource` (for example) at the first line of a method.
- Don't check in static vendor assets
- This is a prime way to get out-of-date, crappy javascripts that break. Use gems that just provide the asset files, and use bower in other places (see http://dotwell.io/taking-advantage-of-bower-in-your-rails-4-app/ and https://gist.github.com/afeld/5704079 and https://rails-assets.org/ or http://www.codefellows.org/blog/five-ways-to-manage-front-end-assets-in-rails)
- Open source from the start
- This will allow us to use many publicly available build tools
- Automated code review
- Codacy for JS
- PullReview for Ruby
- Linting
- CoffeeLint?
- JSLint?
- SCSS/CSS Lint?
- Ruby lint?
- CI for testing and deployment
- All deploys to Heroku should be automated
- All branches should pass tests before merging in
- Autoscaling on Heroku
- We should also do performance analysis of critical paths. Use that 290 knowledge!
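To make the thick-model/PORO and cancancan points above concrete, here is a minimal sketch. `TeacherInvitation`, its fields, and the controller shown in the comment are hypothetical names for illustration, not code from octopi:

```ruby
# A PORO holds the business logic, so it can be unit tested with zero Rails
# overhead. The class name and validation rules here are invented examples.
class TeacherInvitation
  attr_reader :errors

  def initialize(school_class, email)
    @school_class = school_class
    @email = email
    @errors = []
  end

  # Pure logic: trivial to exercise in a fast model/PORO test.
  def valid?
    @errors << 'email is blank' if @email.to_s.strip.empty?
    @errors << 'class is full' if @school_class[:teacher_count] >= 5
    @errors.empty?
  end
end

# The controller then stays thin (shown as a comment since it needs Rails):
#
# class TeachersController < ApplicationController
#   check_authorization   # fail loudly if any action forgets an auth check
#
#   def create
#     raise CanCan::AccessDenied unless can? :edit, @school_class
#     invitation = TeacherInvitation.new(@school_class, params[:email])
#     # ... render based on invitation.valid? / invitation.errors ...
#   end
# end
```

With this split, the fast unit tests live against `TeacherInvitation`, and the controller needs little or no dedicated testing of its own.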
We currently use papertrail.com for our logging, and newrelic for our performance monitoring. Sadly, we are using the BARE MINIMUM feature set of these products. The worst part is that the current version of octopi 'nicely' handles a few errors which would be GREAT to have logged, but we instead completely ignore them!
We should redesign the system from a product perspective to be logging- and monitoring-first. Most errors should be logged if possible. I'm not sure if newrelic + papertrail is the best place for this logging, nor the best set of tools; there may be better gems + suites we should use, so it's probably a good time to explore Heroku's offerings. Also, it might be a great time to create our own PORO that wraps our logger.
To keep track of all of the settings we load from our environment, we should be doing it all in a LaPlaya singleton (like the present Octopi singleton), as seen on CDO or https://github.com/jhenkens/rails-git-rugged-test. We can also use this Singleton to prepare our PORO wrappers for our loggers.
- Unify Staff and Students back into one user model.
- This means that, by default, everyone is a student. This greatly simplifies a huge amount of logic in terms of managing users, classes, allowing teachers to experience the curriculum, etc.
- The one thing we have to figure out is how to separate login methods
1. We want to continue to have a simple user sign-on for students, but we could potentially add more (like code.org). These simple sign-ons are purely secondary! Any user should be able to have a standard 'secure login', but it should be able to be 'unset', such that it is unavailable until a student manually sets it after a teacher creates their account.
2. For a class, we could allow a teacher to have randomly generated passwords from a dictionary for users
- Eliminating the always-present student login portal would probably be a good idea. Instead, we should create login paths for each class based on a 3-word custom path.
- Load the curriculum from a git repository into the database
- This would provide versioning for us, with human readable diffs
- Super admins would have an interface to start a background task that, in one transaction, drops the curriculum databases, and reloads from the git repository
- Loading from git, and getting the git hash, has been prototyped in https://github.com/jhenkens/rails-git-rugged-test
- Curriculum hierarchy should be decoupled
1. This means a single task can be present in multiple activities, modules, etc.
2. We can then make an 'open to the world' curriculum, using a few of our favorite tasks from the main curriculum, extremely easily
- Each module and task should have a short-name that we use for referencing it between modules/activities, as well as for constructing pretty URLs.
- Resources should be separate but shared. Create a video database that has video-name and URLs to youtube/s3-hosted copies of the video. Each module/activity/task can optionally have this video, which can optionally autoplay. See Code.org. We need both youtube and an s3-hosted webm/mp4 so that we can use `video.js`, with youtube priority and html5 fallback.
- Each database load should set a git hash based on last modification for each curriculum element, and should also set some global string for the current version of the curriculum.
- Pictures, just as videos, should be manually uploaded to s3 in a static folder in the bucket, and kept track of in an index file.
- Write responses to all tasks to a write-only log, not the database
- This reduces database strain, and allows easier data collection.
- We can add an optional flag to tasks to allow 'continued progression', in which it is saved
- We can record the users progression in terms of 'attempted', 'finished', 'finish with perfect answer', ala Code.org's completion in the database so that we can provide view elements for them.
- Each line can have like student_id, task_name, current_datetime, git_hash_of_task_modification, git_hash_of_curriculum_load, task_data, etc. etc.
- We only write for users which we have said that we can collect data on.
- Eliminate all notion of server side analysis. It is unimportant. Just trust the response from the javascript.
- For read only LaPlaya access, we should be putting the data in the initializer XML, rather than using a readonly fileID. This would simplify permissions greatly.
- Create some sort of ephemeral user for those who are not logged in. They can try their hand at our 'open to the world' curriculum, and it will keep track of their progress, and record it if they then convert to a registered user.
- Allow anyone to register for our site as a student. We want this to be much more open.
- Complete redesign of CSV import feature for class list.
- Level editing should not be possible on `RAILS_ENV=='production'`.
- We can make a second RAILS_ENV that enables level editing, and run it as a separate instance on heroku. For each task, we can then have an export feature that the editor can then put into github and reload onto production if it is ready.
- The usage of github for tracking the curriculum is critical for a research presentation. It logs everything we are doing for us.
- All data that we want to use for 'research' purposes should be logged immediately and not need the live database at all. Data should be exported as it is generated to some logger service, which we can hopefully do a weekly export to s3 or something.
- Load file from dictionary itself when we are doing read only stuff. There is no reason we should be passing a fileID around.
- Actually track time spent and record it
Student first, curriculum second.
Although the purpose of octopi is to collect data to improve computer science education, we need to realize that the primary audience we need to target is a student, and not just students in classes, but the outside student as well. The best way to get more people interested in the platform, to get more people learning computational thinking, is to make it easy and fun for people to try the site without having to convert them to a registered user first!
TEST EARLY, TEST OFTEN. One of the main failures of Octopi 0.1 was a lack of model testing. Controller testing is too time-consuming to do for every action. For the most part, if you think a controller action might warrant a test, you should probably just refactor that action into a model or PORO method.
- We should perform rate limiting on all sign-on related endpoints.
- A word-based URL for classes may not be considered secure enough. Maybe it should always be an alphanumeric string.
- We could create a 'school login page', that an entire school could bookmark. This would provide links to all the class login pages for that school.
- We could also revisit the idea of adding IP ranges for a school
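In production we would probably use a middleware gem such as rack-attack for the rate limiting mentioned above, but the policy itself is simple. A minimal fixed-window sketch, with invented names and limits:

```ruby
# Hypothetical per-IP rate limiter for sign-on endpoints: allow at most
# `limit` attempts per `window_seconds` from one IP. The injectable clock
# exists purely so the behavior can be tested deterministically.
class SignOnRateLimiter
  def initialize(limit:, window_seconds:, clock: -> { Time.now.to_i })
    @limit = limit
    @window = window_seconds
    @clock = clock
    @hits = Hash.new { |h, k| h[k] = [] } # ip => [attempt timestamps]
  end

  # Returns true if the attempt is allowed, false if the IP is throttled.
  def allow?(ip)
    now = @clock.call
    bucket = @hits[ip]
    bucket.reject! { |t| t <= now - @window } # drop attempts outside window
    return false if bucket.size >= @limit

    bucket << now
    true
  end
end
```

A real deployment would back this with a shared store (Redis or similar) rather than in-process state, since Heroku dynos don't share memory.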
Data heavy pages may benefit from being constructed using a frontend framework like angularjs, emberjs, or backbonejs and a JSON backend. Specifically this should be a consideration for the teacher interface, as CDO does.
ActiveAdmin should probably be ditched entirely. It is way too heavy, and quite a pain. If we do leave it, the routes should be restricted to super_staff only, and it should be STRICTLY used for rarely used management.
For assets, I do not believe that we actually need to deploy them to s3. I think we can set up cloudfront to point to the rails app itself as the origin. This will make first requests slower (and costlier), but since we are only using it for assets that are part of the git repo, it shouldn't be an issue. See this article by codeship.io. The alternative would be to use Travis/Codeship/Semaphore to actually build the static assets for us, upload them to s3, and then skip asset compilation on Heroku deploy if possible. Travis mentions something about leaving 'artifacts' during the push to Heroku. I am interested in how this works... http://docs.travis-ci.com/user/deployment/heroku/#Deploying-build-artifacts
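The CloudFront-in-front-of-the-app option is mostly configuration. A hedged sketch for a Rails 4 production config; the ENV key and distribution host are placeholders, not our real values:

```ruby
# config/environments/production.rb (fragment, illustrative only)
Rails.application.configure do
  # Point asset URLs at the CloudFront distribution, whose origin is this
  # Rails app itself, so no S3 upload step is needed.
  config.action_controller.asset_host = ENV['CLOUDFRONT_HOST'] # e.g. "dXXXX.cloudfront.net"

  # Long-lived cache headers so CloudFront keeps fingerprinted assets hot
  # after the (slower, costlier) first request.
  config.static_cache_control = 'public, max-age=31536000'
end
```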
Create a user manual for the site, maybe as a Jekyll site on GitHub, that we can keep up to date.