
Packaging the configs into RPM #174

Merged (12 commits) May 22, 2019
Conversation

@kushaldas (Contributor) commented Oct 26, 2018

Fixes #171.

We can now create a source tarball using the `python3 setup.py sdist` command,
and then use the included spec file to build an RPM.

How to test?

  • In sd-dev (or another dev AppVM), run make dom0-rpm to create the RPM package.
  • Switch to dom0 for all remaining commands.
  • Run make clone to fetch the latest dir, including the freshly built RPM.
  • Run make clean to remove any prior dev-env state.
  • Install the RPM: sudo dnf install ./rpm-build/RPMS/noarch/securedrop-workstation-0.0.1-1.fc25.noarch.rpm
  • Run make all.
  • Ensure no errors: echo $? should show 0.
  • Run make test and confirm no errors.

@redshiftzero (Contributor)

awesome! can you add a test plan for this?

@msheiny (Contributor) commented Oct 26, 2018

so i got this working with a slight tweak .. step 6 i had to run

docker run --rm -it -v "$HOME/rpmbuild:/home/user/rpmbuild" fpf/fedorapackager rpmbuild -ba SPECS/securedrop-workstation.spec

@msheiny (Contributor) commented Oct 26, 2018

i'd like to improve the automation story here.. and publish the docker image to quay as part of this ticket.

@kushaldas (Contributor, Author)

so i got this working with a slight tweak .. step 6 i had to run

docker run --rm -it -v "$HOME/rpmbuild:/home/user/rpmbuild" fpf/fedorapackager rpmbuild -ba SPECS/securedrop-workstation.spec

Means you did it from another directory I guess :)

@conorsch conorsch self-requested a review October 31, 2018 17:12
@conorsch (Contributor) left a comment

Blocked by #172; should be rebased on top of master after #172 is merged. Doing so will account for:

  • fixing CI flake8 errors
  • ensuring the new updater logic is present in the package

We also need docs on the build process, and ideally a script or Makefile target to keep the process on the rails for maintenance going forward.

@kushaldas (Contributor, Author)

fixing CI flake8 errors

That reminds me: as in our other projects, I am going to exclude setup.py from the flake8 checks. The same is done for securedrop-client too.
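For reference, such an exclusion is typically a one-line entry in the linter config (illustrative only; the actual file name and contents in this repo may differ):

```ini
; .flake8 (illustrative sketch, not the repo's actual config)
[flake8]
exclude = setup.py
```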

@conorsch (Contributor)

#172 has been merged, so ready for rebase and improvements here.

@kushaldas kushaldas force-pushed the packaging branch 3 times, most recently from a3f4548 to d8a5488 Compare November 1, 2018 07:35
@kushaldas (Contributor, Author)

@conorsch Can we please start using black for all of our projects and remove all of these stylecheck steps? Saves a lot of developer time too. We are already using it for securedrop-sdk.

@kushaldas kushaldas force-pushed the packaging branch 2 times, most recently from 3475cc0 to 3f146e9 Compare November 1, 2018 15:07
@conorsch (Contributor) commented Nov 1, 2018

Can we please start using black

Sure, works for me. It should be runnable locally as well as in CI, same as the flake8 checks. Open a ticket with a dev-env tag; that'll be an easy one to knock out soon.

@kushaldas (Contributor, Author) commented Nov 1, 2018

I am testing the rpm on dom0 right now, I will push some more changes here.

And it works :D

@conorsch (Contributor) commented Nov 1, 2018

@kushaldas Those 6 steps you've listed in the testing instructions should be scripted, and referenced in the top-level Makefile. Otherwise, after merge, we'll have to dig out this PR in order to reproduce the build.

cc @heartsucker

@kushaldas (Contributor, Author)

@kushaldas Those 6 steps you've listed in the testing instructions should be scripted, and referenced in the top-level Makefile. Otherwise, after merge, we'll have to dig out this PR in order to reproduce the build.

For now, @msheiny has tested out the steps; that will also require pushing a container image, among other steps.

@msheiny (Contributor) commented Nov 1, 2018

For now @msheiny tested out the steps, that will also require pushing a container image and other steps.

I can tackle either in this PR or another ticket? preference?

@conorsch (Contributor) commented Nov 1, 2018

@msheiny go ahead and add the Dockerfile config to https://github.com/freedomofpress/containers, then we can add the scripting steps here to pull from that once it's pushed

@msheiny (Contributor) commented Nov 1, 2018

we can add the scripting steps

okay! @conorsch can you clarify who we is? i'm fine helping out.. just dont want to butt heads

@conorsch (Contributor) commented Nov 1, 2018

@msheiny Good call; please add the steps directly to this PR, then I can use those scripts to test the RPM build!

@msheiny (Contributor) commented Nov 2, 2018

@conorsch ping for re-review

@emkll (Contributor) left a comment

After building the RPMs several times, I always get different hashes for the .rpm files. Am I missing something locally to make the build completely reproducible?
I also see the same behavior with python3 setup.py sdist: the tarball created has a different hash each time.
Given the contents of the package, is this something that is doable in this case?
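The usual culprits for differing archive hashes are embedded metadata: file ordering, mtimes, and ownership. A minimal local sketch (assuming GNU tar; this is not the project's actual build script) shows how pinning that metadata makes two builds of the same tree byte-identical:

```shell
# Sketch (assumes GNU tar >= 1.28): pin file order, mtimes, and ownership
# so that two archives of the same tree hash identically.
workdir="$(mktemp -d)"
mkdir -p "$workdir/pkg"
echo "hello" > "$workdir/pkg/file.txt"

build_tar() {
  tar --sort=name --mtime="@0" --owner=0 --group=0 --numeric-owner \
      -cf "$1" -C "$workdir" pkg
}

build_tar "$workdir/a.tar"
sleep 1   # even with a delay between builds...
build_tar "$workdir/b.tar"
sha256sum "$workdir/a.tar" "$workdir/b.tar"  # ...the two hashes match
```

Without `--sort`/`--mtime`/ownership pinning (and, for .tar.gz, without stripping the gzip timestamp), each run embeds fresh metadata and the hash changes.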

@@ -0,0 +1,13 @@
ARG RPM_VER
FROM quay.io/freedomofpress/rpmbuilder:${RPM_VER}

Could we also pin the hash of the image?


👍

@msheiny msheiny dismissed conorsch’s stale review November 6, 2018 16:42

Needs to be re-reviewed

@conorsch conorsch removed the WIP label May 8, 2019
@conorsch (Contributor) commented May 8, 2019

Ready for review! I've rewritten the test plan, to reflect the current state of the changes as presented. The RPM built by this process is not currently hosted for direct installation, but merge of this PR will unblock that follow-up work.

Would appreciate re-review by @kushaldas, as well as fresh eyes from @emkll and perhaps @rmol.

conorsch previously approved these changes May 8, 2019
@conorsch conorsch dismissed their stale review May 8, 2019 01:05

Made many edits, requesting review from another team member

@kushaldas (Contributor, Author)

Will review it this week.

kushaldas and others added 12 commits May 15, 2019 10:37
Now one can use `python3 setup.py sdist` to create a tarball
Also updates the .gitignore file for the added spec files.
Before publishing the actual RPM, we will have to fix
#173
Also adds Makefile to the final package
This probably could use some tests :)
Very preliminary and needs tests.
The RPM dom0 package work was left unmerged for several months, so
it lagged behind master branch changes, including the conversion of
sd-journalist -> sd-proxy.

Fixes a few small issues:

  * Python 3.5.4 showed a unicode error on README
  * Example JSON config file wasn't named correctly
Rather than sprinkle in numerous volume mounts into the build container,
let's mount the entire git repo, and copy artifacts within the container
to locations as necessary.
By default, new Saltstack .top files (the config files that map tasks to
target hosts/VMs) will be skipped. We must explicitly enable them for
the salt runs via qubesctl to include the logic shipped in the package.
Let's port the find/enable logic from the Makefile and place that in the
RPM spec, under the `%post` macro, which is the equivalent of a `postinst`
script in a Debian package.
The catch-all name "securedrop-workstation" can refer to many different
things, including the entirety of the workstation setup. The RPM package
containing the VM config logic for setting up the various application
components is specific to dom0 (in Qubes parlance), so we'll include
dom0 in the package name, to be consistent with e.g.
`qubes-gpg-split-dom0`.
Disables networking on the container image. We're using Fedora-25, which
is EOL, and the build process doesn't require network, so let's disable
it.

Also moves the pinned container hash into the build script, since it
didn't need to be in the Makefile.
Updates the Makefile targets to enable testing of the manually installed
dom0 RPM package for salt configs. Previously, the `make all` action
would clobber salt files with the contents of the locally cloned git
dir. Now, the `prep-salt` action is a bit more conservative: it will
only copy in the secrets (which are not handled by the RPM), and ignore
the salt dirs if the target dir already exists.

If `make clean` is run before `make all`, then local files will be
copied over.

Also updated the `make clean` logic to purge scripts and uninstall the
dom0 config RPM if it's present.
@conorsch (Contributor)

Rebased on top of latest master (300816f), to resolve minor conflicts due to merge of #251. Ready for review.

find /srv/salt -maxdepth 1 -type f -iname '*.top' \
| xargs -n1 basename \
| sed -e 's/\.top$$//g' \
| xargs qubesctl top.enable > /dev/null
@kushaldas (Contributor, Author)

@conorsch Won't this step take too much time?

@conorsch (Contributor)

@kushaldas Try running it:

[user@dom0 securedrop-workstation]$ time { sudo find /srv/salt -maxdepth 1 -type f -iname '*.top' \
> | xargs -n1 basename \
> | sed -e 's/\.top$//g' \
> | xargs sudo qubesctl top.enable ; }

On my machine, it takes about 4s:

real	0m4.380s
user	0m1.651s
sys	0m2.443s

Even if it takes much longer, we must mark those top files as "enabled," otherwise they'll be disregarded by qubesctl actions for enforcing configurations.
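The name-extraction part of that pipeline can also be checked locally without Qubes. A sketch with made-up .top filenames (note the single `$` here versus the `$$` escaping required inside a Makefile):

```shell
# Demo: strip the directory and the .top suffix, producing the bare
# names that get fed to `qubesctl top.enable`. Filenames are invented.
saltdir="$(mktemp -d)"
touch "$saltdir/sd-workstation.top" "$saltdir/sd-proxy.top" "$saltdir/misc.sls"

find "$saltdir" -maxdepth 1 -type f -iname '*.top' \
  | xargs -n1 basename \
  | sed -e 's/\.top$//g' \
  | sort
# prints:
#   sd-proxy
#   sd-workstation
```

The .sls file is ignored by the `-iname '*.top'` filter, so only the top files are enabled.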

@kushaldas (Contributor, Author) left a comment

Finally, this is approved from my side. But as I created the original PR, GitHub will not allow me to approve my own PR. @conorsch or @redshiftzero or @emkll, please approve this one in the UI.

🦄 👍

@conorsch conorsch self-requested a review May 22, 2019 16:27
@conorsch (Contributor) left a comment

@kushaldas and I collaborated on this PR, and we're both satisfied with the work. Approving formally to unblock merge.


Successfully merging this pull request may close these issues.

Package dom0 configuration as RPM
5 participants