What is the "appropriate location" for the root ca? #519
Certificates are stored in the lme_certs volume. Inspect the volume to find its location; inside you will see storage locations for the ca, elasticsearch, kibana, etc. Replace certs in this volume and they will be replicated out to all containers using them. Ensure permissions for these files match what they were before. Don't forget that your new certs must contain the IP address/hostname of your choice -- and add the CA to your trusted stores. This is how you get rid of the 'untrusted cert' error when browsing: https://learn.microsoft.com/en-us/skype-sdk/sdn/articles/installing-the-trusted-root-certificate |
I understand this is not supported. All I really want is for our web interface to use our cert so browsers won't alarm. I have not been able to replace the self-signed certs with our own corp-signed certs. My process is this:
1. Generate .csr files from the pre-existing .key files.
2. Sign the .csr files with our CA.
3. Replace the self-signed certs.
4. Replace the ca.crt with our CA certificate.
5. Generate the elasticsearch.chain.pem file.
6. Fix up ownership and permissions.
7. Start lme.service.
For whatever reason Kibana can never fully start. I tried replacing the Kibana cert only, but for some reason that causes Kibana to start and restart repeatedly. While started, it seems to work until it restarts. |
Check your permissions on the files. Often servers will not start up if they aren't secure enough. Just do an ls -l on the existing ones and check users and permissions.
Thanks,
Clint Baxley |
Do you have to use your corp-signed certs? If you HAVE to use your own corp-signed certs, we will have to look into what the process would look like to retroactively replace certs. But it should be something as described above, and as cbaxley said it may be a permissions issue after you replace the certs in the volume: your containers must be able to read those certs the same as they did before. You must also include all the private docker domain names as before, or the containers will not be able to speak to each other. See the instances.yml file to see what those would look like.
If you don't have to, and are at a place where you can do a full re-install (it's not that big of a deal if you aren't collecting data yet and are just testing things out), you could just utilize the instances.yml file while still using self-signed certs: https://github.com/cisagov/LME/blob/main/config/setup/instances.yml
This file will be located in LME/config/setup after you download LME, and by default it handles the localhost IP address and the private docker network domain names. This means these certs really only work via private docker network communication, or if you're actually ON the server going to https://localhost. But you can add anything you want to reach it from OUTSIDE that private network as well -- for instance the actual IP of your server, or even a domain name. Then those certs will also work for that domain name / IP address. You would just add that to the kibana section under dns and ip. Then ensure your CA is loaded in the trusted store of all the devices that will access it. For instance, in instances.yml:
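The instances.yml fragment referenced here was not captured in the thread. An illustrative sketch (not the literal file from the repo; the extra FQDN and LAN IP are placeholder assumptions you would substitute with your own):

```yaml
instances:
  - name: "elasticsearch"
    dns:
      - "lme-elasticsearch"   # private docker network name
      - "localhost"
    ip:
      - "127.0.0.1"
  - name: "kibana"
    dns:
      - "lme-kibana"          # private docker network name
      - "localhost"
      - "lme.example.internal" # added: your intranet FQDN (placeholder)
    ip:
      - "127.0.0.1"
      - "10.0.0.50"            # added: your server's LAN IP (placeholder)
```

Anything listed under dns/ip ends up in the generated cert's subject alternative names, which is what lets browsers outside the docker network trust the hostname.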
Then run the install process as you did before. Add the ca to the trusted stores of devices that will access kibana, then when you go to your.domain.name or https://ipaddress you should no longer get the browser error for untrusted certs. |
Also, I do not think replacing just ONE cert would work, as the containers talk to each other via their docker DNS names and use the CA to do that. You would need a cert signed by the same CA for each application. I haven't verified that with testing, but that would be my assumption. For instance, when you review the quadlet for kibana: https://github.com/cisagov/LME/blob/main/quadlet/lme-kibana.container you'll see it has env variables to talk to elasticsearch via the private docker DNS name at https://lme-elasticsearch:9200 -- and it points to the ca.crt in order to do that. |
I am specifically trying to use all corp-signed certificates. We don't add self-signed certs or CAs to our trust stores. I did maintain the DNS and IP entries listed in instances.yml when I was signing; I just added our LAN IP and intranet FQDN. I did check and fix the ownership and perms of the signed certificates after replacing the self-signed ones (step 6 above). I'm happy to do whatever troubleshooting is needed or provide more detail about the steps I took. Just know that the reason we turned to LME is we don't know enough about the ELK stack to figure it out on our own. |
So you may be able to do this by placing your CA in a zip at ~/LME/config/setup/ca.zip. During install, this location is completely cloned to /opt/lme, as you see here in the Ansible script:
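The Ansible snippet was not captured here. A hypothetical sketch of what such a clone task could look like (the task name and `clone_directory` variable are assumptions, not the literal playbook contents):

```yaml
# Hypothetical sketch -- not the literal task from the LME playbook.
- name: Copy config/setup into /opt/lme
  ansible.builtin.copy:
    src: "{{ clone_directory }}/config/"   # e.g. ~/LME/config/ (assumption)
    dest: /opt/lme/config/
    remote_src: yes
```

The point is only that everything under config/setup, including a ca.zip you drop there, would be carried along to /opt/lme during install.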
Then during install the CA is checked for existence, and if it doesn't exist one is created. See the init-setup.sh script: https://github.com/cisagov/LME/blob/main/config/setup/init-setup.sh It then uses this CA and the instances.yml to generate certs. My thought here is that instead of creating a new CA, it would just use your CA to generate each individual cert for elasticsearch, kibana, etc. What I think you would need to do in this scenario:
I hope that makes sense. The init-setup.sh could use some cleaning up to integrate this better, but we will need time to test/implement. |
This entire idea may need some quick testing -- I should be able to do this in a few minutes as a proof of concept. The path may actually be this: ~/LME/config/setup/certs/ |
That doesn't appear to be the solution, as the certs are mounted to the lme_certs volume, not the path previously thought. This will take some more time to look at. |
Here are the steps I've just tried; they did not result in a corp-signed set of certificates. I created my own ca.key and ca.crt signed by our corporate root CA. When I connect with a browser, it's not using the certs signed by us. I have tried the procedure with several different directories and none worked. It's easy enough to try other directories if you can tell me where. The init-setup.sh says /usr/share/elasticsearch/config/certs, but that elasticsearch tree doesn't exist, so I assume it's some scripting magic happening somewhere else I'm not aware of. |
So that path is in the container not on your host. When you review a quadlet you'll see this: https://github.com/cisagov/LME/blob/main/quadlet/lme-setup-certs.container
This is saying mount lme_certs volume on the host machine to that location within the container. To find the location on your host machine you would run:
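The command is elided above. Assuming rootful podman, as used elsewhere in this thread, it would likely be along these lines (the `--format` template just extracts the mount point; the `_data` path matches the one quoted later in this thread):

```shell
# Inspect the volume to find where it lives on the host:
sudo -i podman volume inspect lme_certs --format '{{ .Mountpoint }}'

# Then list the per-service cert directories inside it:
sudo -i ls -l /var/lib/containers/storage/volumes/lme_certs/_data
```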
This should give you the volume's location on the host. You can then list that directory; from here you will see ALL your certs used across all services, i.e. ca, elasticsearch, fleet-server, etc. In theory you could just replace each one here in the volume and it would replicate to each of your services -- but I'm not sure what that might break if we're missing something that happens during install, hence trying to get it to work during install. As a workaround, I have created a script that you will use to replace the init-setup.sh script before running the install ansible yaml. First, uninstall using the uninstall steps.
Replace the contents of init-setup.sh (located at ~/LME/config/setup/init-setup.sh) with this code instead:
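The replacement script itself was not captured in this thread. Based on the behavior described earlier (skip CA creation when a CA is already present, then generate per-service certs from instances.yml), a hypothetical sketch of its core logic might look like this (paths and flags reconstructed, not the author's actual script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch, not the actual modified init-setup.sh.
CERTS_DIR=/usr/share/elasticsearch/config/certs

# Only create a CA if one was not provided ahead of time.
if [ ! -f "$CERTS_DIR/ca/ca.crt" ]; then
  bin/elasticsearch-certutil ca --silent --pem --out "$CERTS_DIR/ca.zip"
  unzip "$CERTS_DIR/ca.zip" -d "$CERTS_DIR"
fi

# Generate per-service certs from instances.yml using whichever CA is present.
bin/elasticsearch-certutil cert --silent --pem \
  --in "$CERTS_DIR/instances.yml" \
  --ca-cert "$CERTS_DIR/ca/ca.crt" \
  --ca-key "$CERTS_DIR/ca/ca.key" \
  --out "$CERTS_DIR/certs.zip"
unzip "$CERTS_DIR/certs.zip" -d "$CERTS_DIR"
```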
Add your ca.crt and ca.key to this location. Now go through the install steps of setting up the .env file and running the ansible playbook. |
So to be clear, this can't be done post-install? The corporate ca must be provisioned prior to installation? I have already tried the approach of replacing all the certs in /var/lib/containers/storage/volumes/lme_certs/_data and that did not work as stated earlier. I will give this script a try. |
My assumption is that it SHOULD work... but that just hasn't been done or tested. I'd have to make sure your creation of a CA and subsequent creation of every other cert is in line with what we are currently doing with elasticsearch-certutil. It may take actually installing elasticsearch-certutil on the Ubuntu instance and then running CLI commands using the custom ca.crt and ca.key, THEN replacing all the certs in /var/lib/containers/storage/volumes/lme_certs/_data. |
This is how you can manually generate certs with elasticsearch-certutil. Make a directory, then run the command (you'll need to adjust the path to your user). This assumes you have your ca.crt, ca.key, and an updated instances.yml in ~/LME/config/setup; it will generate your certs into ~/LME/config/setup/generated_certs. You should be able to manually move these into the volume location. Don't forget to move your ca.crt and ca.key as well. I ended up having to also adjust permissions.
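The directory and command were elided above. A plausible reconstruction, running certutil from the stock elasticsearch image via podman -- `/home/YOURUSER` is a placeholder for your home directory and the image tag is an assumption; adjust both to your install:

```shell
mkdir -p ~/LME/config/setup/generated_certs

# Run elasticsearch-certutil from the container, mounting config/setup inside it.
sudo -i podman run --rm \
  -v /home/YOURUSER/LME/config/setup:/certs \
  docker.elastic.co/elasticsearch/elasticsearch:8.12.2 \
  bin/elasticsearch-certutil cert --silent --pem \
    --in /certs/instances.yml \
    --ca-cert /certs/ca.crt \
    --ca-key /certs/ca.key \
    --out /certs/generated_certs/certs.zip

# certutil emits a zip of per-instance PEM directories; unpack it in place:
cd ~/LME/config/setup/generated_certs && sudo unzip certs.zip
```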
This APPEARS to be working for me -- replacing the certs AFTER install |
At first I got the following result. This was because the self-signed certificate was not trusted by the server OS.
After adding the self-signed ca.crt to the server's CA store, the error is now:
|
try just pulling the public elasticsearch image instead:
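The pull command itself was elided; it would presumably be something like this (the version tag is an assumption -- use the one your install pinned):

```shell
sudo -i podman pull docker.elastic.co/elasticsearch/elasticsearch:8.12.2
```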
This CA doesn't have anything to do with pulling the image, so it shouldn't have an impact on that. It's using podman. Does your org use a proxy with TLS inspection? Because that can complicate things. |
For testing purposes, I removed any TLS inspection for this machine and the result is the same. I am an absolute novice with podman, but the original command given seems to be pulling from a service internal to this machine. However, if the service is using an untrusted certificate, it makes sense that there would be an issue. This last modification executed without error. The browser trusts the Kibana web interface. However, I'm back where I've been before with "Kibana server is not ready yet."
provides the following error:
|
Yes, it should be pulling internally -- that image should exist in the local repo on the machine. I'm not exactly sure why you would have TLS issues to a local repo, as I don't get this error on my Ubuntu machine. The other error is because the cert isn't talking correctly with elasticsearch. Did you cp all the certs generated in the generated_certs path and replace them in the volume? And adjust the permissions? You can also run sudo -i podman logs lme-elasticsearch and sudo -i podman logs lme-kibana and paste the logs in here. |
I'm going to run through all my steps again on an install and verify |
Verify ca.crt and ca.key are in config/setup, then run the podman command. You should now have these certs in generated_certs (you won't have all of these -- some, like 'keycloak', are for my local testing). Move the CA and the certs to replace them in the volume -- adjust to match your home directory.
Adjust permissions.
Exit sudo, then restart lme.service.
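The commands for these steps were elided. A sketch under the same assumptions as before -- `/home/YOURUSER` is a placeholder, and the chmod modes are examples; the authoritative values are whatever `ls -l` showed on the original files before replacement:

```shell
# As root (sudo -i): replace the certs in the volume.
cp -r /home/YOURUSER/LME/config/setup/generated_certs/* \
      /var/lib/containers/storage/volumes/lme_certs/_data/
cp /home/YOURUSER/LME/config/setup/ca.crt \
   /home/YOURUSER/LME/config/setup/ca.key \
      /var/lib/containers/storage/volumes/lme_certs/_data/ca/

# Restore ownership/permissions to match the originals (modes below are examples).
find /var/lib/containers/storage/volumes/lme_certs/_data -name '*.crt' -exec chmod 644 {} \;
find /var/lib/containers/storage/volumes/lme_certs/_data -name '*.key' -exec chmod 660 {} \;

# Leave the root shell, then restart the service.
exit
sudo systemctl restart lme.service
```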
I don't get the same error you're seeing; from the browser I'm seeing my custom cert details. |
I followed your steps with one adjustment. I had to include our corp-signed root and intermediate cert in the elasticsearch.chain.pem. If I don't, elasticsearch won't even start. Requested logs are attached. |
Does your ca.crt, located in /var/lib/containers/storage/volumes/lme_certs/_data/ca/ca.crt, also contain the intermediate and root CA details? |
It is signed by our intermediate, but it is not a chain file. |
My theory is kibana or the container itself doesn't trust our root. I wouldn't know how to achieve that. |
I would attempt to do this by including all the certs in the ca.crt to make it a chain. For some reason Kibana can't find the issuer. The order of your elasticsearch chain would be: server cert, then intermediate, then root.
And your custom ca.crt would be: intermediate, then root.
Granted, I'm not really an expert on certs in an organization like this -- but it's unable to determine the issuer, and that's the only thing I can think of. You can just copy the contents of those certs into the ca.crt. The same thing is done in Ubuntu when trusting certs: /etc/ssl/certs/ca-certificates.crt is just a chain of a bunch of certs. |
There it is. I can log in now. My commands:
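The commands themselves were not captured above. Based on the chain order described in the preceding comments, the fix amounts to concatenating the PEM files in order. A stand-in demonstration -- the printf lines create placeholder files so the ordering is visible; real runs would use the actual certificate files with these (assumed) names:

```shell
# Stand-in PEM files so the ordering is visible (real runs use actual certs).
printf 'LEAF\n' > elasticsearch.crt
printf 'INTERMEDIATE\n' > intermediate.crt
printf 'ROOT\n' > root.crt

# elasticsearch.chain.pem: server cert first, then intermediate, then root.
cat elasticsearch.crt intermediate.crt root.crt > elasticsearch.chain.pem

# The ca.crt handed to the containers becomes a chain of intermediate + root.
cat intermediate.crt root.crt > ca.crt
```

With real certificates, copy the resulting files into the volume's ca/ and elasticsearch/ directories as in the earlier steps.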
|
Great -- now, when you install Elastic agents, we typically have users add --insecure to the end of the install command for development/testing (which keeps TLS but skips verification). If your endpoints already have this CA trusted, you should be able to omit --insecure and it will verify successfully using your org's certs that are already installed. |
Alright, here is the whole thing in one procedure for posterity: After ansible-playbook -K ~/LME/ansible/install_lme_local.yml:
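The consolidated procedure itself was not captured in the scrape. Assembled from the steps earlier in this thread (paths, image tags, and permission modes are the same assumptions noted there), the outline is roughly:

```shell
# 1. Place corp-signed ca.crt/ca.key and an updated instances.yml in ~/LME/config/setup
# 2. Generate per-service certs with elasticsearch-certutil (run via the
#    elasticsearch podman image) into ~/LME/config/setup/generated_certs
# 3. Build the chains:
#      elasticsearch.chain.pem = server cert + intermediate + root
#      ca.crt                  = intermediate + root
# 4. Copy everything into /var/lib/containers/storage/volumes/lme_certs/_data
#    and restore the original ownership/permissions
# 5. Restart the stack:
sudo systemctl restart lme.service
```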
|
This covers post-install. I'm not sure about the pre-install method above. I have not been able to install from scratch for several days due to some error in the process. |
For the pre-install method, the script would have to be updated to also include copying of root CAs and intermediate CAs into the final ca.crt. This should be fine for you going forward, as any updates to your certs you'd probably want to do post-install without an uninstall anyway. |
I think it's important to note that elasticsearch-certutil wants the ca.crt file to be a single certificate. It threw an error when I tried to give it a chain. That's why my process assembles the ca chain afterward. |
That's pretty typical when generating certs using a CA and its associated key; it's normal to see chains combined afterwards. I'm going to add some docs on this to capture this process for other users in the same boat. |
Under the heading "Migrating from Self-Signed Certificates" on the "Certificates" page, it says "If the certs are signed, ensure you also include the root ca in the appropriate location as well." What is the appropriate location? This is an Ubuntu 22.04 install using the Ansible playbook. It is not clear to me whether we're talking about a location on the host or someplace in a container.
Please make this more clear.