- Should I use subdomains or /pathnames/ for my LA Services?
- Help! I don't have access to my Collectory Admin section, e.g. https://collections.my.site/admin/
- So what happens when I rerun the ansible? It doesn't make two versions of everything, does it?
- If I change some variable in my inventories, should I run all these roles and tasks again?
- Collectory cache issues
- How to increase the limit of upload size in some LA module
When you are planning your LA node you have to choose which domain and/or subdomains to use, and ultimately what your URLs will look like.
Some nodes prefer subdomain URLs like https://services.l-a.site and others prefer pathname URLs like https://l-a.site/services/. You can have a look at this LA node list for some production URLs. The choice is up to you.
Some more examples:
- https://biocache.l-a.site/ and https://biocache-ws.l-a.site/ (for the biocache webservices)
- or https://biocache.l-a.site/ and https://biocache.l-a.site/ws/
- or even https://l-a.site/biocache/ and https://l-a.site/biocache/ws/
The subdomain method tends to be better supported in general, probably because ALA mostly uses it. However, nowadays both options have some minor issues that we are trying to solve:
- if you always use subdomains for every LA module, you'll have to pay attention to the CORS configuration
- when you use /paths/ and you install several LA modules on the same server with the same hostname via ansible, the nginx_vhost role will overwrite previous vhost configurations that use the same hostname. See ala-install#256. Also, in quite a few places the default configuration assumes subdomains, so you might have to dig around and configure more things manually.
In other words, if you deploy https://biocache.l-a.site/ and later https://biocache.l-a.site/ws/ on the same server, in the end the nginx_vhost ansible role will only configure the last one (because the first is overwritten). After PR368 you can work around this by setting the new variable vhost_with_appname_conf to true.
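As a minimal sketch, that workaround would look something like this in an inventory (the group name is just the usual [all:vars] style; adapt it to your own inventory layout):

```ini
[all:vars]
# Name the vhost config files per application so the nginx_vhost role
# does not overwrite the previous vhost when several LA modules share
# one hostname (requires an ala-install version that includes PR368)
vhost_with_appname_conf = true
```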
But if you deploy https://biocache.l-a.site/ and https://biocache-ws.l-a.site/ you will run into an extra CORS issue that you have to solve later via a new nginx variable. If you deploy both on the same server, this machine should receive multiple names (biocache.l-a.site and biocache-ws.l-a.site), and the nginx role doesn't overwrite each service's vhost configuration.
ALA probably uses biocache and biocache/ws/ on different servers (same with bie/species), so they don't suffer from these issues.
So we have some Pull Requests (368 and 370) to ala-install to improve both situations with some extra ansible vars.
If you're using CAS, make sure to set auth_cookie_domain to the highest-level common part of the domain in your inventory. For example, set it to la.domain.com when using biocache.la.domain.com and collections.la.domain.com..., but to domain.com if using e.g. la-collection.domain.com and la.domain.com/records.
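As a minimal sketch, assuming subdomains under la.domain.com (the domains are placeholders, only auth_cookie_domain is the variable discussed here):

```ini
[all:vars]
# CAS cookies are shared across *.la.domain.com, so every LA subdomain
# (biocache.la.domain.com, collections.la.domain.com, ...) sees the session
auth_cookie_domain = la.domain.com
```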
First, check that you have the correct roles assigned to your user, see User-Roles-and-Services. Adding only the ADMIN role is actually not enough to get admin rights to manage the collectory.
But if you have the correct roles and you still can't create collections, etc., it may be a problem with the authentication cache. Try logging out and in again, or logging in with a different browser.
It's quite common to rerun ansible with ala-install from time to time, for instance to upgrade some LA service, etc.
In general you repeat the same tasks. If you changed some variable in your inventories, it will reconfigure your services using it. If you update ala-install you will use new versions and/or configuration variables.
The only exceptions are probably solr, cassandra (the biocache backend), geoserver, and nameindex:
- solr: because ala-install tries to create the cores again and fails. Use --skip (see below)
- cassandra: because old versions removed the cassandra data, so it's better to skip it if you have already ingested data (this was done because, if you have a cluster, you have to start from zero on new cassandra nodes). It's safe to rerun it if you are using a recent version of ala-install.
- nameindex: you repeat the same task (download, unzip, backup of the previous nameindex) and in the end you run out of disk because /data/lucene is full of nameindex copies.
- geoserver (in spatial): the password is not set correctly on reinstall, causing geoserver not to start up, see issue 556. The workaround in this case for new LA portals is to remove /data/geoserver.
So it's better to use --skip-tags nameindex,solr7_create_cores,cassandra,geoserver if you are using ansible-playbook, or --skip=nameindex,solr7_create_cores,cassandra,geoserver if you are using the LA generator ansiblew wrapper.
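Put together, a rough sketch of both invocations (the inventory, playbook, and target names below are placeholders, not taken from this wiki):

```sh
# Plain ansible-playbook: skip the roles/tags that misbehave on reruns
ansible-playbook -i inventories/my-site ala-demo.yml \
  --skip-tags nameindex,solr7_create_cores,cassandra,geoserver

# LA generator ansiblew wrapper: the equivalent option uses '='
# (the exact invocation depends on your generated scripts)
./ansiblew all --skip=nameindex,solr7_create_cores,cassandra,geoserver
```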
You can also use --tags properties with ansible-playbook (or -p with the ansiblew wrapper) if you only want to propagate changes to properties variables. This will not work with CAS and Spatial because of the way these playbooks are written, including roles; see tag inheritance.
It depends. If it is only some minor adjustment (like some minor service parameter, etc.), you can run ansible with --tags properties.
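For example, a properties-only rerun could look like this (again, inventory and playbook names are placeholders):

```sh
# Only re-render the configuration/properties files, without re-running the full roles
ansible-playbook -i inventories/my-site ala-demo.yml --tags properties

# Equivalent with the ansiblew wrapper
./ansiblew all -p
```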
Some ansible roles restart tomcat after changing the properties (others don't). A faster way to restart a tomcat service is to "touch" its war or a deployed file, like:
touch /var/lib/tomcat7/webapps-species.l-a.site/ROOT.war
to reload that service. This is much faster than service tomcat7 restart.
Other services have an /alaAdmin/ interface with a button to reload the configuration.
There is also some weird behavior with the collectory and biocache caches when running on the same server.
If the collectory cache is disabled you'll get IDs instead of collection or institution names in some parts of biocache searches, like charts and facets.
You have to increase the limit in all the components involved in the service (like proxies, grails, etc.).
Via ansible you can configure nginx with:
nginx_client_max_body_size = 600m
and in some LA modules like spatial-hub and spatial-service you can also configure their grails upload limit via the ansible variable:
max_request_size = 614400000
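Put together in an inventory, a minimal sketch could look like this (the values are just examples; check which of your modules actually honour max_request_size):

```ini
[all:vars]
# nginx: allow request bodies (uploads) up to 600 MB
nginx_client_max_body_size = 600m

# spatial-hub / spatial-service: grails upload limit in bytes (~600 MB)
max_request_size = 614400000
```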