Open-Source Code Repository links #
The release package of GSS Suite is available on a dedicated Open-Source GitHub repository:
Alternatively, the GSS components can be downloaded from the links below. Please note that, in addition to the Ingest, Catalogue, Admin API and Notification components, the Toolbox is also available: it is needed to initialize the databases, since GSS uses externalized databases:
- https://repository.gael-systems.com/repository/thirdparty/fr/gael/gss/cdh-compose/<VERSION>/cdh-compose-<VERSION>.zip
- https://repository.gael-systems.com/repository/thirdparty/fr/gael/gss/cdh-ingest/<VERSION>/cdh-ingest-<VERSION>.zip
- https://repository.gael-systems.com/repository/thirdparty/fr/gael/gss/cdh-catalogue/<VERSION>/cdh-catalogue-<VERSION>.zip
- https://repository.gael-systems.com/repository/thirdparty/fr/gael/gss/cdh-admin-api/<VERSION>/cdh-admin-api-<VERSION>.zip
- https://repository.gael-systems.com/repository/thirdparty/fr/gael/gss/cdh-notification/<VERSION>/cdh-notification-<VERSION>.zip
- https://repository.gael-systems.com/repository/thirdparty/fr/gael/gss/cdh-toolbox/<VERSION>/cdh-toolbox-<VERSION>.zip
- https://repository.gael-systems.com/repository/thirdparty/fr/gael/gss/cdh-stac-api/<VERSION>/cdh-stac-api-<VERSION>.zip
Public Docker Repository links #
The relevant Docker images are available on dedicated Public Docker Hub repositories:
- https://hub.docker.com/r/gaeldockerhub/cdh-ingest/tags
- https://hub.docker.com/r/gaeldockerhub/cdh-catalogue/tags
- https://hub.docker.com/r/gaeldockerhub/cdh-stac-api/tags
- https://hub.docker.com/r/gaeldockerhub/cdh-admin-api/tags
- https://hub.docker.com/r/gaeldockerhub/cdh-notification/tags
- https://hub.docker.com/r/gaeldockerhub/cdh-toolbox/tags
Installation and Configuration Guidelines #
Installation of the GSS suite takes place via docker compose. Examples of operational configuration files can be found in the .zip package of the GSS components, under the “operational-samples” folder.
A JSON examples folder is also provided, containing example JSON files for defining datastores, producers, consumers, quotas, subscriptions, evictions and deletions via the Admin API component.
Software pre-requirements #
The following software must be pre-installed on the Virtual Machine:
- docker: 20.10.12 (or later)
- docker-compose: 1.29.0 (or later)
- a tool to unzip files
- Java 17
- a running SOLR 9.0 or later instance
- a running PostgreSQL instance, v10.12 or later
- JTS (Java Topology Suite)
- JQ package
- A running Keycloak instance for the authentication configured according to [COPE-SERCO-TN-21-1229 Keycloak Installation and Configuration Manual, v2.5]
- A running Kafka instance (3.3.1 or later)
- Zookeeper 3.8 or later
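The versions of the main prerequisites can be checked before installation; the sketch below is illustrative (the `version_ge` helper is not part of GSS, it assumes numeric dotted versions and GNU `sort -V`):

```shell
#!/bin/sh
# Illustrative prerequisite check; not part of the GSS distribution.
version_ge() {
    # true when version $1 >= version $2 (numeric dotted versions)
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

check() {  # check <name> <found_version> <minimum>
    if version_ge "$2" "$3"; then
        echo "$1 $2 OK (minimum $3)"
    else
        echo "$1 $2 is older than the required $3"
    fi
}

# Example usage; the version values would normally be parsed from
# `docker --version`, `docker-compose --version`, `java -version`, etc.
check docker         "20.10.12" "20.10.12"
check docker-compose "1.29.0"   "1.29.0"
check java           "17.0.2"   "17"
```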
Installation #
Toolbox #
As Administrator, access the VM where the CDH-Toolbox should be installed via SSH and download the CDH-Toolbox using the following command:
wget https://repository.gael-systems.com/repository/thirdparty/fr/gael/gss/cdh-toolbox/<VERSION>/cdh-toolbox-<VERSION>.zip
Unzip the file and enter the “db” folder within the “cdh-toolbox-<VERSION>” folder.
Launch the following command to create and update PostgreSQL database schema:
./database_updater.sh --url jdbc:postgresql://<database_server_url>:<database_port>/<database_name> --login *** --password ***
Enter the “solr” folder within the “cdh-toolbox-<VERSION>” folder and edit the solr_configuration.sh file, specifying SOLR_URL and SOLR_CORE. After this, load the Solr configuration file with the command:
source path-to-file/solr_configuration.sh
Finally, launch the following command to create the SOLR schema:
./solr_init.sh
To update the schema run:
./solr_create_schema.sh
Docker compose #
The release includes a docker compose file to install some GSS COTS (Zookeeper, Kafka, Solr, PostgreSQL); however, here we assume that these COTS are already installed, according to [COPE-SERCO-TN-23-1461 GSS COTS Installation, v1.4].
Admin API #
Go to the “admin” directory and configure the application.properties file.
To launch the admin API instance, execute the command:
nohup docker-compose up &
To stop the Admin API launch:
docker-compose down
Ingestion #
Admin API configuration
Go to “ingest” directory and configure the docker-compose.yml and the docker-compose-with-database-configuration.yml files.
Please note that ingesters and stores must be configured beforehand using the Admin API component.
Launch one ingestion process with the following command:
nohup docker-compose up <Ingestion_component_name> &
The following command can be used to start different services separately:
docker-compose up <service-1> <service-2>
where <service-X> is the name of the ingestion component to start, as defined in the docker-compose.yml file.
To stop the ingesters use the following command:
docker-compose down -t 180
XML configuration
Go to “ingest” directory and configure the docker-compose.yml, gss-producer.xml and gss-consumer.xml files.
Launch one producer and n consumers with the following command:
nohup docker-compose up -d --scale consumer=n &
The following command can be used to start different services separately:
docker-compose up <service-1> <service-2>
where <service-X> is the name of the producer/consumer to start, as defined in the docker-compose.yml file.
To stop the ingesters use the following command:
docker-compose down -t 180
OData Catalogue #
Admin API configuration
Go to the “catalogue” directory. Configure the “application.properties” file with the name of the ingestion instances created with the CDH-Admin-API component.
Configure the docker-compose.yml catalogue compose file.
To launch a catalogue instance, execute the command:
nohup docker-compose up &
To stop the catalogue, execute the command:
docker-compose down
XML configuration
Go to the “catalogue” directory. Configure the gss-catalogue.xml and the application.properties files.
Configure the docker-compose.yml catalogue compose file.
To launch a catalogue instance, execute the command:
nohup docker-compose up &
To stop the catalogue, execute the command:
docker-compose down
STAC Catalogue #
Admin API configuration
Go to the “stac” directory. Configure the “application.properties” file with the name of the ingestion instances created with the CDH-Admin-API component and set parameter useDbConfiguration = true.
Configure only collections in the gss.xml file.
Configure the docker-compose.yml catalogue compose file.
To launch a catalogue instance, execute the command:
nohup docker-compose up &
To stop the catalogue, execute the command:
docker-compose down
XML configuration
Go to the “stac” directory. Configure the gss-catalogue.xml and the application.properties files.
Configure the docker-compose.yml catalogue compose file.
To launch a catalogue instance, execute the command:
nohup docker-compose up &
To stop the catalogue, execute the command:
docker-compose down
Notification #
Go to the “notification” directory. Configure the gss-catalogue.xml and the consumer-for-notification.properties files.
To launch the notification instance, execute the command:
nohup docker-compose up &
To stop the notification, execute the command:
docker-compose down
Configuration #
Please refer to [GAEL-P311-GSS-CDH-Administration Manual – Collaborative Data Hub Software GSS Administration Manual, v2.2.2] for details about the configuration of the GSS components, to [GAEL-P311-GSS – Collaborative Data Hub Software GSS STAC Catalog Access ICD, v1.8] for the GSS STAC APIs, to [COPE-SERCO-TN-23-1461 GSS COTS Installation, v1.4] for COTS configuration, and to [COPE-SERCO-TN-21-1229 – Keycloak Installation and Configuration Manual, v2.5] for Keycloak configuration.
User Guide #
Datastore (HFS, S3 and SWIFT Object Storage) #
Below are some useful example queries to manage HFS, S3 and SWIFT Object Storage datastores. Please note that an access token, referred to as “AT” in the queries below, must be requested for the requests to execute correctly.
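The access token “AT” used throughout the examples below is issued by the Keycloak instance listed in the software prerequisites. The sketch below is a hedged illustration only: the realm, client id and grant type are assumptions, not GSS-defined values; the jq extraction is shown on a sample response:

```shell
# Hypothetical token request; realm, client id and grant type are placeholders:
# AT=$(curl -s -X POST \
#   "https://<keycloak_url>/realms/<realm>/protocol/openid-connect/token" \
#   -d "grant_type=password" -d "client_id=<client_id>" \
#   -d "username=<user>" -d "password=<password>" | jq -r '.access_token')

# The jq extraction can be exercised on a sample token response:
sample='{"access_token":"<token_value>","expires_in":300,"token_type":"Bearer"}'
AT=$(printf '%s' "$sample" | jq -r '.access_token')
echo "$AT"   # the value to pass in the Authorization header
```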
- To list datastores (HFS, S3 and SWIFT):
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server_url>:<server_port>/datastores/<datastore_type>" | jq
- To list a specific datastore (HFS, S3 and SWIFT):
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/datastores/<datastore_type>/<datastore_name>" | jq
- To create HFS datastore for Quicklook:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/datastores/hfs" -d
' {
"name": "<quicklook_datastore_name>",
"permission": [
"READ",
"WRITE",
"DELETE"
],
"properties": {
"property": [
{
"name": "STORE_ATTACHED_FILES",
"value": "true"
}
]
},
"path": "/path/to/folder",
"depth": 0,
"granularity": 2,
"parentGroup": null,
"type": "HFS"
}' | jq
- To create HFS datastore for Products:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/datastores/timebased" -d
' {
"permission": [
"READ",
"WRITE",
"DELETE"
],
"properties": {
"property": [
{
"name": "EVICT_REFERENCE",
"value": "true"
},
{
"name": "EVICT_ATTACHED_FILES",
"value": "true"
}
]
},
"name": "<timebased_datastore_name>",
"filter": ".*",
"policy": "BASIC_STORE_PRIORITY_POLICY",
"children": [
{
"permission": [
"READ",
"WRITE",
"DELETE"
],
"properties": {
"property": [
{
"name": "EVICT_REFERENCE",
"value": "true"
},
{
"name": "EVICT_ATTACHED_FILES",
"value": "true"
},
{
"name": "STORE_BY_NAME",
"value": "true"
},
{
"name": "SAVE_AS_MULTIPART",
"value": "false"
},
{
"name": "KEEP_PERIOD_SECONDS",
"value": "<keep_period_value>"
}
]
},
"name": "datastore_name",
"path": "/path/to/folder",
"depth": 0,
"granularity": 2,
"parentGroup": "<timebased_datastore_name>",
"type": "HFS"
}
],
"type": "TIME_GROUP"
}' | jq
- To create SWIFT datastore for Quicklook:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/datastores/swift" -d
' {
"name": "<quicklook_swift_datastore_name>",
"permission":
["READ","WRITE","DELETE"],
"properties": {
"property": [
{
"name": "STORE_ATTACHED_FILES",
"value": true
},
{
"name": "STORE_BY_NAME",
"value": true
}
]
},
"credentials": "<swift_credentials>",
"container": "<container_name>",
"prefixLocation": "instrument/productType/year/month/day",
"parentGroup": null,
"type": "SWIFT"
}' | jq
- To create SWIFT datastore for Products:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/datastores/timebased" -d
' {
"name":"<swift_timebased_datastore_name>",
"type":"timeBasedDataStoreGroupConf",
"permission": [
"READ","WRITE","DELETE"
],
"properties": {
"property": [
{
"name": "EVICT_REFERENCE",
"value": true
},
{
"name": "EVICT_ATTACHED_FILES",
"value": true
}
]
},
"filter": ".*",
"policy": "BASIC_STORE_PRIORITY_POLICY",
"children": [
{
"permission": [
"READ",
"WRITE",
"DELETE"
],
"properties": {
"property":
[
{
"name": "STORE_BY_NAME",
"value": true
},
{
"name": "SAVE_AS_MULTIPART",
"value": false
},
{
"name": "KEEP_PERIOD_SECONDS",
"value": "<keep_period_value>"
}
]
},
"name": "<swift_datastore_name>",
"credentials": "<swift_credentials>",
"filter": ".*",
"containerPattern": {
"type": "mapperPattern",
"patternMapper": "<Sentinel_mission_tag>.*:<container_name>"
},
"prefixLocation": "instrument/productType/year/month/day",
"parentGroup": "<swift_timebased_datastore_name>",
"type": "SWIFT_GROUP"
}
]
}' | jq
- To create S3 datastore for Quicklook:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/datastores/s3" -d
' {
"name": "<quicklook_s3_datastore_name>",
"permission":
["READ","WRITE","DELETE"],
"properties": {
"property": [
{
"name": "STORE_ATTACHED_FILES",
"value": true
},
{
"name": "STORE_BY_NAME",
"value": true
},
{
"name": "EXPOSE_IN_STAC",
"value": true
}
]
},
"credentials": "<s3_credentials>",
"container": "<container_name>",
"prefixLocation": "instrument/productType/year/month/day",
"parentGroup": null,
"type": "S3",
"bucket": "<bucket_name>"
}' | jq
- To create S3 datastore for Products:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/datastores/timebased" -d
' {
"name":"<s3_timebased_datastore_name>",
"type":"timeBasedDataStoreGroupConf",
"permission": [
"READ","WRITE","DELETE"
],
"properties": {
"property": [
{
"name": "EVICT_REFERENCE",
"value": true
},
{
"name": "EVICT_ATTACHED_FILES",
"value": true
}
]
},
"filter": ".*",
"policy": "BASIC_STORE_PRIORITY_POLICY",
"children": [
{
"permission": [
"READ",
"WRITE",
"DELETE"
],
"properties": {
"property":
[
{
"name": "STORE_BY_NAME",
"value": true
},
{
"name": "SAVE_AS_MULTIPART",
"value": false
},
{
"name": "KEEP_PERIOD_SECONDS",
"value": "<keep_period_value>"
},
{
"name": "EXPOSE_IN_STAC",
"value": true
}
]
},
"name": "<s3_datastore_name>",
"credentials": "<s3_credentials>",
"filter": ".*",
"containerPattern": {
"type": "mapperPattern",
"patternMapper": "<Sentinel_mission_tag>.*:<container_name>"
},
"prefixLocation": "instrument/productType/year/month/day",
"parentGroup": "<s3_timebased_datastore_name>",
"type": "S3_GROUP"
}
]
}' | jq
- To modify a datastore (HFS, S3 or SWIFT), use the whole JSON body and perform a PATCH:
curl -X PATCH -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/datastores/<datastore_type>/<datastore_name>" -d
'{JSON_BODY}' | jq
- To delete a datastore (HFS, S3 or SWIFT):
curl -X DELETE -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/datastores/<datastore_type>/<datastore_name>" | jq
Metadatastore #
Below are some useful queries to manage the Metadatastore. Please note that an access token, referred to as “AT” in the queries below, must be requested for the requests to execute correctly.
- To list metadatastore:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server_url>:<server_port>/metadatastores/solr" | jq
- To list a specific metadatastore:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/metadatastores/solr/<metadatastore_name>" | jq
- To create metadatastore:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/metadatastores/solr" -d
' {
"name": "<metadatastore_name>",
"permission": [
"READ",
"WRITE",
"DELETE"
],
"properties": null,
"strategies": null,
"hosts": "http://<IP>:<PORT>/solr",
"clientType": "SolrCloud",
"user": null,
"password": null,
"collection": "<collection_name>",
"defaultSort": null,
"defaultTop": 100,
"maxSkip": 10000,
"storage": null,
"visitorBuilder": "fr.gael.gss.core.store.solr.ProductSolrVisitorBuilder",
"transformer": "fr.gael.gss.core.store.solr.ProductSolrTransformer"
}' | jq
- To modify a metadatastore, use the whole JSON body and perform a PATCH:
curl -X PATCH -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/metadatastores/solr/<metadatastore_name>" -d
'{JSON_BODY}' | jq
- To delete a metadatastore:
curl -X DELETE -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/metadatastores/solr/<metadatastore_name>" | jq
Producer #
Below are some useful queries to manage the producer ingestion component. Please note that an access token, referred to as “AT” in the queries below, must be requested for the requests to execute correctly.
- To list producers:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server_url>:<server_port>/producers" | jq
- To list a specific producer:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/producers/<producer_name>" | jq
- To create producer (in this example there is a Producer for data gathering from Colhub datasource, for the other cases please see the “JSON EXAMPLES” files present in the distribution):
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/producers" -d
' {
"name": "<producer_name>",
"hosts": "<IP>:<PORT>",
"topic": "<topic_name>",
"pushInterval": 10,
"filter": ".*",
"processError": {
"active": true,
"retries": 0
},
"reprocess": false,
"dataSource": "DHuS",
"source": {
"sourceType": "fr.gael.gss.ingest.ingester.ProducerOdataConf",
"serviceRootUrl": "https://colhub.copernicus.eu/dhus/odata/v2",
"auth": {
"user": "<username>",
"password": "<password>",
"clientId": null,
"tokenEndpoint": null,
"type": "basic"
},
"top": 10,
"lastPublicationDate": "<Last_Publication_Date_value>",
"filter": "<OData_v2_Filter>",
"type": "dhus",
"assumedFormat": ".zip",
"fetchAttributes": false,
"fetchQuicklook": true,
"useDateFromDb": false,
"geoPostFilter": "<POLYGON>”
}
}' | jq
- To modify a producer, use the whole JSON body and perform a PATCH:
curl -X PATCH -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/producers/<producer_name>" -d
'{JSON_BODY}' | jq
- To delete a producer:
curl -X DELETE -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/producers/<producer_name>" | jq
Consumer #
Below are some useful queries to manage the consumer ingestion component. Please note that an access token, referred to as “AT” in the queries below, must be requested for the requests to execute correctly.
- To list consumers:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server_url>:<server_port>/consumers" | jq
- To list a specific consumer:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/consumers/<consumer_name>" | jq
- To create consumer (in this example there is a Consumer for data gathering from Colhub datasource, for the other cases please see the “JSON EXAMPLES” files present in the distribution):
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/consumers" -d
' {
"name": "<consumer-name>",
"parallelIngests": 10,
"hosts": "<IP>:<PORT>",
"groupId": "<topic-group-name>",
"topics": "<topic_name>",
"topicPattern": null,
"reprocess": false,
"pollIntervalMs": 40000,
"sourceDelete": false,
"ingestThreads": 3,
"source": {
"sourceType": "fr.gael.gss.ingest.ingester.ConsumerOdataConf",
"serviceRootUrl": "https://colhub.copernicus.eu/dhus/odata/v2",
"auth": {
"user": "<username>",
"password": "<password>",
"clientId": null,
"tokenEndpoint": null,
"type": "basic"
},
"type": "dhus",
"retriesOn429": 10,
"retryWaitOn429Ms": 5000
},
"taskList": [
{
"type": "fr.gael.gss.ingest.ingester.IngestInDataStores",
"pattern": ".*",
"stopOnFailure": true,
"tryLimit": 5,
"active": true,
"targetStores": "<timebased-datastore-name>"
},
{
"type": "fr.gael.gss.ingest.ingester.IngestInMetadataStores",
"pattern": ".*",
"stopOnFailure": true,
"tryLimit": 5,
"active": true,
"targetStores": "<metadatastore-name>"
},
{
"type": "fr.gael.gss.ingest.ingester.ExtractMetadata",
"pattern": ".*",
"stopOnFailure": true,
"tryLimit": 5,
"active": true,
"forceOnline": false
},
{
"type": "fr.gael.gss.ingest.ingester.CreateQuicklook",
"pattern": ".*",
"stopOnFailure": false,
"tryLimit": 5,
"active": true,
"onlyUseProvidedQL": true,
"height": 45,
"width": 45,
"targetStores": "<quicklook-datastore-name>"
}
],
"tmpPath": "/path/to/tmp/folder",
"errorManager": {
"errorLocation": "/path/to/error/folder",
"container": null,
"type": "folder",
"credentials": null
}
}' | jq
- To modify a consumer, use the whole JSON body and perform a PATCH:
curl -X PATCH -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/consumers/<consumer_name>" -d
'{JSON_BODY}' | jq
- To delete a consumer:
curl -X DELETE -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/consumers/<consumer_name>" | jq
Quota #
Below are some useful example queries to manage user quotas. Please note that an access token, referred to as “AT” in the queries below, must be requested for the requests to execute correctly.
- To list quota:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server_url>:<server_port>/quotas" | jq
- To list a user’s quota:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/quotas/<user>" | jq
- To create quota:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/quotas" -d
'{"name":"TOTAL_DOWNLOAD", "userId": "user", "value":20,
"duration":1}' | jq
- To create quota in bulk:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/quotas/bulk" -d
'[{"name":"TOTAL_DOWNLOAD", "userId": "user", "value":20,
"duration":1},{ "name":"TOTAL_DOWNLOAD", "userId": "user2",
"value":20, "duration":1}]' | jq
- To modify quota:
curl -X PATCH -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/quotas/{user}/{PARALLEL_DOWNLOAD}" -d
'{"value":15}' | jq
- To delete quota:
curl -X DELETE -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/quotas/{user}/{PARALLEL_DOWNLOAD}" | jq
- To delete quota in bulk:
curl -X DELETE -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/quotas/{user}" | jq
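For many users, the bulk payload shown above can be generated rather than written by hand. A minimal sketch using jq (which is already a software prerequisite); the user list and quota values are illustrative:

```shell
# Illustrative: build a TOTAL_DOWNLOAD quota entry for each user in the list.
users="user1 user2 user3"
body=$(printf '%s\n' $users | jq -R . | \
    jq -s 'map({name: "TOTAL_DOWNLOAD", userId: ., value: 20, duration: 1})')
echo "$body"
# The resulting array can then be POSTed to /quotas/bulk as shown above.
```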
OData Catalogue #
The GSS Catalogue exposes products according to the CSC OData API. Below are some useful examples:
- To list products:
https://<server_url>:<server_port>/odata/v1/Products
- To search a product by uuid:
https://<server_url>:<server_port>/odata/v1/Products(uuid)
- To search products by name:
https://<server_url>:<server_port>/odata/v1/Products?$filter=startswith(Name,'S1') and contains(Name,'SLC')
- To search products by date:
https://<server_url>:<server_port>/odata/v1/Products?$filter=PublicationDate%20gt%202023-03-15T11:44:31.854Z
- To list products by attributes (Date Attributes):
http://<server_url>:<server_port>/odata/v1/Products?$filter=Attributes/OData.CSC.DateTimeOffsetAttribute/any(att:att/Name eq '<attribute_name>' and att/OData.CSC.DateTimeOffsetAttribute/Value in (<attribute_value1>,<attribute_value2>))
- To list products by attributes (String Attributes):
http://<server_url>:<server_port>/odata/v1/Products?$filter=Attributes/OData.CSC.StringAttribute/any(att:att/Name eq '<attribute_name>' and att/OData.CSC.StringAttribute/Value eq '<attribute_value>')
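Filter expressions containing spaces or quotes must be URL-encoded before being sent, as in the PublicationDate example above where spaces appear as %20. When building requests in a script, jq (a software prerequisite) can perform the encoding; the filter value below is illustrative:

```shell
# Illustrative: URL-encode an OData $filter expression with jq's @uri.
filter="startswith(Name,'S1') and contains(Name,'SLC')"
encoded=$(jq -rn --arg f "$filter" '$f|@uri')
echo "https://<server_url>:<server_port>/odata/v1/Products?\$filter=$encoded"
```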
Subscription #
Use the “CDH-Catalogue” component to create a Subscription. The Notification component will use this subscription to send notifications to a configured endpoint.
- To list subscriptions:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server_url>:<server_port>/Subscriptions" | jq
- To list a specific subscription:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/Subscriptions(<id>)" | jq
- To create subscription:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/Subscriptions" -d
' {
"Status":"running",
"SubscriptionEvent": "deleted",
"FilterParam": "Products?$filter=<ODATA_FILTER>",
"NotificationEndpoint":"<ENDPOINT_WEB_ADDRESS>"
}' | jq
Subscriptions can be created for the following events:
- SubscriptionEvent: “deleted” -> for deleted and evicted products
- SubscriptionEvent: “created” -> for ingested products
- To pause subscription:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/Subscriptions(<id>)/OData.CSC.Pause" -d
'{}' | jq
- To resume subscription:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/Subscriptions(<id>)/OData.CSC.Resume" -d
'{}' | jq
- To cancel subscription:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/Subscriptions(<id>)/OData.CSC.Cancel" -d
'{}' | jq
Deletion #
Below are some useful example queries to manage product deletion. Please note that an access token, referred to as “AT” in the queries below, must be requested for the requests to execute correctly.
- To list deletion jobs:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server_url>:<server_port>/jobs" | jq
- To list a specific deletion job:
curl -v -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/jobs/<job_id>" | jq
- To create deletion job:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/jobs" -d
' {
"message": "<deletion_cause>",
"reason": "<reason_name>",
"status": "RUNNING",
"odataFilter": "$filter=<ODATA_FILTER>",
"nbThreads": <n_threads>
}' | jq
- To perform a deletion job dry run:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/jobs/<job_id>/$dryrun" -d
'{}' | jq
- To run deletion job:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/jobs/<job_id>/run" -d
'{}' | jq
- To pause deletion job:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/jobs/<job_id>/pause" -d
'{}' | jq
- To resume deletion job:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/jobs/<job_id>/resume" -d
'{}' | jq
- To cancel deletion job:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/jobs/<job_id>/cancel" -d
'{}' | jq
- To delete deletion job:
curl -X DELETE -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/jobs/<job_id>" | jq
Ingestion Management via Admin-API #
It is possible to start/stop producers and consumers, and to monitor ingestion, via the Admin API.
- To start a producer or a consumer:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/ingesters-management/<ingester-name>?action=start" -d '{}' | jq
- To stop a producer or a consumer:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/ingesters-management/<ingester-name>?action=stop" -d '{}' | jq
- To monitor ingestion:
curl -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/ingesters-management/<ingester-name>" -d '{}' | jq
Optional parameters can be appended to the URL using “?” and separated by “&”. The supported parameters include name, type, state, top, and skip.
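As an example of combining the optional parameters above, the query string can be assembled in a script before being appended to the monitoring URL; the parameter values shown are illustrative:

```shell
# Illustrative: build the optional query string from parameter values.
state="RUNNING"; top=10; skip=0
query="state=${state}&top=${top}&skip=${skip}"
echo "$query"
# curl -X GET -H "Authorization: Bearer ${AT}" \
#   "https://<server.url>:<port>/ingesters-management/<ingester-name>?${query}" | jq
```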
Eviction via Admin-API #
The eviction process can be either triggered automatically at startup or managed manually through the Admin API. It operates based on the DataStores configured in the database to determine which products to evict.
- Automatic eviction on startup: to enable this, set process.evictionByTime=true in the application.properties file of the Admin-API component. Please note that this must be set to false in the OData Catalogue component.
- Manual eviction: it is possible to start or stop the eviction process at any time using the Admin API.
To start eviction:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/eviction?action=start" -d '{}' | jq
To stop eviction:
curl -X POST -H "Authorization: Bearer ${AT}" -H "content-type:application/json"
"https://<server.url>:<port>/eviction?action=stop" -d '{}' | jq
To monitor eviction:
curl -X GET -H "Authorization: Bearer ${AT}" -H "content-type:application/json" "https://<server.url>:<port>/eviction" -d '{}' | jq
STAC Catalogue #
The STAC Catalogue publishes products in compliance with the STAC API specifications. Collections can be configured as needed. To filter items, the catalogue supports STAC API Collections Query Options for searching within a specific collection, and STAC API Search Query Options for filtering items across all collections.
Below are some useful examples:
- To list products from all collections:
http://<server.url>:<server.port>/<context-path>/stac/search
- To list products within a specific collection:
http://<server.url>:<server.port>/<context-path>/stac/collections/<collection-id>/items
- To search for a product by its ID and explore its associated nodes:
http://<server.url>:<server.port>/<context-path>/stac/collections/<collection-id>/items/<feature-id>
- To filter products by geographic bounding-box coordinates:
http://<server.url>:<server.port>/<context-path>/stac/search?bbox=<bbox-coordinates>
- To filter products by datetime:
http://<server.url>:<server.port>/<context-path>/stac/search?datetime=<datetime>
- To filter products by intersecting their geometry with a provided GeoJSON geometry:
http://<server.url>:<server.port>/<context-path>/stac/search?intersects={"type":"Polygon","coordinates":[<coordinates>]}
The same filters are available within each collection. Additional options include limiting the number of products per page with limit=<number>, paginating results using page=<page-number>, and sorting with the “sortby” parameter. Filters can be combined. For example:
http://<server.url>:<server.port>/<context-path>/stac/collections/<collection-id>/items?limit=<number>&sortby=-properties.updated
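A scripted version of a combined STAC search can assemble the parameters before the request; the values below are illustrative, and the jq path `.features[].id` assumes a standard STAC ItemCollection response:

```shell
# Illustrative: combine bbox, datetime and limit, then extract item ids.
bbox="9.0,44.0,10.0,45.0"
datetime="2023-03-01T00:00:00Z/2023-03-31T23:59:59Z"
url="http://<server.url>:<server.port>/<context-path>/stac/search?bbox=${bbox}&datetime=${datetime}&limit=10"
echo "$url"
# curl -s "$url" | jq -r '.features[].id'
```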
Log Aggregator #
GSS Suite components can send their logs to a Kafka topic; the logs can then be processed by consuming messages from this topic. To configure the log aggregator, modify the log4j2.xml file to specify the logging levels and the appenders to which logs are sent.
Tutorials #
Please find available Tutorials at the following link: