Likewise, when I run the oldb-gui database browser, it does not show this data point.

I try to access an OLDB datapoint, and I see two errors like this:

```plaintext
Target configuration does not exist: Failed to retrieve configuration from elastic search: Configuration
Target configuration does not exist: Failed to retrieve configuration from config storage: Configuration
[…]
elt.oldb.exceptions.CiiOldbDpExistsException: Data point cii.oldb:/alarm/alarm/device/motor/input_int_dp_alarm already exisits.
```
Go directly to Solution 2 below.

**Variant 3 of the Problem**

I try to delete an OLDB datapoint and I see an error like this:

```plaintext
CiiOldbPyB.CiiOldbException: De-serialization error:sizeof(T)*count is greater then remaining
```
Go directly to Solution 2 below.

**Background**

The two errors contradict each other: one reports that the datapoint's configuration does not exist, the other that the datapoint already exists.

Datapoints are stored in two databases: the config-storage (a.k.a. permanent store) for the metadata, and a key-value database (a.k.a. volatile store) for the current value. The above symptoms indicate that the two databases are out of sync, meaning the datapoint exists only "half".
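
Before digging further, it can help to confirm that both stores are actually reachable. The commands below are only a sketch for the default single-host setup assumed elsewhere on this page (Redis on its default port, Elasticsearch on localhost:9200); adjust host and port if your deployment differs:

```plaintext
# Volatile store (Redis): should answer PONG
redis-cli ping

# Permanent store (Elasticsearch behind the config service): status should be green/yellow
curl -s localhost:9200/_cluster/health | jq -r '.status'
```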
**Solution 1**

With DevEnv 4, which contains [ECII-500](https://jira.eso.org/browse/ECII-500), you may be able to delete the offending datapoint to clean up the situation:

```plaintext
#!/usr/bin/env python
import elt.config
import elt.oldb

# Enable writes and delete the half-existing datapoint
oldb_client = elt.oldb.CiiOldbFactory.get_instance()
elt.oldb.CiiOldbGlobal.set_write_enabled(True)

uri = elt.config.Uri("cii.oldb:/tcs/hb/tempser3")
oldb_client.delete_data_point(uri)

# Alternatively, from the shell:
#   testutil-oldb-datapoint del <URI>
```
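If several datapoints are affected, the command-line variant can be wrapped in a simple loop. This is only a sketch; the URIs shown are placeholders taken from the examples above and must be replaced with your own:

```plaintext
# Sketch: delete several half-existing datapoints in one go
# (the URIs are placeholders -- replace them with your own)
for uri in "cii.oldb:/tcs/hb/tempser3" "cii.oldb:/alarm/alarm/device/motor/input_int_dp_alarm" ; do
    testutil-oldb-datapoint del "$uri"
done
```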
**Solution 2**

If the above didn't help, find out which "half" of the datapoint exists:

1. The current value exists, and the metadata is missing. This is typically the case when upgrading DevEnv/CII without deleting the Redis cache.
2. The metadata exists, and the current value is missing.

Define the following shell functions (note: not applicable to Redis clusters):

```plaintext
# Permanent store (Elasticsearch): list/delete datapoint metadata matching a URI fragment
function oldb_ela_list { curl -s -X GET localhost:9200/configuration_instance/_search?size=2000\&q=data.uri.value:\"$1\" | jq -r '.hits.hits[]._id' | sort ; }
function oldb_ela_del { curl -s -X POST localhost:9200/configuration_instance/_delete_by_query?q=data.uri.value:\"$1\" | jq -r '.deleted' ; }

# Volatile store (Redis): list/delete datapoint values matching a key pattern
function oldb_red_list { redis-cli --scan --pattern "*$1*" ; }
function oldb_red_del { redis-cli --scan --pattern "*$1*" | xargs redis-cli del ; }
```
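To see at a glance which "half" of a datapoint exists, the two list functions can be combined. A small sketch, assuming the functions above are defined in your shell and using a datapoint name from the examples below:

```plaintext
dp=doubledp444
[ -n "$(oldb_red_list "$dp")" ] && echo "current value found in volatile store (Redis)"
[ -n "$(oldb_ela_list "$dp")" ] && echo "metadata found in permanent store (config storage)"
```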
Then check if the problematic key is in the volatile store:

```plaintext
# Search for path component of dp-uri (here: "device")
$ oldb_red_list device
... output will be e.g.:
/sampleroot/child/device/doubledp444
/sampleroot/child/device/doubledp445
/sampleroot/child/device/doubledp111
/sampleroot/child/device/doubledp2222

# If the problematic key is in the list, delete it:
$ oldb_red_del device/doubledp444
```
Otherwise, check if the problematic key is in the permanent store:

```plaintext
# Search for path component of dp-uri (whole-word search, e.g. "dev" would not match)
$ oldb_ela_list device
... output e.g.:
oldb___datapoints___sampleroot___child___device___doubledp446___1

# Delete the offending metadata
$ oldb_ela_del doubledp446

# After deletion, restart the internal config server
$ sudo cii-services stop config ; sudo cii-services start config
```
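After the restart it is worth re-running the list function to confirm that the entry is really gone. A quick sketch, reusing the example id from above:

```plaintext
# Should print nothing once the metadata has been deleted
$ oldb_ela_list doubledp446
```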
**Solution 3**

If none of the above helped, the remaining option is to re-initialise the OLDB databases.

WARNING: This is an invasive operation. It deletes all datapoints in the OLDB.

```plaintext
# Clean up the OLDB databases: re-initialise the permanent store (Elasticsearch) ...
config-initEs.sh
oldb-initEs
# ... flush the volatile store (Redis) ...
redis-cli flushall
# ... and restart the internal config server
sudo cii-services stop config
sudo cii-services start config
```
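If you want to double-check that the wipe worked, both stores can be inspected directly. A minimal sketch (same local single-node setup and index name as used by the helper functions above):

```plaintext
# Volatile store: number of remaining keys (should be 0 after flushall)
redis-cli dbsize

# Permanent store: number of stored configuration instances (should be 0, or the index may not exist yet)
curl -s localhost:9200/configuration_instance/_count | jq -r '.count'
```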
If you are dealing with a multi-user OLDB ("role_groupserver", meaning it serves an OLDB to a team of developers), after executing the above commands you additionally need to execute (with privileges):

```plaintext
/elt/ciisrv/postinstall/cii-postinstall role_groupserver
# The oldbReset tool can clean the whole OLDB (remove all datapoints) to return to a known state; -h shows its options
oldbReset -h
```
If you have doubts, please contact us.