Policy Training
Storage Tiering
Jason M. Coposky
@jason_coposky
Executive Director, iRODS Consortium
Policy Training
Storage Tiering
June 25-28, 2019
iRODS User Group Meeting 2019
Utrecht, Netherlands
iRODS Capabilities
Storage Tiering
A policy framework providing a scalable solution for data movement between storage resources
Storage Tiering Policy Components
Storage Tiering Overview
Example Implementation
Getting Started
Installing Tiered Storage Plugin
As the ubuntu user
Install the package repository
wget -qO - https://packages.irods.org/irods-signing-key.asc | sudo apt-key add -
echo "deb [arch=amd64] https://packages.irods.org/apt/ $(lsb_release -sc) main" | \
sudo tee /etc/apt/sources.list.d/renci-irods.list
sudo apt-get update
Install the storage tiering packages
sudo apt-get install irods-rule-engine-plugin-apply-access-time \
    irods-rule-engine-plugin-data-movement \
    irods-rule-engine-plugin-data-replication \
    irods-rule-engine-plugin-data-verification \
    irods-rule-engine-plugin-storage-tiering
Configuring the rule engine plugin
As the irods user
Edit /etc/irods/server_config.json
Note - Make sure storage_tiering is the only rule engine plugin listed above irods_rule_language.
"rule_engines": [
{
"instance_name": "irods_rule_engine_plugin-storage_tiering-instance",
"plugin_name": "irods_rule_engine_plugin-storage_tiering",
"plugin_specific_configuration": {
}
},
...
{
"instance_name": "irods_rule_engine_plugin-irods_rule_language-instance",
"plugin_name": "irods_rule_engine_plugin-irods_rule_language",
"plugin_specific_configuration": {
<snip>
},
"shared_memory_instance": "irods_rule_language_rule_engine"
},
...
]
Configuring the rule engine plugin
Add the other plugins after the tiering plugin
"rule_engines": [
...
{
  "instance_name": "irods_rule_engine_plugin-apply_access_time-instance",
  "plugin_name": "irods_rule_engine_plugin-apply_access_time",
  "plugin_specific_configuration": {
  }
},
{
  "instance_name": "irods_rule_engine_plugin-data_verification-instance",
  "plugin_name": "irods_rule_engine_plugin-data_verification",
  "plugin_specific_configuration": {
  }
},
{
  "instance_name": "irods_rule_engine_plugin-data_replication-instance",
  "plugin_name": "irods_rule_engine_plugin-data_replication",
  "plugin_specific_configuration": {
  }
},
{
  "instance_name": "irods_rule_engine_plugin-data_movement-instance",
  "plugin_name": "irods_rule_engine_plugin-data_movement",
  "plugin_specific_configuration": {
  }
},
...
]
Example Implementation
Three Tier Group with Random Resources
As the irods user
Make some resources
iadmin mkresc rnd0 random
iadmin mkresc rnd1 random
iadmin mkresc rnd2 random
iadmin mkresc st_ufs0 unixfilesystem `hostname`:/tmp/irods/st_ufs0
iadmin mkresc st_ufs1 unixfilesystem `hostname`:/tmp/irods/st_ufs1
iadmin mkresc st_ufs2 unixfilesystem `hostname`:/tmp/irods/st_ufs2
iadmin mkresc st_ufs3 unixfilesystem `hostname`:/tmp/irods/st_ufs3
iadmin mkresc st_ufs4 unixfilesystem `hostname`:/tmp/irods/st_ufs4
iadmin mkresc st_ufs5 unixfilesystem `hostname`:/tmp/irods/st_ufs5
iadmin addchildtoresc rnd0 st_ufs0
iadmin addchildtoresc rnd0 st_ufs1
iadmin addchildtoresc rnd1 st_ufs2
iadmin addchildtoresc rnd1 st_ufs3
iadmin addchildtoresc rnd2 st_ufs4
iadmin addchildtoresc rnd2 st_ufs5
Check the results
irods@hostname:~$ ilsresc
demoResc:unixfilesystem
rnd0:random
├── st_ufs0:unixfilesystem
└── st_ufs1:unixfilesystem
rnd1:random
├── st_ufs2:unixfilesystem
└── st_ufs3:unixfilesystem
rnd2:random
├── st_ufs4:unixfilesystem
└── st_ufs5:unixfilesystem
Create a Tier Group
Create a tier group named example_group, adding the metadata to the root resources
imeta set -R rnd0 irods::storage_tiering::group example_group 0
imeta set -R rnd1 irods::storage_tiering::group example_group 1
imeta set -R rnd2 irods::storage_tiering::group example_group 2
Set the Tiering Time Constraints
Configure tier 0 to hold data for 30 seconds
imeta set -R rnd0 irods::storage_tiering::time 30
Configure tier 1 to hold data for 60 seconds
imeta set -R rnd1 irods::storage_tiering::time 60
Tier 2 does not have a storage tiering time as it will hold data indefinitely
Testing the tiering
Stage some data into storage tier 0
iput -R rnd0 /home/ubuntu/irods_training/stickers.jpg
Check the results
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 0 rnd0;st_ufs0 2157087 2018-05-11.11:51 & stickers.jpg
Note the child resource of rnd0 may be different
irods@hostname:~$ imeta ls -d stickers.jpg
AVUs defined for dataObj stickers.jpg:
attribute: irods::access_time
value: 1526134799
units:
Sample Tiering rule
{ "rule-engine-instance-name": "irods_rule_engine_plugin-storage_tiering-instance", "rule-engine-operation": "irods_policy_schedule_storage_tiering", "delay-parameters": "<INST_NAME>irods_rule_engine_plugin-storage_tiering-instance</INST_NAME><PLUSET>1s</PLUSET><EF>1h DOUBLE UNTIL SUCCESS OR 6 TIMES</EF>", "storage-tier-groups": [ "example_group_g2", "example_group" ] } INPUT null OUTPUT ruleExecOut
JSON ingested by the Tiering Plugin
In production this rule would remain on the delay queue persistently
Launching the sample Tiering rule
Run the rule from the terminal
irule -r irods_rule_engine_plugin-storage_tiering-instance -F example_tiering_invocation.r
Check the delay queue
irods@hostname:~$ iqstat
id name
10038 {"rule-engine-operation":"apply_storage_tiering_policy","storage-tier-groups":["example_group_g2","example_group"]}
Wait for the delay execution engine to fire...
Check the resource for stickers.jpg
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 2 rnd1;st_ufs2 2157087 2018-05-12.10:22 & stickers.jpg
Launching the Tiering rule once more
The time for violation is 60 seconds for rnd1
irule -r irods_rule_engine_plugin-storage_tiering-instance -F example_tiering_invocation.r
Check the delay queue
irods@hostname:~$ iqstat
id name
10038 {"rule-engine-operation":"apply_storage_tiering_policy","storage-tier-groups":["example_group_g2","example_group"]}
Wait for the delay execution engine to fire...
Check the resource for stickers.jpg
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 3 rnd2;st_ufs4 2157087 2018-05-12.10:22 & stickers.jpg
Restaging Data
Fetching data when it is not in the lowest tier will automatically trigger a restaging of the data
The object will be replicated back to the lowest tier, honoring the verification policy
irods@hostname:~$ iget -f stickers.jpg
irods@hostname:~$ iqstat
id name
10035 {"rule-engine-operation":"apply_storage_tiering_policy","storage-tier-groups":["example_group_g2","example_group"]}
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 4 rnd0;st_ufs1 2157087 2018-05-12.10:22 & stickers.jpg
Policy Composition
Overriding Existing Policy
Policy Signatures
irods_policy_apply_access_time(*obj_path, *coll_type, *attribute)
irods_policy_data_movement( *instance_name, *object_path,
*user_name, *source_replica_number, *source_resource,
*destination_resource, *preserve_replicas, *verification_type)
irods_policy_data_replication(*inst_name, *src_resc, *dst_resc, *obj_path)
irods_policy_data_verification(*inst_name, *type, *src_resc, *dst_resc, *obj_path)
Storage Tiering invokes Data Movement
Data Movement invokes Replication and Verification, and implements Retention
Data Object Access Time
The default tiering policy is based on the last time of access for a given data object, which is recorded as metadata
pep_api_data_obj_close_post
pep_api_data_obj_put_post
pep_api_data_obj_get_post
pep_api_phy_path_reg_post
irods::access_time <unix timestamp>
Dynamic Policy Enforcement Points for RPC API are used to apply the metadata
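To inspect the applied access time, a general query can be used - a small sketch, assuming the stickers.jpg object from this training:
iquest "%s" "SELECT META_DATA_ATTR_VALUE WHERE DATA_NAME = 'stickers.jpg' AND META_DATA_ATTR_NAME = 'irods::access_time'"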
Overriding the Access Time Policy
edit /etc/irods/apply_access_time.re
irods_policy_apply_access_time(*obj_path, *coll_type, *attribute) {
msiGetSystemTime(*sys_time, '')
*clean_time = triml(*sys_time, "0")
*cleaner_time = triml(*clean_time, " ")
writeLine("serverLog", "Access Time [*obj_path] [*cleaner_time]")
*kvp = "*attribute=*cleaner_time"
msiString2KeyValPair(*kvp, *attr)
msiAssociateKeyValuePairsToObj(*attr, *obj_path, "-d")
writeLine("serverLog", "token_for_test_without_access_time")
}
Configure the Access Time Policy
edit /etc/irods/server_config.json
{
"instance_name": "irods_rule_engine_plugin-irods_rule_language-instance",
"plugin_name": "irods_rule_engine_plugin-irods_rule_language",
"plugin_specific_configuration": {
"re_data_variable_mapping_set": [
"core"
],
"re_function_name_mapping_set": [
"core"
],
"re_rulebase_set": [
"apply_access_time",
"core"
],
"regexes_for_supported_peps": [
"ac[^ ]*",
"msi[^ ]*",
"[^ ]*pep_[^ ]*_(pre|post|except)"
]
},
"shared_memory_instance": "irods_rule_language_rule_engine"
},
Disable the apply access time rule engine plugin
"rule_engines": [
...
{
"instance_name": "irods_rule_engine_plugin-apply_access_time-instance",
"plugin_name": "irods_rule_engine_plugin-apply_access_time",
"plugin_specific_configuration": {
}
},
{
"instance_name": "irods_rule_engine_plugin-data_verification-instance",
"plugin_name": "irods_rule_engine_plugin-data_verification",
"plugin_specific_configuration": {
}
},
{
"instance_name": "irods_rule_engine_plugin-data_replication-instance",
"plugin_name": "irods_rule_engine_plugin-data_replication",
"plugin_specific_configuration": {
}
},
{
"instance_name": "irods_rule_engine_plugin-data_movement-instance",
"plugin_name": "irods_rule_engine_plugin-data_movement",
"plugin_specific_configuration": {
}
},
...
]
Remove the apply_access_time plugin entry from the rule_engines list
Test the new policy
Repave the old object
irm -f stickers.jpg
iput -R rnd0 stickers.jpg
Check log for our message from the policy implementation
irods@hostname:~$ grep "Access Time" log/rodsLog*
<SNIP> writeLine: inString = Access Time [/tempZone/home/rods/stickers.jpg] [1560362577]
Overriding the Data Replication Policy
edit /etc/irods/data_replication.re
irods_policy_data_replication(*inst_name, *src_resc, *dst_resc, *obj_path) {
*err = errormsg(msiDataObjRepl(
*obj_path,
"rescName=*src_resc++++destRescName=*dst_resc",
*out_param), *msg)
if(0 != *err) {
failmsg(*err, "msiDataObjRepl failed for [*obj_path] [*src_resc] [*dst_resc] - [*msg]")
}
writeLine("serverLog", "Data Replication [*obj_path]")
}
Configure the Data Replication Policy
edit /etc/irods/server_config.json
{
"instance_name": "irods_rule_engine_plugin-irods_rule_language-instance",
"plugin_name": "irods_rule_engine_plugin-irods_rule_language",
"plugin_specific_configuration": {
"re_data_variable_mapping_set": [
"core"
],
"re_function_name_mapping_set": [
"core"
],
"re_rulebase_set": [
"data_replication",
"apply_access_time",
"core"
],
"regexes_for_supported_peps": [
"ac[^ ]*",
"msi[^ ]*",
"[^ ]*pep_[^ ]*_(pre|post|except)"
]
},
"shared_memory_instance": "irods_rule_language_rule_engine"
},
Disable the Data Replication plugin by removing its entry from the rule_engines list
"rule_engines": [
...
{
"instance_name": "irods_rule_engine_plugin-data_verification-instance",
"plugin_name": "irods_rule_engine_plugin-data_verification",
"plugin_specific_configuration": {
}
},
{
"instance_name": "irods_rule_engine_plugin-data_replication-instance",
"plugin_name": "irods_rule_engine_plugin-data_replication",
"plugin_specific_configuration": {
}
},
{
"instance_name": "irods_rule_engine_plugin-data_movement-instance",
"plugin_name": "irods_rule_engine_plugin-data_movement",
"plugin_specific_configuration": {
}
},
...
]
Test the new policy
Run the rule from the terminal
irule -r irods_rule_engine_plugin-storage_tiering-instance -F example_tiering_invocation.r
Check the delay queue
irods@hostname:~$ iqstat
id name
10038 {"rule-engine-operation":"apply_storage_tiering_policy","storage-tier-groups":["example_group_g2","example_group"]}
Wait for the delay execution engine to fire...
Check log for our message from the policy implementation
irods@hostname:~$ grep "Data Replication" log/rodsLog*
<SNIP> writeLine: inString = Data Replication [/tempZone/home/rods/stickers.jpg]
Overriding the Data Verification Policy
edit /etc/irods/data_verification.re
irods_policy_data_verification(*inst_name, *type, *src_resc, *dst_resc, *obj_path) {
*col = trimr(*obj_path, "/")
if(strlen(*col) == strlen(*obj_path)) {
*obj = *col
} else {
*obj = substr(*obj_path, strlen(*col)+1, strlen(*obj_path))
}
*resc_id = "EMPTY"
foreach( *row in SELECT RESC_ID WHERE
DATA_NAME = *obj and
COLL_NAME = *col and
RESC_NAME = *dst_resc ) {
*resc_id = *row.RESC_ID
}
# this simple policy only works for root-only resources
# if("EMPTY" != *resc_id) {
writeLine("serverLog", "Data Verification [*obj_path]")
# }
}
Configure the Data Verification Policy
edit /etc/irods/server_config.json
{
"instance_name": "irods_rule_engine_plugin-irods_rule_language-instance",
"plugin_name": "irods_rule_engine_plugin-irods_rule_language",
"plugin_specific_configuration": {
"re_data_variable_mapping_set": [
"core"
],
"re_function_name_mapping_set": [
"core"
],
"re_rulebase_set": [
"data_verification",
"data_replication",
"apply_access_time",
"core"
],
"regexes_for_supported_peps": [
"ac[^ ]*",
"msi[^ ]*",
"[^ ]*pep_[^ ]*_(pre|post|except)"
]
},
"shared_memory_instance": "irods_rule_language_rule_engine"
},
Disable the Data Verification plugin by removing its entry from the rule_engines list
"rule_engines": [
...
{
"instance_name": "irods_rule_engine_plugin-data_verification-instance",
"plugin_name": "irods_rule_engine_plugin-data_verification",
"plugin_specific_configuration": {
}
},
{
"instance_name": "irods_rule_engine_plugin-data_movement-instance",
"plugin_name": "irods_rule_engine_plugin-data_movement",
"plugin_specific_configuration": {
}
},
...
]
Test the new policy
Run the rule from the terminal
irule -r irods_rule_engine_plugin-storage_tiering-instance -F example_tiering_invocation.r
Check the delay queue
irods@hostname:~$ iqstat
id name
10038 {"rule-engine-operation":"apply_storage_tiering_policy","storage-tier-groups":["example_group_g2","example_group"]}
Wait for the delay execution engine to fire...
Check log for our message from the policy implementation
irods@hostname:~$ grep "Data Verification" log/rodsLog*
<SNIP> writeLine: inString = Data Verification [/tempZone/home/rods/stickers.jpg]
Another example
Three Tier Groups with Common Archive
We will create a data flow from instrument to archive
Create some more resources
iadmin mkresc tier2 unixfilesystem `hostname`:/tmp/irods/tier2
iadmin mkresc tier0_A unixfilesystem `hostname`:/tmp/irods/tier0_A
iadmin mkresc tier1_A unixfilesystem `hostname`:/tmp/irods/tier1_A
iadmin mkresc tier0_B unixfilesystem `hostname`:/tmp/irods/tier0_B
iadmin mkresc tier1_B unixfilesystem `hostname`:/tmp/irods/tier1_B
iadmin mkresc tier0_C unixfilesystem `hostname`:/tmp/irods/tier0_C
iadmin mkresc tier1_C unixfilesystem `hostname`:/tmp/irods/tier1_C
Create Tier Groups
Tier Group A
imeta set -R tier0_A irods::storage_tiering::group tier_group_A 0
imeta set -R tier1_A irods::storage_tiering::group tier_group_A 1
imeta add -R tier2 irods::storage_tiering::group tier_group_A 2
Tier Group B
imeta set -R tier0_B irods::storage_tiering::group tier_group_B 0
imeta set -R tier1_B irods::storage_tiering::group tier_group_B 1
imeta add -R tier2 irods::storage_tiering::group tier_group_B 2
Tier Group C
imeta set -R tier0_C irods::storage_tiering::group tier_group_C 0
imeta set -R tier1_C irods::storage_tiering::group tier_group_C 1
imeta add -R tier2 irods::storage_tiering::group tier_group_C 2
Set Tier Time Constraints
Tier 0
imeta set -R tier0_A irods::storage_tiering::time 30
imeta set -R tier0_B irods::storage_tiering::time 45
imeta set -R tier0_C irods::storage_tiering::time 15
Tier 1
imeta set -R tier1_A irods::storage_tiering::time 60
imeta set -R tier1_B irods::storage_tiering::time 120
imeta set -R tier1_C irods::storage_tiering::time 180
Tier 2 has no time constraints
Creating an automated periodic rule
{ "rule-engine-instance-name": "irods_rule_engine_plugin-storage_tiering-instance", "rule-engine-operation": "irods_policy_schedule_storage_tiering", "delay-parameters": "<INST_NAME>irods_rule_engine_plugin-storage_tiering-instance</INST_NAME><PLUSET>1s</PLUSET><EF>REPEAT FOR EVER</EF>", "storage-tier-groups": [ "tier_group_A", "tier_group_B", "tier_group_C" ] } INPUT null OUTPUT ruleExecOut
Create a new rule file to periodically apply the storage tiering policy - /var/lib/irods/foo.r
Launch the new rule
irods@hostname:~$ irule -r irods_rule_engine_plugin-storage_tiering-instance -F foo.r
irods@hostname:~$ iqstat
id name
10096 {"rule-engine-operation":"apply_storage_tiering_policy","storage-tier-groups":["tier_group_A","tier_group_B","tier_group_C"]}
Stage data into all three tiers and watch
irods@hostname:~$ iput -R tier0_A irods_training/stickers.jpg stickers_A.jpg
irods@hostname:~$ iput -R tier0_B irods_training/stickers.jpg stickers_B.jpg
irods@hostname:~$ iput -R tier0_C irods_training/stickers.jpg stickers_C.jpg
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 0 tier0_A 2157087 2018-05-18.21:03 & stickers_A.jpg
rods 0 tier0_B 2157087 2018-05-18.21:08 & stickers_B.jpg
rods 0 tier0_C 2157087 2018-05-18.21:13 & stickers_C.jpg
rods 1 rnd1;st_ufs3 2157087 2018-05-18.13:38 & stickers.jpg
rods 2 rnd2;st_ufs4 2157087 2018-05-18.14:16 & stickers.jpg
Wait for it...
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 0 tier2 2157087 2018-05-18.22:03 & stickers_A.jpg
rods 1 tier1_B 2157087 2018-05-18.22:00 & stickers_B.jpg
rods 2 tier0_C 2157087 2018-05-18.21:13 & stickers_C.jpg
rods 1 rnd1;st_ufs3 2157087 2018-05-18.13:38 & stickers.jpg
rods 2 rnd2;st_ufs4 2157087 2018-05-18.14:16 & stickers.jpg
And then again...
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 2 tier2 2157087 2018-05-18.22:03 & stickers_A.jpg
rods 2 tier2 2157087 2018-05-18.24:00 & stickers_B.jpg
rods 2 tier2 2157087 2018-05-18.25:45 & stickers_C.jpg
rods 1 rnd1;st_ufs3 2157087 2018-05-18.13:38 & stickers.jpg
rods 2 rnd2;st_ufs4 2157087 2018-05-18.14:16 & stickers.jpg
All newly ingested stickers_X files now reside in tier2
Questions?
Custom Policy Extension
Extending Storage Tiering with Custom Policy Implementation
The Data Movement Policy
The Data Movement Policy calls out to the Data Replication and Data Verification policies
Data Retention is a hard-coded part of the framework: a new retention implementation is needed when overriding this policy
irods_policy_data_movement (
*instance_name, *object_path, *user_name,
*source_replica_number, *source_resource, *destination_resource,
*preserve_replicas, *verification_type) {
}
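A minimal sketch of an override, not the framework's stock implementation: it composes the existing replication and verification policies (signatures shown earlier) and reimplements retention with msiDataObjTrim; parameter handling is simplified here:
irods_policy_data_movement(
    *instance_name, *object_path, *user_name,
    *source_replica_number, *source_resource, *destination_resource,
    *preserve_replicas, *verification_type) {
    # replicate to the destination tier via the existing policy
    irods_policy_data_replication(*instance_name, *source_resource, *destination_resource, *object_path)
    # verify the new replica via the existing policy
    irods_policy_data_verification(*instance_name, *verification_type, *source_resource, *destination_resource, *object_path)
    # retention must be reimplemented in an override:
    # trim the source replica unless preservation is requested
    if("true" != *preserve_replicas) {
        *err = errormsg(msiDataObjTrim(*object_path, *source_resource, "null", "1", "null", *status), *msg)
        if(0 != *err) {
            failmsg(*err, "msiDataObjTrim failed for [*object_path] [*source_resource] - [*msg]")
        }
    }
}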
Storage Tiering Policy Extension
Implement a new policy which will trigger a data restage given a particular metadata combination
Use an attribute of: irods::storage_tiering::restage
We will need to identify the root resource and tier group for the data object, the lowest tier within that group, and the source replica number and resource names
Once we have these pieces, we leverage the existing data movement policy for the restage, exactly as the tiering plugin does
Helper Functions
# Return the continuation code for the Rule Engine Plugin Framework
return_rule_engine_continue() { 5000000 }
# single point of truth for error value
get_error_value(*err) { *err = "ERROR_VALUE" }
# split a logical path into collection and data name
split_path(*p,*tok,*col,*obj) {
*col = trimr(*p, *tok)
if(strlen(*col) == strlen(*p)) {
*obj = *col
} else {
*obj = substr(*p, strlen(*col)+1, strlen(*p))
}
} # split_path
The continuation code indicates to the Rule Engine Plugin Framework (REPF) that it should continue looking for any additional rules with the same name provided by other rule engine plugins.
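A sketch of the intended usage, assuming the iRODS 4.2 signature of this PEP; the helper's value becomes the rule's return code, handing control back to the REPF:
# hypothetical PEP illustrating continuation; not part of the restage policy below
pep_api_data_obj_put_post(*INSTANCE_NAME, *COMM, *DATAOBJINP, *BUFFER, *PORTAL_OPR_OUT) {
    writeLine("serverLog", "custom put policy fired")
    # allow implementations of this PEP in other rule engine plugins to also fire
    return_rule_engine_continue()
}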
Helper Functions
# find the root resource id given a leaf resource id
get_root_resource_id_for_leaf_resource_id(*leaf_id, *root_id) {
get_error_value(*root_id)
get_error_value(*error_value)
*prev_id = *error_value
*tmp_id = *leaf_id
while(*prev_id != *tmp_id) {
*prev_id = *tmp_id
foreach( *row in SELECT RESC_PARENT WHERE RESC_ID = "*tmp_id") {
*resc_id = *row.RESC_PARENT
# cannot query on empty string
if("" != *resc_id) {
*tmp_id = *resc_id
} # if
} # foreach
} # while
*root_id = *tmp_id
} # get_root_resource_id_for_leaf_resource_id
Helper Functions
# find the tier group name for a given root resource id
get_tier_group_name_for_resource_id(*resc_id, *group_name) {
get_error_value(*group_name)
get_error_value(*error_value)
foreach(*row in SELECT META_RESC_ATTR_VALUE WHERE
RESC_ID = "*resc_id" AND
META_RESC_ATTR_NAME = "irods::storage_tiering::group") {
*group_name = *row.META_RESC_ATTR_VALUE
} # foreach
} # get_tier_group_name_for_resource_id
Helper Functions
get_minimum_tier_resource_id(*group_name, *resc_id) {
get_error_value(*resc_id)
get_error_value(*error_value)
*prev_idx = "99999999999"
foreach(*row in SELECT RESC_ID,
META_RESC_ATTR_UNITS
WHERE
META_RESC_ATTR_VALUE = "*group_name" AND
META_RESC_ATTR_NAME = "irods::storage_tiering::group") {
*index = *row.META_RESC_ATTR_UNITS
if(*index < *prev_idx) {
*resc_id = *row.RESC_ID
*prev_idx = *index
} # if
} # foreach
} # get_minimum_tier_resource_id
Helper Functions
get_replica_number_for_object(*coll_name, *data_name, *resc_id, *repl_num) {
get_error_value(*repl_num)
foreach(*row in SELECT DATA_REPL_NUM
WHERE DATA_NAME = "*data_name" AND
COLL_NAME = "*coll_name" AND
RESC_ID = "*resc_id") {
*repl_num = *row.DATA_REPL_NUM
}
}
get_resource_name(*resc_id, *resc_name) {
get_error_value(*resc_name)
foreach(*row in SELECT RESC_NAME WHERE RESC_ID = "*resc_id") {
*resc_name = *row.RESC_NAME
}
}
Policy Implementation
invoke_data_restage_for_group(*group_name, *coll_name, *data_name, *user_name, *leaf_id, *root_id) {
get_minimum_tier_resource_id(*group_name, *min_resc_id)
*instance_name = "irods_rule_engine_plugin-data_movement-instance"
*object_path = *coll_name ++ "/" ++ *data_name
get_replica_number_for_object(*coll_name, *data_name, *leaf_id, *source_replica_number)
get_resource_name(*leaf_id, *source_resource)
get_resource_name(*min_resc_id, *destination_resource)
*preserve_replicas = "false"
*verification_type = "catalog"
# note that this is synchronous, we could also schedule a movement on queue
irods_policy_data_movement (
*instance_name,
*object_path,
*user_name,
*source_replica_number,
*source_resource,
*destination_resource,
*preserve_replicas,
*verification_type)
}
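As the comment above notes, the movement could be scheduled rather than run synchronously; a sketch, assuming the rule language plugin instance name and that the enclosing variables are carried into the delayed context:
# in place of the synchronous call inside invoke_data_restage_for_group
delay("<INST_NAME>irods_rule_engine_plugin-irods_rule_language-instance</INST_NAME><PLUSET>1s</PLUSET>") {
    irods_policy_data_movement(
        *instance_name, *object_path, *user_name,
        *source_replica_number, *source_resource,
        *destination_resource, *preserve_replicas, *verification_type)
}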
Policy Implementation
pep_api_mod_avu_metadata_post(*INST_NAME, *COMM, *AVU_INP) {
get_error_value(*error_value)
*logical_path = *AVU_INP.arg2
*user_name = *COMM.user_user_name
*attribute = *AVU_INP.arg3
if("irods::storage_tiering::restage" == *attribute) {
split_path(*logical_path, "/", *coll_name, *data_name)
foreach( *row in SELECT RESC_ID WHERE
COLL_NAME = "*coll_name" AND
DATA_NAME = "*data_name" ) {
*leaf_id = *row.RESC_ID
get_root_resource_id_for_leaf_resource_id(*leaf_id, *root_id)
get_tier_group_name_for_resource_id(*root_id, *group_name)
if(*error_value != *group_name) {
invoke_data_restage_for_group(*group_name, *coll_name, *data_name,
*user_name, *leaf_id, *root_id)
}
} # foreach
} # if attribute
} # pep_api_mod_avu_metadata_post
Restore the policy configuration
Create a new rule base:
/etc/irods/metadata_data_restage.re
Copy the policy implementation into the rule base
Configure the Metadata Restage Policy
edit /etc/irods/server_config.json
{
"instance_name": "irods_rule_engine_plugin-irods_rule_language-instance",
"plugin_name": "irods_rule_engine_plugin-irods_rule_language",
"plugin_specific_configuration": {
"re_data_variable_mapping_set": [
"core"
],
"re_function_name_mapping_set": [
"core"
],
"re_rulebase_set": [
"metadata_data_restage",
"data_replication",
"apply_access_time",
"core"
],
"regexes_for_supported_peps": [
"ac[^ ]*",
"msi[^ ]*",
"[^ ]*pep_[^ ]*_(pre|post|except)"
]
},
"shared_memory_instance": "irods_rule_language_rule_engine"
},
Testing Metadata Restage
Run the tiering rule so that stickers.jpg migrates to the top tier
irule -r irods_rule_engine_plugin-storage_tiering-instance -F example_tiering_invocation.r
irods@hostname:~$ ils -l stickers.jpg
rods 10 rnd2;st_ufs5 2157087 2019-06-21.10:09 & stickers.jpg
Trigger the restage by setting the metadata
irods@hostname:~$ imeta set -d stickers.jpg irods::storage_tiering::restage true
Ensure that stickers.jpg is back on the lowest tier
irods@hostname:~$ ils -l stickers.jpg
rods 11 rnd0;st_ufs0 2157087 2019-06-21.10:11 & stickers.jpg
Questions?
Configuration Options
Overview of various options
Configuring a Tier Group
Tier groups are entirely driven by metadata
imeta set -R <resc0> irods::storage_tiering::group example_group 0
imeta set -R <resc1> irods::storage_tiering::group example_group 1
imeta set -R <resc2> irods::storage_tiering::group example_group 2
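To confirm a group assignment, the metadata on a root resource can be listed, e.g.:
imeta ls -R <resc0> irods::storage_tiering::group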
Configuring Tiering Time Constraints
Tiering violation time is configured in seconds
Configure a tier to hold data for 30 seconds
imeta set -R <resc> irods::storage_tiering::time 30
Configure a tier to hold data for 30 days
imeta set -R <resc> irods::storage_tiering::time 2592000
The final tier in a group does not have a storage tiering time - it will hold data indefinitely
Verification of Data Migration
When data is found to be in violation, it is migrated to the next tier and the migration is verified
'catalog' is the default verification for all resources
imeta set -R <resc> irods::storage_tiering::verification catalog
This setting will determine whether the replica is properly registered within the catalog after replication
Verification of Data Migration
imeta add -R <resc> irods::storage_tiering::verification filesystem
This option will stat the remote replica on disk and compare the file size with that of the catalog.
Filesystem verification is more expensive as it involves a potentially remote file system stat.
Verification of Data Migration
imeta add -R <resc> irods::storage_tiering::verification checksum
Compute a checksum of the data once it is at rest, and compare it with the value in the catalog
Should the source replica not have a checksum, one will be computed before the replication is performed
Checksum verification is the most expensive option, as file sizes may be large
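To avoid the on-the-fly computation, a checksum can be registered on the source replica ahead of time, e.g.:
ichksum stickers.jpg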
Configuring the restage resource
When data is in a tier other than the lowest tier, upon access the data is restaged back to the lowest tier.
Users may not want data restaged back to the lowest tier, should that tier be very remote or not appropriate for analysis.
Consider a storage resource at the edge serving as a landing zone for instrument data.
This flag identifies the tier for restage:
imeta add -R <resc> irods::storage_tiering::minimum_restage_tier true
Preserving Replicas
Some users may not wish to trim a replica from a tier when data is migrated - for example, to allow data to be archived while still remaining available on fast storage.
To preserve a replica on any given tier, attach the following metadata flag to the root resource.
imeta set -R <resc> irods::storage_tiering::preserve_replicas true
Custom Violation Query
Admins may specify a custom query which identifies violating data objects
imeta set -R <resc> irods::storage_tiering::query "SELECT DATA_NAME, COLL_NAME, DATA_RESC_ID WHERE META_DATA_ATTR_NAME = 'irods::access_time' AND META_DATA_ATTR_VALUE < 'TIME_CHECK_STRING' AND DATA_RESC_ID IN ('10021', '10022')"
Any number of queries may be attached to a resource in order to provide a range of criteria by which violating data may be identified
Custom Violating Specific Query
More complex SQL may be required to identify violating objects. Users may configure Specific Queries and attach those to a given tier within a group.
Create a specific query in SQL
iadmin asq "select distinct R_DATA_MAIN.data_name, R_COLL_MAIN.coll_name, R_DATA_MAIN.resc_id from R_DATA_MAIN, R_COLL_MAIN, R_OBJT_METAMAP r_data_metamap, R_META_MAIN r_data_meta_main where R_DATA_MAIN.resc_id IN (10021, 10022) AND r_data_meta_main.meta_attr_name = 'archive_object' AND r_data_meta_main.meta_attr_value = 'true' AND R_COLL_MAIN.coll_id = R_DATA_MAIN.coll_id AND R_DATA_MAIN.data_id = r_data_metamap.object_id AND r_data_metamap.meta_id = r_data_meta_main.meta_id order by R_COLL_MAIN.coll_name, R_DATA_MAIN.data_name" archive_query
Configure the specific query
imeta set -R <resc> irods::storage_tiering::query archive_query specific
Limiting violating query results
When working with large sets of data, throttling the amount of data migrated at one time can be helpful.
In order to limit the results of the violating queries, attach the following metadata attribute with the value set as the query limit.
imeta set -R <resc> irods::storage_tiering::object_limit LIMIT_VALUE
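For example, to migrate at most 1000 violating objects per invocation of the policy:
imeta set -R <resc> irods::storage_tiering::object_limit 1000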
Logging data transfer
In order to record the transfer of data objects from one tier to the next, the storage tiering plugin on the ICAT server can be configured by setting "data_transfer_log_level" : "LOG_NOTICE" in the plugin_specific_configuration.
In /etc/irods/server_config.json add the configuration to the storage_tiering plugin instance:
{
"instance_name": "irods_rule_engine_plugin-storage_tiering-instance",
"plugin_name": "irods_rule_engine_plugin-storage_tiering",
"plugin_specific_configuration": {
"data_transfer_log_level" : "LOG_NOTICE"
}
},
Storage Tiering Metadata Vocabulary
"plugin_specific_configuration": { "access_time_attribute" : "irods::access_time", "storage_tiering_group_attribute" : "irods::storage_tiering::group", "storage_tiering_time_attribute" : "irods::storage_tiering::time", "storage_tiering_query_attribute" : "irods::storage_tiering::query", "storage_tiering_verification_attribute" : "irods::storage_tiering::verification", "storage_tiering_restage_delay_attribute" : "irods::storage_tiering::restage_delay", "default_restage_delay_parameters" : "<PLUSET>1s</PLUSET><EF>1h DOUBLE UNTIL SUCCESS OR 6 TIMES</EF>", "time_check_string" : "TIME_CHECK_STRING" }
All default metadata attributes are configurable
Should there be a preexisting vocabulary in your organization, it can be leveraged by redefining the metadata attributes used by the storage tiering framework.
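For example, mapping the access time attribute to a hypothetical organizational attribute name (edu.example::last_access is an assumption for illustration, not a real default):
"plugin_specific_configuration": {
  "access_time_attribute": "edu.example::last_access"
}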
Configuration
Demonstrating the additional Configuration
Setting a Minimum Restage Tier
In order to flag a resource as the target resource for restaging data, we add metadata
imeta set -R rnd1 irods::storage_tiering::minimum_restage_tier true
The object will be replicated to this tier instead of the lowest tier
irods@hostname:~$ !irule
irods@hostname:~$ !irule
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 6 rnd2;st_ufs5 2157087 2018-05-15.15:10 & stickers.jpg
irods@hostname:~$ iget -f stickers.jpg
irods@hostname:~$ iqstat
id name
10044 {"destination-resource":"rnd1","object-path":"/tempZone/home/rods/stickers.jpg","preserve-replicas":false,"rule-engine-operation":"migrate_object_to_resource","source-resource":"rnd2","verification-type":"catalog"}
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 7 rnd1;st_ufs2 2157087 2018-05-15.15:10 & stickers.jpg
Preserving replicas on a given storage tier
If we want to preserve replicas on a tier we can set a metadata flag
imeta set -R rnd1 irods::storage_tiering::preserve_replicas true
irods@hostname:~$ !irule
irule -r irods_rule_engine_plugin-storage_tiering-instance -F example_tiering_invocation.r
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 1 rnd1;st_ufs2 2157087 2018-05-15.15:28 & stickers.jpg
rods 2 rnd2;st_ufs5 2157087 2018-05-15.15:28 & stickers.jpg
When the staging rule is invoked the replica on the rnd1 tier will not be trimmed after replication
A replica is preserved for analysis while another is safe in the archive tier
Custom violation queries
Any number of queries may be attached to a tier to identify violating objects.
The resource ids of the leaf resources are necessary for the query, not the root.
irods@hostname:~$ ilsresc -l st_ufs0
resource name: st_ufs0
id: 10019
...
irods@hostname:~$ ilsresc -l st_ufs1
resource name: st_ufs1
id: 10020
...
Custom violation queries
Craft a query that replicates the default behavior
imeta set -R rnd0 irods::storage_tiering::query "SELECT DATA_NAME, COLL_NAME, DATA_RESC_ID WHERE META_DATA_ATTR_NAME = 'irods::access_time' AND META_DATA_ATTR_VALUE < 'TIME_CHECK_STRING' AND DATA_RESC_ID IN ('10019', '10020')" [general]
Compare data object access time against TIME_CHECK_STRING
TIME_CHECK_STRING macro is replaced with the current time by the plugin before the query is submitted
Check DATA_RESC_ID against the list of child resource ids
Columns DATA_NAME, COLL_NAME, DATA_RESC_ID must be queried in that order
By default all queries are of the type general, which is optional
Custom violation queries
Identify the leaf resource ids for the middle tier
irods@hostname:~$ ilsresc -l st_ufs2
resource name: st_ufs2
id: 10021
...
irods@hostname:~$ ilsresc -l st_ufs3
resource name: st_ufs3
id: 10022
...
Custom violation queries
Craft a query that uses metadata to identify violating objects
Add a Specific Query to iRODS, using new resource ids
iadmin asq "select distinct R_DATA_MAIN.data_name, R_COLL_MAIN.coll_name, R_DATA_MAIN.resc_id from R_DATA_MAIN, R_COLL_MAIN, R_OBJT_METAMAP r_data_metamap, R_META_MAIN r_data_meta_main where R_DATA_MAIN.resc_id IN (10021, 10022) AND r_data_meta_main.meta_attr_name = 'archive_object' AND r_data_meta_main.meta_attr_value = 'true' AND R_COLL_MAIN.coll_id = R_DATA_MAIN.coll_id AND R_DATA_MAIN.data_id = r_data_metamap.object_id AND r_data_metamap.meta_id = r_data_meta_main.meta_id order by R_COLL_MAIN.coll_name, R_DATA_MAIN.data_name" archive_query
Attach the query to the middle tier
imeta set -R rnd1 irods::storage_tiering::query archive_query specific
This query must be labeled specific via the units
Testing the queries
Starting over with stickers.jpg
irods@hostname:~$ irm -f stickers.jpg
irods@hostname:~$ iput -R rnd0 /tmp/stickers.jpg
irods@hostname:~$ irule -r irods_rule_engine_plugin-storage_tiering-instance -F example_tiering_invocation.r
irods@hostname:~$ iqstat
id name
10065 {"rule-engine-operation":"apply_storage_tiering_policy","storage-tier-groups":["example_group_g2","example_group"]}
Wait for it...
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 1 rnd1;st_ufs3 2157087 2018-05-18.13:38 & stickers.jpg
Testing the queries
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 1 rnd1;st_ufs3 2157087 2018-05-18.13:38 & stickers.jpg
The file stopped at rnd1 as the time-based default query is now overridden
Now set the metadata flag to archive the data object
irods@hostname:~$ imeta set -d /tempZone/home/rods/stickers.jpg archive_object true
Testing the queries
The metadata is set, run the tiering rule
irods@hostname:~$ irule -r irods_rule_engine_plugin-storage_tiering-instance -F example_tiering_invocation.r
irods@hostname:~$ iqstat
id name
10065 {"rule-engine-operation":"apply_storage_tiering_policy","storage-tier-groups":["example_group_g2","example_group"]}
irods@hostname:~$ ils -l
/tempZone/home/rods:
rods 1 rnd1;st_ufs3 2157087 2018-05-18.13:38 & stickers.jpg
rods 2 rnd2;st_ufs4 2157087 2018-05-18.14:16 & stickers.jpg
The preservation flag is still set so we have two replicas
Questions?