6.2.3 HP StoreAll Storage Release Notes (update)
2. Run the following command to exclude all files in the .webdav directory:
# ibrix_avconfig -a -E -f FS1 -P /FS1/.webdav/ -x '*'
3. Reset the immutable bit on the .webdav directory:
# chattr +i .webdav/
4. Exclude all files in the .DAV directory for each HTTP/WebDAV share created:
# ibrix_avconfig -a -E -f FS1 -P /FS1/httpshare/.DAV/ -x '*'
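If more than one HTTP/WebDAV share exists on the file system, the exclusion must be added for each share's .DAV directory. The following one-line loop is a minimal sketch that assumes each share is a top-level directory of /FS1 (as with httpshare in the example above) and contains a .DAV directory; adjust the paths to match your shares:
# for share in /FS1/*/; do ibrix_avconfig -a -E -f FS1 -P "${share}.DAV/" -x '*'; done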
Segment evacuation
• The segment evacuator cannot evacuate segments in a READONLY, BROKEN, or UNAVAILABLE
state.
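Before starting an evacuation, confirm the state of the target segments. As an illustrative check only (this assumes the ibrix_fs -i listing on your release reports a per-segment state), run:
# ibrix_fs -i -f FS1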
The ibrix_collect command
• If collection does not start after a node recovers from a system crash, check the /var/crash/
<timestamp> directory to determine whether the vmcore is complete. The command
ibrix_collect does not process incomplete vmcores. Also check /usr/local/ibrix/log/
ibrixcollect/kdumpcollect.log for any errors.
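For example, listing the crash directory and searching the kdump collection log for errors can show whether the vmcore was written completely and whether collection errors occurred (the /var/crash subdirectory name varies with the crash time):
# ls -lh /var/crash/*/
# grep -i error /usr/local/ibrix/log/ibrixcollect/kdumpcollect.log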
• If the status of a collection is Partially_collected, typically the management console service
was not running or there was not enough space available in the /local disk partition on the
node where the collection failed. To determine the exact cause of a failure during collection, see
the following logs:
◦ /usr/local/ibrix/log/fusionserver.log
◦ /usr/local/ibrix/log/ibrixcollect/ibrixcollect.py.log
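As a starting point, check the free space in the /local partition on the node where the collection failed and search both logs for errors, for example:
# df -h /local
# grep -i error /usr/local/ibrix/log/fusionserver.log
# grep -i error /usr/local/ibrix/log/ibrixcollect/ibrixcollect.py.log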
• Email notifications do not include information about failed attempts to collect the cluster
configuration.
• In some situations, ibrix_collect successfully collects information after a system crash but
fails to report a completed collection. The information is available in the /local/ibrixcollect/
archive directory on one of the file serving nodes.
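To locate the collection in this situation, list the archive directory on each file serving node, for example:
# ls -lh /local/ibrixcollect/archive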
• The ibrix_collect command supports a maximum collection size of 4 GB. If the size of the
final collection exceeds 4 GB, the collection fails. You must either:
◦ Delete older logs from each node to reduce the size of the collection.
Or
◦ Manually retrieve the individual collections, which are stored on each node in the following
format:
/local/ibrixcollect/<node_name>_<collection_name>_<time>.tgz
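One way to retrieve the per-node archives manually is to copy them to a single host with scp. In this sketch, node1, node2, and the destination directory /var/tmp/collections are placeholders; substitute your node names and a directory with enough free space:
# mkdir -p /var/tmp/collections
# scp node1:/local/ibrixcollect/node1_*.tgz /var/tmp/collections/
# scp node2:/local/ibrixcollect/node2_*.tgz /var/tmp/collections/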
Cluster component states
• Changes in file serving node status do not appear on the management console until 6 minutes
after an event. During this time, the node status may appear to be UP when it is actually DOWN
or UNKNOWN. Be sure to allow enough time for the management console to be updated before
verifying node status.
• Generally, when a vendorstorage component is marked Stale, the component has failed
and is not responding to monitoring. However, if all components are marked Stale, this implies
a failure of the monitoring subsystem. Temporary failures of this subsystem can cause all monitored components to be marked Stale.