Dear ,
Hope you are doing well!
We are having a problem with one of our systems during Disaster Recovery: Phase 1 fails with the error "Failed to start OMNI.SOCKET". I have also observed that the client uses XFS filesystems, and while backing it up we get warnings like the one below:
[Warning] From: abc.local "abc.local [/]" Time: 4/24/2024 5:38:00 PM
/run
Directory is a mount point to a different filesystem.
Backed up as empty directory without extended attributes and ACLs.
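For clarity, /run is just the standard tmpfs mount, as the df output further down also shows. A quick way to list every separate filesystem under / that would trigger this kind of warning (plain Linux tools, nothing Data Protector specific) is:

# List every mounted filesystem with its source and type;
# each non-XFS mount below / is a candidate for this warning
findmnt -o TARGET,SOURCE,FSTYPE

# Or check a single path, e.g. /run
findmnt --target /run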
We have already tried the following:
1- Setting ob2_allow_remote_mount_point=1 in the OMNIRC file (see the sketch after this list). But with this enabled, the backup starts walking the whole cluster storage volume, roughly 128 TB, while the actual volume is only 100 GB (the server is a VM and we assigned 100 GB to it).
2- Data Protector 11.0.1 (tested on both the P1 and P2 patches) - FAILED
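For reference, the change was just the single omnirc entry below (a minimal sketch; I am assuming the default Unix location /opt/omni/.omnirc, created from the .omnirc.TMPL template, and the usual uppercase variable naming):

# /opt/omni/.omnirc on the affected client
OB2_ALLOW_REMOTE_MOUNT_POINT=1

With this set, the mount-point warning goes away, but the session then descends into the cluster storage mount, which is why the reported backup size jumps to ~128 TB.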
I don't understand why this warning is always there. If the problem is with the backup, what is the solution? Or is the issue on the Phase 1 recovery side, where it says FAILED TO START OMNI.SOCKET, and how can that be resolved?
Please help!
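If it helps, the only check I can think of on the restored system is the systemd unit itself (assuming the error refers to a systemd socket unit named omni.socket, as the message suggests):

# Check the state and recent log of the socket unit
systemctl status omni.socket
journalctl -u omni.socket --no-pager

# Confirm the unit files were actually restored
systemctl list-unit-files | grep -i omni

Output of df -T and /etc/fstab from the affected client is below: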
Filesystem Type 1K-blocks Used Available Use% Mounted on
devtmpfs devtmpfs 5940644 0 5940644 0% /dev
tmpfs tmpfs 5970620 64 5970556 1% /dev/shm
tmpfs tmpfs 5970620 1096 5969524 1% /run
tmpfs tmpfs 5970620 0 5970620 0% /sys/fs/cgroup
/dev/mapper/rhel xfs 99388008 34689628 64698380 35% /
/dev/sda2 xfs 1038336 365936 672400 36% /boot
/dev/sda1 vfat 204580 9748 194832 5% /boot/efi
tmpfs tmpfs 1194124 16 1194108 1% /run/user/42
tmpfs tmpfs 1194124 0 1194124 0% /run/user/0
# /etc/fstab
# Created by anaconda on Thu Jan 27 15:49:59 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel_root / xfs defaults 0 0
UUID=22311905-f85a-4f15-9078-8da15e5bdb9d /boot xfs defaults 0 0
UUID=8D6A-84BA /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/mapper/rhel_ swap swap defaults 0 0