Post by Daniel
Thanks!
After some research I found out what an AppArmor profile is. I don't think I have one for the Bareos daemons:
908 /usr/sbin/sshd not confined
962 /usr/sbin/bareos-fd not confined
964 /usr/sbin/nrpe not confined
991 /usr/lib/postgresql/9.5/bin/postgres not confined
1041 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 not confined
1205 /usr/lib/postfix/sbin/master not confined
26265 /usr/sbin/bareos-sd not confined
26291 /usr/sbin/bareos-dir not confined
apparmor module is loaded.
5 profiles are loaded.
5 profiles are in enforce mode.
/sbin/dhclient
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/tcpdump
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
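For reference, the two listings above look like the output of the apparmor-utils tools; a quick way to reproduce this check (assuming the apparmor-utils package is installed) would be:

```shell
# Hypothetical reproduction of the check above; requires apparmor-utils.
sudo aa-unconfined   # lists listening processes and whether an AppArmor profile confines them
sudo aa-status       # summarizes loaded profiles and confined processes
```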
Hi,
I have Bareos 15.2.3 on CentOS 6 and am facing the same errors. In the beginning I had interleaved disk volumes, and no jobs were failing with this kind of error.
----Phase 1:
Pool {
Name = DiskDaily
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Storage = DiskStorageIncr
Maximum Volume Bytes = 10G
Maximum Volumes = 600
Next Pool = TapeDaily2
Label Format = "Incr-"
Volume Retention = 1 week
Recycle Oldest Volume = yes
Recycle Pool = ScratchIncr
}
Storage {
Name = DiskStorageIncr
# Do not use "localhost" here
Address = muc1pro-backup-1 # N.B. Use a fully qualified name here
SDPort = 9103
Password = "***"
Device = DiskDeviceIncr1
Media Type = FileIncr
Autochanger = no
Maximum Concurrent Jobs = 10
}
Device {
Name = DiskDeviceIncr1
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 10
}
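As a quick sanity check on the sizing above (my arithmetic, not from the thread): Bareos reads "10G" in binary units, so with Maximum Volumes = 600 the pool on /data/bareos-storage/Incr can grow to 6000 GiB:

```shell
# Not from the thread: back-of-the-envelope check of the pool sizing.
max_volume_bytes=$((10 * 1024 * 1024 * 1024))      # "10G" in binary units
echo "$max_volume_bytes"                           # prints 10737418240
# With Maximum Volumes = 600, the pool can hold up to 600 * 10 GiB:
echo "$((600 * max_volume_bytes / 1024**3)) GiB"   # prints 6000 GiB
```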
I then moved to a one-job-per-volume setup, with 5 different devices defined in the SD pointing to the same folder on disk, and added these devices to the storage definition in the director.
----Phase 2:
Pool {
Name = DiskDaily
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Storage = DiskStorageIncr
Maximum Volume Bytes = 10G
Maximum Volumes = 600
Maximum Volume Jobs = 1
Next Pool = TapeDaily2
Label Format = "Incr-"
Volume Retention = 1 week
Recycle Oldest Volume = yes
Recycle Pool = ScratchIncr
}
Storage {
Name = DiskStorageIncr
# Do not use "localhost" here
Address = muc1pro-backup-1 # N.B. Use a fully qualified name here
SDPort = 9103
Password = "***"
Device = DiskDeviceIncr1
Device = DiskDeviceIncr2
Device = DiskDeviceIncr3
Device = DiskDeviceIncr4
Device = DiskDeviceIncr5
Media Type = FileIncr
Autochanger = no
Maximum Concurrent Jobs = 5
}
Device {
Name = DiskDeviceIncr1
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DiskDeviceIncr2
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DiskDeviceIncr3
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DiskDeviceIncr4
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DiskDeviceIncr5
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
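A side note on the concurrency settings in Phase 2 (my reading, not verified against the Bareos scheduler): the storage resource allows 5 concurrent jobs and each of the 5 devices allows 1, so the effective parallelism is bounded by the smaller of the two limits, and each running job gets its own device:

```shell
# Hypothetical back-of-the-envelope for the Phase 2 limits.
storage_max=5        # Maximum Concurrent Jobs on the Storage resource
devices=5            # DiskDeviceIncr1..5
per_device_max=1     # Maximum Concurrent Jobs on each Device
sum=$((devices * per_device_max))
effective=$(( storage_max < sum ? storage_max : sum ))
echo "$effective"    # prints 5
```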
At this point there were no failing jobs. Then I added a Scratch Pool:
----Phase 3:
Pool {
Name = DiskDaily
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Storage = DiskStorageIncr
Maximum Volume Bytes = 10G
Maximum Volumes = 600
Maximum Volume Jobs = 1
Next Pool = TapeDaily2
Label Format = "Incr-"
Volume Retention = 1 week
Recycle Oldest Volume = yes
Recycle Pool = ScratchIncr
Scratch Pool = ScratchIncr
}
After that, 30+ out of 43 jobs were failing, randomly, and not always the same ones.
I suspected the scratch pool was causing the problems, so I removed the Scratch Pool directive, did a reload and an update pool, and, surprise: in the catalog, the ScratchPoolId in the Pool table still points to ScratchIncr.
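One way to see this directly is to query the catalog (a sketch; the database name "bareos" is an assumption, and I'm assuming the default PostgreSQL schema, whose pool table has recyclepoolid and scratchpoolid columns):

```shell
# Sketch: inspect the pool rows in the catalog directly.
# Assumes the catalog lives in a PostgreSQL database named "bareos".
psql -d bareos -c "SELECT name, recyclepoolid, scratchpoolid FROM pool;"
```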
----Phase 4:
I removed both the Recycle Pool and Scratch Pool directives, did a reload and an update pool, and now both RecyclePoolId and ScratchPoolId are NULL in the catalog. With this setup only one job failed.
After another cycle with the same setup, 0 jobs failed.
So in my case it is definitely related to having multiple devices in the storage definition, but I'm not sure how it relates to the recycle/scratch pool: I still got 1 failing job, which is an improvement over 30+, but I cannot say it solved my issue. The failing job logged:
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: User defined maximum volume capacity 10,737,418,240 exceeded on device "DiskDeviceIncr5" (/data/bareos-storage/Incr).
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: End of medium on Volume "Incr-3838" Bytes=10,737,377,479 Blocks=166,440 at 13-Dec-2016 00:30.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Fatal error: Too many errors trying to mount device "DiskDeviceIncr5" (/data/bareos-storage/Incr).
13-Dec 00:30 dg1nc0505 JobId 7662: Error: bsock_tcp.c:422 Write error sending 65536 bytes to client:192.168.210.39:9102: ERR=Broken pipe
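For what it's worth, the "maximum volume capacity exceeded" part of the log is expected behaviour (my arithmetic, not from the thread): Incr-3838 was within about 40 KB of the configured 10G cap, so end-of-medium is normal; the actual failure is that no new volume name was handed to DiskDeviceIncr5 afterwards:

```shell
cap=$((10 * 1024 * 1024 * 1024))   # configured Maximum Volume Bytes ("10G")
used=10737377479                   # Bytes= value from the log above
echo $((cap - used))               # prints 40761 -- the volume was effectively full
```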
If I go back to the interleaved job setup, everything goes well again.
I will give it another try tonight with the same config from Phase 4; let's see what happens.
I have also enabled debug in the director and storage daemon.
So basically, if a Recycle Pool is configured in the pool, does it automatically add a Scratch Pool as well? I can't find anything in the documentation about this...
Thanks
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.