Discussion:
[bareos-users] ERR=Could not open file device
Daniel
2016-10-11 06:37:06 UTC
Hi,

I have some problems with my storage setup. I have one file storage per client, so that each client backs up into its own folder (is there a better solution for that?). So currently there are 9 storage devices.
The first days after the change (1 -> 9 storages) everything was fine and the daily backups were okay. But for a few days now I have often been getting the following error:

"backuptest-sd JobId 821: Warning: mount.c:247 Open device "FileStorage-Arbeitsplatz-Oliver" (/var/lib/bareos/storage/arbeitsplatz/oliver) Volume "" failed: ERR=Could not open file device "FileStorage-Arbeitsplatz-Oliver" (/var/lib/bareos/storage/arbeitsplatz/oliver). No Volume name given."

This error appears (with different device names) for some of my clients, but often the next backup works correctly.
E.g. last night one backup threw the error above, but I started it again an hour ago and it worked!

Does anybody have an idea about this? I don't know what I can do to solve it. The only option I see is to change back to 1 storage device, but we want different folders for the clients, so that isn't a solution.
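
For reference, each client has a Device (storage daemon) / Storage (director) pair along these lines. This is only a sketch reconstructed from the names in the log above; the address and password are placeholders:

Device {
Name = FileStorage-Arbeitsplatz-Oliver
Media Type = File
Archive Device = /var/lib/bareos/storage/arbeitsplatz/oliver # a directory; volumes are created as files inside it
LabelMedia = yes
Random Access = yes
AutomaticMount = yes
RemovableMedia = no
AlwaysOpen = no
}

Storage {
Name = FileStorage-Arbeitsplatz-Oliver
Address = backuptest.example.com # placeholder; use the SD's fully qualified name
Password = "secret" # placeholder
Device = FileStorage-Arbeitsplatz-Oliver
Media Type = File
}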

Have a nice day and thanks for your time!
Bruno Friedmann
2016-10-11 06:52:46 UTC
Post by Daniel
[...]
"backuptest-sd JobId 821: Warning: mount.c:247 Open device
"FileStorage-Arbeitsplatz-Oliver"
(/var/lib/bareos/storage/arbeitsplatz/oliver) Volume "" failed: ERR=Could
not open file device "FileStorage-Arbeitsplatz-Oliver"
(/var/lib/bareos/storage/arbeitsplatz/oliver). No Volume name given."
[...]
I'm not sure I'm 100% right without a better overview of your configuration.

You mention /var/lib/bareos/storage/arbeitsplatz/oliver.
Is oliver a folder?
The storage device should point to a directory; the volumes will then be
created as files inside it.
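
On disk, such a device directory then simply contains the volume files, e.g. (hypothetical volume names):

$ ls /var/lib/bareos/storage/arbeitsplatz/oliver
Full-0001  Incremental-0026  Incremental-0027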
--
Bruno Friedmann
Ioda-Net Sàrl www.ioda-net.ch
Bareos Partner, openSUSE Member, fsfe fellowship
GPG KEY : D5C9B751C4653227
irc: tigerfoot

openSUSE Tumbleweed
Linux 4.7.5-1-default x86_64 GNU/Linux, nvidia: 367.44
Qt: 5.7.0, KDE Frameworks: 5.26.0, Plasma: 5.7.4, kmail2 5.3.0 (QtWebEngine)
Daniel
2016-10-12 06:45:12 UTC
Yes, /var/lib/bareos/storage/arbeitsplatz/oliver is a folder. The night before last, the backup threw a fatal error. Yesterday I started the backup manually and it worked, and the same happened tonight: it worked. So it is a strange problem that only occurs sometimes.
Jörg Steffens
2016-10-12 11:31:24 UTC
Post by Daniel
Yes, /var/lib/bareos/storage/arbeitsplatz/oliver is a folder. The
night before last, the backup threw a fatal error. Yesterday I started
the backup manually and it worked, and the same happened tonight: it
worked. So it is a strange problem that only occurs sometimes.
It seems you are not the only one having this problem:
https://bugs.bareos.org/view.php?id=691
--
Jörg Steffens ***@bareos.com
Bareos GmbH & Co. KG Phone: +49 221 630693-91
http://www.bareos.com Fax: +49 221 630693-10

Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
Komplementär: Bareos Verwaltungs-GmbH
Geschäftsführer:
S. Dühr, M. Außendorf, Jörg Steffens, P. Storz
Douglas K. Rand
2016-10-12 15:53:34 UTC
Post by Jörg Steffens
Post by Daniel
Yes, /var/lib/bareos/storage/arbeitsplatz/oliver is a folder. The
night before last, the backup threw a fatal error. Yesterday I started
the backup manually and it worked, and the same happened tonight: it
worked. So it is a strange problem that only occurs sometimes.
https://bugs.bareos.org/view.php?id=691
Just a "me too": I experienced the same problem as that bug last week.
Re-running the job worked fine, and I have not seen the error since.
This is Bareos 15.2.2.

06-Oct 08:30 bareos-sd JobId 898: Warning: mount.c:247 Open device
"disk-9" (/local-project/tmp/bareos/backups) Volume "" failed: ERR=Could
not open file device "disk-9" (/local-project/tmp/bareos/backups). No
Volume name given.

06-Oct 08:30 bareos-sd JobId 898: Warning: mount.c:247 Open device
"disk-9" (/local-project/tmp/bareos/backups) Volume "" failed: ERR=Could
not open file device "disk-9" (/local-project/tmp/bareos/backups). No
Volume name given.

06-Oct 08:30 bareos-sd JobId 898: Warning: mount.c:247 Open device
"disk-9" (/local-project/tmp/bareos/backups) Volume "" failed: ERR=Could
not open file device "disk-9" (/local-project/tmp/bareos/backups). No
Volume name given.

06-Oct 08:30 bareos-sd JobId 898: Warning: mount.c:247 Open device
"disk-9" (/local-project/tmp/bareos/backups) Volume "" failed: ERR=Could
not open file device "disk-9" (/local-project/tmp/bareos/backups). No
Volume name given.

06-Oct 08:30 bareos-sd JobId 898: Warning: mount.c:247 Open device
"disk-9" (/local-project/tmp/bareos/backups) Volume "" failed: ERR=Could
not open file device "disk-9" (/local-project/tmp/bareos/backups). No
Volume name given.

06-Oct 08:30 darvocet JobId 898: Fatal error: dir_cmd.c:2400 Bad
response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data

06-Oct 08:30 bareos-sd JobId 898: Fatal error: Too many errors trying to
mount device "disk-9" (/local-project/tmp/bareos/backups).
Jörg Steffens
2016-10-13 08:50:41 UTC
On 12.10.2016 at 17:53, Douglas K. Rand wrote:
[...]
Post by Douglas K. Rand
Post by Jörg Steffens
https://bugs.bareos.org/view.php?id=691
Just a "me too": I experienced the same problem as that bug last week.
Re-running the job worked fine, and I have not seen the error since.
This is Bareos 15.2.2.
Has anybody seen this error with versions newer than bareos-15.2.2?
Or even better, has anyone had this problem and seen it disappear with
newer versions?

It might also be related to https://bugs.bareos.org/view.php?id=647,
fixed in bareos-15.2.4
--
Jörg Steffens ***@bareos.com
Bareos GmbH & Co. KG Phone: +49 221 630693-91
http://www.bareos.com Fax: +49 221 630693-10

Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
Komplementär: Bareos Verwaltungs-GmbH
Geschäftsführer:
S. Dühr, M. Außendorf, Jörg Steffens, P. Storz
Bruno Friedmann
2016-10-13 19:41:48 UTC
Post by Jörg Steffens
[...]
Post by Douglas K. Rand
Post by Jörg Steffens
https://bugs.bareos.org/view.php?id=691
Just a "me too": I experienced the same problem as that bug last week.
Re-running the job worked fine, and have not seen the error since.
Bareos 15.2.2.
Have anybody seen this error using newer versions then bareos-15.2.2?
Or even better, have someone had this problem and it disappear with
newer versions?
It might also be related to https://bugs.bareos.org/view.php?id=647,
fixed in bareos-15.2.4
Didn't see that particular bug, but yes, I'm using the subscription channel ;-)
--
Bruno Friedmann
Ioda-Net Sàrl www.ioda-net.ch
Bareos Partner, openSUSE Member, fsfe fellowship
GPG KEY : D5C9B751C4653227
irc: tigerfoot

openSUSE Tumbleweed
Linux 4.7.6-1-default x86_64 GNU/Linux, nvidia: 367.44
Qt: 5.7.0, KDE Frameworks: 5.26.0, Plasma: 5.8.0, kmail2 5.3.0 (QtWebEngine)
Douglas K. Rand
2016-11-15 16:07:57 UTC
Post by Jörg Steffens
[...]
Post by Douglas K. Rand
Post by Jörg Steffens
https://bugs.bareos.org/view.php?id=691
Just a "me too": I experienced the same problem as that bug last week.
Re-running the job worked fine, and have not seen the error since.
Bareos 15.2.2.
Have anybody seen this error using newer versions then bareos-15.2.2?
Or even better, have someone had this problem and it disappear with
newer versions?
I'm seeing the same problem with 16.2.4. (A few newlines added to the
error.)

11-Nov 19:30 bareos-sd JobId 3371:
Warning: mount.c:248 Open device "disk-2"
(/local-project/tmp/bareos/backups) Volume "" failed:
ERR=Could not open file device "disk-2"
(/local-project/tmp/bareos/backups). No Volume name given.

It seems fairly random. On some runs it happens to 8-10 jobs in the
backup, other times just 1-3. Sometimes the backups work perfectly.

It does seem to happen more often to early jobs in the run, and hardly
at all to later jobs. (For me jobs are sorted alphabetically, which
means jobs that start with a, b, or c are far more likely to hit this
error than other jobs.)
Post by Jörg Steffens
It might also be related to https://bugs.bareos.org/view.php?id=647,
fixed in bareos-15.2.4
My problem seems more related to
https://bugs.bareos.org/mantis/view.php?id=691 and
https://bugs.bareos.org/view.php?id=580 than 647.
Douglas K. Rand
2016-12-14 19:24:04 UTC
Post by Douglas K. Rand
[...]
I'm seeing the same problem with 16.2.4. (A few newlines added to the
error.)
Warning: mount.c:248 Open device "disk-2"
ERR=Could not open file device "disk-2"
(/local-project/tmp/bareos/backups). No Volume name given.
It seems fairly random. On some runs it happens to 8-10 jobs in the
backup, other times just 1-3. Sometimes the backups work perfectly.
A mildly brute-force workaround that at least gets the backups to run
is to add something like this to your job definitions:

Reschedule Interval = 10 minutes
Reschedule On Error = yes
Reschedule Times = 3

This retries failed backups 3 times, pausing 10 minutes between
retries. It is working for me in that the backups that fail because no
volume named "" can be found get restarted and work fine on the
second try.

Thanks to Daniel Andratschke for the inadvertent tip on Reschedule On Error.
Daniel
2016-12-12 09:02:00 UTC
Hi,
For a few days now I have been using Bareos 16.2.4 on Ubuntu Server 16.04, and today I see that the backups over the weekend had the same problems as above. Most of the time the backup completes within the 3rd-5th try (RescheduleOnError = yes). But in this example, the backup only succeeded almost 27 hours after the start (rescheduled every 30 min). The client is a Windows 10 PC.
Here are example logs:
2016-12-08 21:00:03 Arbeitsplatz_Daniel JobId 178: Fatal error: filed/dir_cmd.c:2641 Bad response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data
2016-12-08 21:00:03 bareos-sd JobId 178: Fatal error: Too many errors trying to mount device "FileStorage-Arbeitsplatz-Daniel" (/var/lib/bareos/storage/arbeitsplatz/daniel).
2016-12-08 21:00:03 bareos-sd JobId 178: Warning: mount.c:248 Open device "FileStorage-Arbeitsplatz-Daniel" (/var/lib/bareos/storage/arbeitsplatz/daniel) Volume "" failed: ERR=Could not open file device "FileStorage-Arbeitsplatz-Daniel" (/var/lib/bareos/storage/arbeitsplatz/daniel). No Volume name given.
2016-12-08 21:30 "same errors"
2016-12-08 22:00:06 bareos-sd JobId 178: Job Arbeitsplatz-Daniel-weekly.2016-12-08_21.00.00_40 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage: "FileStorage-Arbeitsplatz-Daniel" (/var/lib/bareos/storage/arbeitsplatz/daniel)
Pool: Incremental
Media type: File
2016-12-08 22:05:06 bareos-sd JobId 178: Warning: mount.c:248 Open device "FileStorage-Arbeitsplatz-Daniel" (/var/lib/bareos/storage/arbeitsplatz/daniel) Volume "Incremental-0027" failed: ERR=dev.c:661 Could not open: /var/lib/bareos/storage/arbeitsplatz/daniel/Incremental-0027, ERR=Datei oder Verzeichnis nicht gefunden

These errors were shown every half hour until 2016-12-09 23:40:06.
On 2016-12-09 23:46:15 the backup completed without an error.
(The German "ERR=Datei oder Verzeichnis nicht gefunden" in the log above means "file or directory not found".)
Bruno Friedmann
2016-12-12 12:33:17 UTC
Post by Daniel
/var/lib/bareos/storage/arbeitsplatz/daniel/Incremental-0027, ERR=Datei oder
Verzeichnis nicht gefunden
You have to find out what denies bareos-sd (which runs with limited
user rights) access to the file, and why it eventually becomes available.

(Perhaps you will have to set up auditing to see this.)
As you are running Ubuntu, also check whether an apparmor profile is
blocking those actions.
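
A quick sanity check along these lines may also help (a sketch, assuming the storage daemon runs as the bareos user, as it does with the stock Ubuntu packages):

# can the bareos user traverse and write the device directory?
sudo -u bareos ls -ld /var/lib/bareos/storage/arbeitsplatz/daniel
sudo -u bareos touch /var/lib/bareos/storage/arbeitsplatz/daniel/writetest
sudo -u bareos rm /var/lib/bareos/storage/arbeitsplatz/daniel/writetest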
--
Bruno Friedmann
Ioda-Net Sàrl www.ioda-net.ch
Bareos Partner, openSUSE Member, fsfe fellowship
GPG KEY : D5C9B751C4653227
irc: tigerfoot

openSUSE Tumbleweed
Linux 4.8.13-1-default x86_64 GNU/Linux, nvidia: 375.20
Qt: 5.7.0, KDE Frameworks: 5.28.0, Plasma: 5.8.4, kmail2 5.3.3 (QtWebEngine)
Robert N
2016-12-13 09:44:47 UTC
Post by Bruno Friedmann
Post by Daniel
/var/lib/bareos/storage/arbeitsplatz/daniel/Incremental-0027, ERR=Datei oder
Verzeichnis nicht gefunden
You have to find out what denies bareos-sd (which runs with limited
user rights) access to the file, and why it eventually becomes available.
(Perhaps you will have to set up auditing to see this.)
As you are running Ubuntu, also check whether an apparmor profile is
blocking those actions.
Hi,

I have Bareos 15.2.3 on CentOS 6 and am facing the same errors. In the beginning I had interleaved disk volumes, and no jobs were failing with this kind of error.
----Phase 1:
Pool {
Name = DiskDaily
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Storage = DiskStorageIncr
Maximum Volume Bytes = 10G
Maximum Volumes = 600
Next Pool = TapeDaily2
Label Format = "Incr-"
Volume Retention = 1 week
Recycle Oldest Volume = yes
Recycle Pool = ScratchIncr
}
Storage {
Name = DiskStorageIncr
# Do not use "localhost" here
Address = muc1pro-backup-1.adm.financial.com # N.B. Use a fully qualified name here
SDPort = 9103
Password = "***"
Device = DiskDeviceIncr1
Media Type = FileIncr
Autochanger = no
Maximum Concurrent Jobs = 10
}
Device {
Name = DiskDeviceIncr1
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 10
}

I then moved to a 1 job/volume setup, with 5 different devices defined in the SD pointing to the same folder on disk, and added these devices to the storage definition in the director.
----Phase 2:
Pool {
Name = DiskDaily
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Storage = DiskStorageIncr
Maximum Volume Bytes = 10G
Maximum Volumes = 600
Maximum Volume Jobs = 1
Next Pool = TapeDaily2
Label Format = "Incr-"
Volume Retention = 1 week
Recycle Oldest Volume = yes
Recycle Pool = ScratchIncr
}

Storage {
Name = DiskStorageIncr
# Do not use "localhost" here
Address = muc1pro-backup-1.adm.financial.com # N.B. Use a fully qualified name here
SDPort = 9103
Password = "***"
Device = DiskDeviceIncr1
Device = DiskDeviceIncr2
Device = DiskDeviceIncr3
Device = DiskDeviceIncr4
Device = DiskDeviceIncr5
Media Type = FileIncr
Autochanger = no
Maximum Concurrent Jobs = 5
}

Device {
Name = DiskDeviceIncr1
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DiskDeviceIncr2
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DiskDeviceIncr3
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DiskDeviceIncr4
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}
Device {
Name = DiskDeviceIncr5
Media Type = FileIncr
Archive Device = /data/bareos-storage/Incr
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 1
}


At this point there were no failing jobs. Then I added a Scratch Pool:
----Phase 3:
Pool {
Name = DiskDaily
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Storage = DiskStorageIncr
Maximum Volume Bytes = 10G
Maximum Volumes = 600
Maximum Volume Jobs = 1
Next Pool = TapeDaily2
Label Format = "Incr-"
Volume Retention = 1 week
Recycle Oldest Volume = yes
Recycle Pool = ScratchIncr
Scratch Pool = ScratchIncr
}

And 30+ out of 43 jobs were failing, randomly, not always the same ones.

I was thinking that the scratch pool caused the problems. I went ahead and removed the scratch pool, ran "reload" and "update pool", and surprise: in the catalog, the scratchpoolid in the Pool table still pointed to ScratchIncr.
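
One way to check this is a query against the catalog database (a sketch, assuming the standard Bareos catalog schema, where the Pool table carries these columns and 0/NULL means unset):

SELECT Name, RecyclePoolId, ScratchPoolId FROM Pool WHERE Name = 'DiskDaily';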

----Phase 4:
I removed both the recycle pool and the scratch pool, ran "reload" and "update pool", and now both recyclepoolid and scratchpoolid are NULL in the catalog; only one job failed without the recycle pool and scratch pool.

So in my case it is definitely related to multiple devices in the storage definition, but I'm not sure how it is related to the recycle/scratch pool, because I still got 1 failing job. Although that is an improvement from 30+, I cannot say it solved my issue.

13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: User defined maximum volume capacity 10,737,418,240 exceeded on device "DiskDeviceIncr5" (/data/bareos-storage/Incr).
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: End of medium on Volume "Incr-3838" Bytes=10,737,377,479 Blocks=166,440 at 13-Dec-2016 00:30.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Warning: mount.c:247 Open device "DiskDeviceIncr5" (/data/bareos-storage/Incr) Volume "" failed: ERR=Could not open file device "DiskDeviceIncr5" (/data/bareos-storage/Incr). No Volume name given.
13-Dec 00:30 muc1pro-backup-1-sd JobId 7662: Fatal error: Too many errors trying to mount device "DiskDeviceIncr5" (/data/bareos-storage/Incr).
13-Dec 00:30 dg1nc0505 JobId 7662: Error: bsock_tcp.c:422 Write error sending 65536 bytes to client:192.168.210.39:9102: ERR=Broken pipe

If I go back to the interleaved job setup, everything goes well again.
I will give it another chance tonight with the same Phase 4 config; let's see what happens.

I have also enabled debugging in the director and the storage daemon.

So basically, if a recycle pool is configured in the pool, does Bareos automatically add a scratch pool? I can't find anything in the documentation about this...

Thanks
Robert
Daniel
2016-12-14 07:31:40 UTC
Thanks!
After some research I found out what an AppArmor profile is. I think I do not have one affecting Bareos.
***@backuptest:/var/lib/bareos/storage# sudo aa-unconfined
908 /usr/sbin/sshd not confined
962 /usr/sbin/bareos-fd not confined
964 /usr/sbin/nrpe not confined
991 /usr/lib/postgresql/9.5/bin/postgres not confined
1041 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 not confined
1205 /usr/lib/postfix/sbin/master not confined
26265 /usr/sbin/bareos-sd not confined
26291 /usr/sbin/bareos-dir not confined

***@backuptest:/var/lib/bareos/storage# aa-status
apparmor module is loaded.
5 profiles are loaded.
5 profiles are in enforce mode.
/sbin/dhclient
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/tcpdump
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
Robert N
2016-12-14 07:45:24 UTC
Post by Daniel
[...]
Hi,

[...same Phase 1-4 configuration and log excerpts as in my previous message...]

----Phase 4:
I removed both the recycle pool and the scratch pool, ran "reload" and "update pool", and now both recyclepoolid and scratchpoolid are NULL in the catalog; only one job failed without the recycle pool and scratch pool.
After another cycle with the same setup, 0 jobs failed.

Thanks
Daniel
2017-01-03 07:35:26 UTC
I have now done some further analysis.
In my configuration I use several storage devices, one for each client. The purpose of this is to write the individual backups into different folders. With this setup the errors above always occur, and lately they have been increasing. The "brute force method" (rescheduling) is also no longer as reliable as it was under Bareos 15.
If I switch back to a single storage device, the backups run reliably.
I also tried setting Maximum Concurrent Jobs to small values, but that brought no success either.
Alternatively, I could imagine using a single storage device but with individual volume names for the backups. But then how do I deal with recycling and/or deleting backups?
Perhaps it is simply down to my storage configuration? It looks as follows for the individual clients:

Device {
Name = FileStorage-Arbeitsplatz-Daniel
MediaType = File
ArchiveDevice = /var/lib/bareos/storage/arbeitsplatz/daniel # it's a folder and writeable
LabelMedia = yes; # lets Bareos label unlabeled media
Random Access = yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Description = "File device. A connecting Director must have the same Name and MediaType."
}

Storage {
Name = File-Arbeitsplatz-Daniel
Address = backuptest.xxx.com # N.B. use a fully qualified name here (do not use "localhost")
Password = "pwstandshere"
Device = FileStorage-Arbeitsplatz-Daniel
MediaType = File
}

FileSet {
Name = "Arbeitsplatz-Daniel-weekly-Fileset"
Include {
Options {
Signature = MD5
Compression = GZIP6
}
File = "\\<C:/ProgramData/Bareos/backupfiles-weekly.txt"
}
}

Job {
Name = "Arbeitsplatz-Daniel-weekly"
Client = "Arbeitsplatz-Daniel-fd"
JobDefs = "Arbeitsplatz-Daniel-weekly-Job"
MaxFullInterval = 1 months
MaxDiffInterval = 1 weeks
Reschedule On Error = yes
Reschedule Interval = 10 minutes
Reschedule Times = 24
}

JobDefs {
Name = "Arbeitsplatz-Daniel-weekly-Job"
Type = Backup
Level = Incremental
Messages = "Standard"
Storage = "File"
Pool = "Incremental"
FullBackupPool = "Full"
IncrementalBackupPool = "Incremental"
DifferentialBackupPool = "Differential"
FileSet = "Arbeitsplatz-Daniel-weekly-Fileset"
Schedule = "WeeklyCycle"
WriteBootstrap = "/var/lib/bareos/%c.bsr"
}

Pool {
Name = Incremental
PoolType = Backup
#Recycle = yes # Bareos can automatically recycle Volumes
Recycle = no
AutoPrune = yes # prune expired volumes
Volume Retention = 1 days # how long should the incremental backups be kept?
#LabelFormat = Incremental- # volumes would be labeled "Incremental-<volume-id>"
LabelFormat = "$Job$Level$Year$Month$Day$Hour$Min$Second"
# I chose this naming to get discrete backups with just one storage device.
# But how do I get these volumes deleted after the retention time?
# Recycling obviously does not work with names like this...
Maximum Volume Jobs = 1
}
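
One option for reclaiming the space of such uniquely named volumes (a sketch, assuming Bareos's "Action On Purge" pool directive and the bconsole purge command; check the documentation before relying on this): let AutoPrune purge the volumes after the retention time, mark them for truncation, and truncate them periodically:

Pool {
Name = Incremental
PoolType = Backup
Recycle = no
AutoPrune = yes # purge volumes once the retention time has passed
Volume Retention = 1 days
LabelFormat = "$Job$Level$Year$Month$Day$Hour$Min$Second"
Maximum Volume Jobs = 1
Action On Purge = Truncate # allow purged volumes to be truncated
}

Then, periodically in bconsole:

purge volume action=truncate pool=Incremental storage=File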