Discussion:
[bareos-users] bareos and xtrabackup
Denis Barbazza
2016-11-04 10:43:09 UTC
Permalink
Hello,
I'm trying to set up a new backup server with bareos 16.2.4 and a client
with the same version and the percona-xtrabackup plugin, but I ran into trouble :(

I'm running Debian Stretch and I followed the guide here:
https://github.com/bareos/bareos-contrib/tree/master/fd-plugins/bareos_percona
My Percona XtraBackup is version 2.3.5 (I also tried 2.2 with the same
result; with 2.4 I received a signal 11, and I read in a forum that they
suggest downgrading).

The problem is that if I manually run the backup command, everything works
OK:
'xtrabackup --backup --datadir=/var/lib/mysql/ --stream=xbstream
--extra-lsndir=/tmp/tmpYmkHJn '
It creates the file with the dump.
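(Roughly, when run by hand the xbstream output has to be redirected somewhere;
the output path below is only an example:
xtrabackup --backup --datadir=/var/lib/mysql/ --stream=xbstream \
  --extra-lsndir=/tmp/tmpYmkHJn > /tmp/mysql-dump.xbstream )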

If I run the job through bareos-director it returns OK, but it only backs up
a file called:
_percona/xbstream.000000XX (where XX is a number that increments each time).

The file has NaN size, and if I try to restore it, the restore fails with:
Error: python-fd: No lsn information found in restore object for file
/tmp/bareos-restores//_percona/xbstream.0000000020 from job 20

Please tell me what I can try; if you need more information, simply ask
and I'll provide everything.


thank you all
--
Denis
Maik Aussendorf
2016-11-07 15:18:11 UTC
Permalink
Hello Denis,
Post by Denis Barbazza
Hello,
I'm trying to setup a new backup server with bareos 16.2.4 and a
client with same version and percona-xtrabackup plugin, but I run into
trouble :(
https://github.com/bareos/bareos-contrib/tree/master/fd-plugins/bareos_percona
My percona xtrabackup is version 2.3.5 (I tried also with 2.2 with
same result, with 2.4 I received a 11 signal (and I read in a forum
that they suggest to downgrade).
Version 2.3.5 runs on my development machine and should work, while earlier
versions likely do not.
Post by Denis Barbazza
The problem is that if i manually run the backup command everything
'xtrabackup --backup --datadir=/var/lib/mysql/ --stream=xbstream
--extra-lsndir=/tmp/tmpYmkHJn '
It creates the file with dump.
If i run the job through bareos-director it return ok, but it only
_percona/xbstream.000000XX (where XX is a number that increments each time).
That is correct: the xbstream file contains the output of the
xtrabackup command as a dump in xbstream format.
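Such a dump can also be unpacked by hand with the xbstream tool that ships
with Percona XtraBackup, roughly like this (the paths and file name are just
examples):
mkdir -p /tmp/mysql-restore
xbstream -x -C /tmp/mysql-restore < xbstream.0000000XX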
Post by Denis Barbazza
Error: python-fd: No lsn information found in restore object for file
/tmp/bareos-restores//_percona/xbstream.0000000020 from job 20
This is not good.

Please do the following:
- Run xtrabackup manually as above and send the file
xtrabackup_checkpoints from your temp directory (/tmp/tmpYmkHJn).
- Run the FD in debug mode and send the output after running a backup and a
restore job.

To activate debug on your client-fd, do something like
*setdebug client=centos-fd trace=on level=200
...
you should get output like:
--
Connecting to Client centos-fd at centos:9102
2000 OK setdebug=150 trace=0 hangup=0 timestamp=0
tracefile=/var/lib/bareos/centos-fd.trace

bconsole tells you where to look for the tracefile (tracefile=/...).
Attach that tracefile (after removing any sensitive information).
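When you are done, debugging can be switched off again with something along
the lines of:
*setdebug client=centos-fd level=0 trace=off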

Regards
Maik
--
With kind regards // Mit freundlichen Grüßen
--
Maik Außendorf ***@bareos.com
Bareos GmbH & Co. KG Phone: +49221630693-93
http://www.bareos.com Fax: +49221630693-10
** Visit us at Paris Open Source Summit 2016 http://opensourcesummit.paris **

Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
Komplementär: Bareos Verwaltungs-GmbH
Geschäftsführer: Stephan Dühr, M. Außendorf,
J. Steffens, P. Storz
Denis Barbazza
2016-11-08 09:57:48 UTC
Permalink
Hello Maik, thank you for your reply.

I found the problem, and it was so simple, damn :-(
The client could not reach the storage daemon; I discovered it because when I
ran a plain file backup it simply told me so :(

But....

Now I was able to make a full backup; the size is approximately 1GB (which
could be right).
The problem is that if I try to run an incremental backup I get this
error:



08-Nov 10:50 bareos-dir JobId 50: Start Backup JobId 50,
Job=dragon-mysql.2016-11-08_10.50.55_55
08-Nov 10:50 bareos-dir JobId 50: Using Device "FileStorage" to write.
08-Nov 10:50 dragon-fd JobId 50: python-fd: Got to_lsn 10998506785 from
restore object of job 49
08-Nov 10:50 bareos-sd JobId 50: Volume "Incremental-0100" previously
written, moving to end of data.
08-Nov 10:50 bareos-sd JobId 50: Ready to append to end of Volume
"Incremental-0100" size=496585614
08-Nov 10:50 dragon-fd JobId 50: Fatal error: python-fd: Traceback (most
recent call last):
File "/usr/lib/bareos/plugins/BareosFdWrapper.py", line 38, in
handle_plugin_event
return bareos_fd_plugin_object.handle_plugin_event(context, event)
File "/usr/lib/bareos/plugins/BareosFdPluginBaseclass.py", line 223, in
handle_plugin_event
return self.start_backup_job(context)
File "/usr/lib/bareos/plugins/BareosFdPercona.py", line 166, in
start_backup_job
last_lsn = int(os.popen(get_lsn_command).read())
ValueError: invalid literal for int() with base 10: ''

08-Nov 10:50 dragon-fd JobId 50: Fatal error: fd_plugins.c:654 Command
plugin
"python:module_path=/usr/lib/bareos/plugins:module_name=bareos-fd-percona"
requested, but is not loaded.
08-Nov 10:50 bareos-sd JobId 50: Elapsed time=00:00:01, Transfer rate=0
Bytes/second
08-Nov 10:50 bareos-dir JobId 50: Error: Bareos bareos-dir 16.2.4 (01Jul16):
Build OS: x86_64-pc-linux-gnu debian Debian GNU/Linux 8.0
(jessie)
JobId: 50
Job: dragon-mysql.2016-11-08_10.50.55_55
Backup Level: Incremental, since=2016-11-08 10:44:49
Client: "dragon-fd" 16.2.4 (01Jul16)
x86_64-pc-linux-gnu,debian,Debian GNU/Linux 8.0 (jessie),Debian_8.0,x86_64
FileSet: "mysql" 2016-11-04 11:14:29
Pool: "Incremental" (From command line)
Catalog: "MyCatalog" (From Client resource)
Storage: "File" (From Job resource)
Scheduled time: 08-Nov-2016 10:50:55
Start time: 08-Nov-2016 10:50:57
End time: 08-Nov-2016 10:50:58
Elapsed time: 1 sec
Priority: 10
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s):
Volume Session Id: 48
Volume Session Time: 1478171275
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 1
SD Errors: 0
FD termination status: Fatal Error
SD termination status: Canceled
Termination: *** Backup Error ***

It seems that it can't recognize the last LSN.

Another strange thing is that the full MySQL backup shows 2 files and 1GB,
but if I click on "show files" I can only see _percona/xbstream.00000XX.
Is this normal? I think I should also have xtrabackup_checkpoints... or am I
wrong?

thank you
--
Denis
Bruno Friedmann
2016-11-08 10:40:58 UTC
Permalink
Post by Denis Barbazza
another strange thing is that the full mysql backup has 2 files and 1GB,
but if I click on "show files" I can see only _percona/xbstream.00000XX
is this normal? I should have also xtrabackup_checkpoints I think... or I'm
wrong?
thank you
--
Denis
I guess you're not yet used to Bareos; 2 files in a backup could also be
a directory and a file ;-)

The _percona directory seems to be counted as 1.
--
Bruno Friedmann
Ioda-Net Sàrl www.ioda-net.ch
Bareos Partner, openSUSE Member, fsfe fellowship
GPG KEY : D5C9B751C4653227
irc: tigerfoot

openSUSE Tumbleweed
Linux 4.8.6-2-default x86_64 GNU/Linux, nvidia: 367.57
Qt: 5.7.0, KDE Frameworks: 5.27.0, Plasma: 5.8.3, kmail2 5.3.0 (QtWebEngine)
Maik Aussendorf
2016-11-08 10:55:51 UTC
Permalink
Hello Denis,
Post by Denis Barbazza
Hello Maik, thank you for your reply.
I found the problem, it was so simple, damn :-(
the client cannot reach the storage daemon, I discovered it because
when I run a file backup it simply told me :(
Good.
Post by Denis Barbazza
But....
Now I was able to make a full backup, size is approsimately 1GB (it
could be real).
The problem is that if I try to run an incremental backup I obtain
this error.
it seems that it can't recognize last lsn.
Yes, that seems to be the problem.

Please run:
echo 'SHOW ENGINE INNODB STATUS' | mysql -r
It should contain a line like
Log sequence number 14882332
Can you post that line (or the whole result)?
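As a rough illustration of what the plugin needs from that output (this is
just a sketch of the idea, not necessarily the plugin's exact command), the
LSN can be extracted on the shell like this:
echo 'SHOW ENGINE INNODB STATUS' | mysql -r | grep 'Log sequence number' | awk '{print $4}'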
Post by Denis Barbazza
another strange thing is that the full mysql backup has 2 files and
1GB, but if I click on "show files" I can see only
_percona/xbstream.00000XX
is this normal? I should have also xtrabackup_checkpoints I think...
or I'm wrong?
This is normal. The LSN is stored in a so-called 'restore object', a kind
of virtual file; that's why Bareos reports 2 files (1 'virtual' for the
restore object, 1 for the xbstream).

Regards
Maik
Denis Barbazza
2016-11-08 13:05:24 UTC
Permalink
Post by Denis Barbazza
it seems that it can't recognize last lsn.
Yes, that seems to be the problem.
echo 'SHOW ENGINE INNODB STATUS' | mysql -r
It should contain a line like
Log sequence number 14882332
Can you post that line (or the whole result)?
Here is the whole result:

# echo 'SHOW ENGINE INNODB STATUS' | mysql -r -p
Enter password:
Type Name Status
InnoDB
=====================================
2016-11-08 14:02:51 7fa095cef700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 51 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 127728 srv_active, 0 srv_shutdown, 60676 srv_idle
srv_master_thread log flush and writes: 188366
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 150086
OS WAIT ARRAY INFO: signal count 812322
Mutex spin waits 1175854, rounds 3675604, OS waits 89595
RW-shared spins 255778, rounds 1749256, OS waits 44144
RW-excl spins 124857, rounds 1522886, OS waits 15597
Spin rounds per wait: 3.13 mutex, 6.84 RW-shared, 12.20 RW-excl
------------
TRANSACTIONS
------------
Trx id counter 4253963
Purge done for trx's n:o < 4253963 undo n:o < 0 state: running but idle
History list length 2712
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 209451, OS thread handle 0x7fa095cef700, query id 29836067
localhost root init
SHOW ENGINE INNODB STATUS
---TRANSACTION 4253584, not started
MySQL thread id 180416, OS thread handle 0x7fa095b69700, query id 29832117
localhost 127.0.0.1 vlogger_user cleaning up
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer
thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (read thread)
I/O thread 7 state: waiting for completed aio requests (read thread)
I/O thread 8 state: waiting for completed aio requests (read thread)
I/O thread 9 state: waiting for completed aio requests (read thread)
I/O thread 10 state: waiting for completed aio requests (read thread)
I/O thread 11 state: waiting for completed aio requests (read thread)
I/O thread 12 state: waiting for completed aio requests (read thread)
I/O thread 13 state: waiting for completed aio requests (read thread)
I/O thread 14 state: waiting for completed aio requests (read thread)
I/O thread 15 state: waiting for completed aio requests (read thread)
I/O thread 16 state: waiting for completed aio requests (read thread)
I/O thread 17 state: waiting for completed aio requests (read thread)
I/O thread 18 state: waiting for completed aio requests (read thread)
I/O thread 19 state: waiting for completed aio requests (read thread)
I/O thread 20 state: waiting for completed aio requests (read thread)
I/O thread 21 state: waiting for completed aio requests (read thread)
I/O thread 22 state: waiting for completed aio requests (read thread)
I/O thread 23 state: waiting for completed aio requests (read thread)
I/O thread 24 state: waiting for completed aio requests (read thread)
I/O thread 25 state: waiting for completed aio requests (read thread)
I/O thread 26 state: waiting for completed aio requests (read thread)
I/O thread 27 state: waiting for completed aio requests (read thread)
I/O thread 28 state: waiting for completed aio requests (read thread)
I/O thread 29 state: waiting for completed aio requests (read thread)
I/O thread 30 state: waiting for completed aio requests (read thread)
I/O thread 31 state: waiting for completed aio requests (read thread)
I/O thread 32 state: waiting for completed aio requests (read thread)
I/O thread 33 state: waiting for completed aio requests (read thread)
I/O thread 34 state: waiting for completed aio requests (read thread)
I/O thread 35 state: waiting for completed aio requests (read thread)
I/O thread 36 state: waiting for completed aio requests (read thread)
I/O thread 37 state: waiting for completed aio requests (read thread)
I/O thread 38 state: waiting for completed aio requests (read thread)
I/O thread 39 state: waiting for completed aio requests (read thread)
I/O thread 40 state: waiting for completed aio requests (read thread)
I/O thread 41 state: waiting for completed aio requests (read thread)
I/O thread 42 state: waiting for completed aio requests (read thread)
I/O thread 43 state: waiting for completed aio requests (read thread)
I/O thread 44 state: waiting for completed aio requests (read thread)
I/O thread 45 state: waiting for completed aio requests (read thread)
I/O thread 46 state: waiting for completed aio requests (read thread)
I/O thread 47 state: waiting for completed aio requests (read thread)
I/O thread 48 state: waiting for completed aio requests (read thread)
I/O thread 49 state: waiting for completed aio requests (read thread)
I/O thread 50 state: waiting for completed aio requests (read thread)
I/O thread 51 state: waiting for completed aio requests (read thread)
I/O thread 52 state: waiting for completed aio requests (read thread)
I/O thread 53 state: waiting for completed aio requests (read thread)
I/O thread 54 state: waiting for completed aio requests (read thread)
I/O thread 55 state: waiting for completed aio requests (read thread)
I/O thread 56 state: waiting for completed aio requests (read thread)
I/O thread 57 state: waiting for completed aio requests (read thread)
I/O thread 58 state: waiting for completed aio requests (read thread)
I/O thread 59 state: waiting for completed aio requests (read thread)
I/O thread 60 state: waiting for completed aio requests (read thread)
I/O thread 61 state: waiting for completed aio requests (read thread)
I/O thread 62 state: waiting for completed aio requests (read thread)
I/O thread 63 state: waiting for completed aio requests (read thread)
I/O thread 64 state: waiting for completed aio requests (read thread)
I/O thread 65 state: waiting for completed aio requests (read thread)
I/O thread 66 state: waiting for completed aio requests (write thread)
I/O thread 67 state: waiting for completed aio requests (write thread)
I/O thread 68 state: waiting for completed aio requests (write thread)
I/O thread 69 state: waiting for completed aio requests (write thread)
I/O thread 70 state: waiting for completed aio requests (write thread)
I/O thread 71 state: waiting for completed aio requests (write thread)
I/O thread 72 state: waiting for completed aio requests (write thread)
I/O thread 73 state: waiting for completed aio requests (write thread)
I/O thread 74 state: waiting for completed aio requests (write thread)
I/O thread 75 state: waiting for completed aio requests (write thread)
I/O thread 76 state: waiting for completed aio requests (write thread)
I/O thread 77 state: waiting for completed aio requests (write thread)
I/O thread 78 state: waiting for completed aio requests (write thread)
I/O thread 79 state: waiting for completed aio requests (write thread)
I/O thread 80 state: waiting for completed aio requests (write thread)
I/O thread 81 state: waiting for completed aio requests (write thread)
I/O thread 82 state: waiting for completed aio requests (write thread)
I/O thread 83 state: waiting for completed aio requests (write thread)
I/O thread 84 state: waiting for completed aio requests (write thread)
I/O thread 85 state: waiting for completed aio requests (write thread)
I/O thread 86 state: waiting for completed aio requests (write thread)
I/O thread 87 state: waiting for completed aio requests (write thread)
I/O thread 88 state: waiting for completed aio requests (write thread)
I/O thread 89 state: waiting for completed aio requests (write thread)
I/O thread 90 state: waiting for completed aio requests (write thread)
I/O thread 91 state: waiting for completed aio requests (write thread)
I/O thread 92 state: waiting for completed aio requests (write thread)
I/O thread 93 state: waiting for completed aio requests (write thread)
I/O thread 94 state: waiting for completed aio requests (write thread)
I/O thread 95 state: waiting for completed aio requests (write thread)
I/O thread 96 state: waiting for completed aio requests (write thread)
I/O thread 97 state: waiting for completed aio requests (write thread)
I/O thread 98 state: waiting for completed aio requests (write thread)
I/O thread 99 state: waiting for completed aio requests (write thread)
I/O thread 100 state: waiting for completed aio requests (write thread)
I/O thread 101 state: waiting for completed aio requests (write thread)
I/O thread 102 state: waiting for completed aio requests (write thread)
I/O thread 103 state: waiting for completed aio requests (write thread)
I/O thread 104 state: waiting for completed aio requests (write thread)
I/O thread 105 state: waiting for completed aio requests (write thread)
I/O thread 106 state: waiting for completed aio requests (write thread)
I/O thread 107 state: waiting for completed aio requests (write thread)
I/O thread 108 state: waiting for completed aio requests (write thread)
I/O thread 109 state: waiting for completed aio requests (write thread)
I/O thread 110 state: waiting for completed aio requests (write thread)
I/O thread 111 state: waiting for completed aio requests (write thread)
I/O thread 112 state: waiting for completed aio requests (write thread)
I/O thread 113 state: waiting for completed aio requests (write thread)
I/O thread 114 state: waiting for completed aio requests (write thread)
I/O thread 115 state: waiting for completed aio requests (write thread)
I/O thread 116 state: waiting for completed aio requests (write thread)
I/O thread 117 state: waiting for completed aio requests (write thread)
I/O thread 118 state: waiting for completed aio requests (write thread)
I/O thread 119 state: waiting for completed aio requests (write thread)
I/O thread 120 state: waiting for completed aio requests (write thread)
I/O thread 121 state: waiting for completed aio requests (write thread)
I/O thread 122 state: waiting for completed aio requests (write thread)
I/O thread 123 state: waiting for completed aio requests (write thread)
I/O thread 124 state: waiting for completed aio requests (write thread)
I/O thread 125 state: waiting for completed aio requests (write thread)
I/O thread 126 state: waiting for completed aio requests (write thread)
I/O thread 127 state: waiting for completed aio requests (write thread)
I/O thread 128 state: waiting for completed aio requests (write thread)
I/O thread 129 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ,
aio writes: 0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
457913 OS file reads, 2216642 OS file writes, 1466063 OS fsyncs
0.02 reads/s, 16384 avg bytes/read, 12.27 writes/s, 6.31 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 3090, seg size 3092, 13035 merges
merged operations:
insert 18171, delete mark 921631, delete 40605
discarded operations:
insert 0, delete mark 0, delete 0
Hash table size 4730647, node heap has 9184 buffer(s)
4624.75 hash searches/s, 584.15 non-hash searches/s
---
LOG
---
Log sequence number 11056494132
Log flushed up to 11056494132
Pages flushed up to 11056493325
Last checkpoint at 11056485039
0 pending log writes, 0 pending chkp writes
601644 log i/o's done, 2.45 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 2197815296; in additional pool allocated 0
Dictionary memory allocated 6918771
Buffer pool size 131064
Free buffers 8195
Database pages 113685
Old database pages 41802
Modified db pages 7
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 243627, not young 20406340
0.14 youngs/s, 0.04 non-youngs/s
Pages read 457035, created 21037, written 1319875
0.02 reads/s, 0.02 creates/s, 8.96 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 113685, unzip_LRU len: 0
I/O sum[3664]:cur[0], unzip sum[0]:cur[0]
----------------------
INDIVIDUAL BUFFER POOL INFO
----------------------
---BUFFER POOL 0
Buffer pool size 16383
Free buffers 1025
Database pages 14208
Old database pages 5224
Modified db pages 0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 30778, not young 2261664
0.04 youngs/s, 0.00 non-youngs/s
Pages read 57167, created 2541, written 212484
0.00 reads/s, 0.00 creates/s, 1.02 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 14208, unzip_LRU len: 0
I/O sum[458]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 1
Buffer pool size 16383
Free buffers 1024
Database pages 14216
Old database pages 5227
Modified db pages 1
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 31387, not young 2452762
0.00 youngs/s, 0.00 non-youngs/s
Pages read 57148, created 2561, written 87727
0.00 reads/s, 0.00 creates/s, 0.67 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 14216, unzip_LRU len: 0
I/O sum[458]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 2
Buffer pool size 16383
Free buffers 1025
Database pages 14204
Old database pages 5223
Modified db pages 2
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 29888, not young 3072749
0.02 youngs/s, 0.00 non-youngs/s
Pages read 56452, created 2660, written 76920
0.00 reads/s, 0.00 creates/s, 0.57 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 14204, unzip_LRU len: 0
I/O sum[458]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 3
Buffer pool size 16383
Free buffers 1025
Database pages 14198
Old database pages 5221
Modified db pages 0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 29823, not young 1692755
0.02 youngs/s, 0.00 non-youngs/s
Pages read 58628, created 2673, written 217194
0.00 reads/s, 0.00 creates/s, 1.20 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 14198, unzip_LRU len: 0
I/O sum[458]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 4
Buffer pool size 16383
Free buffers 1024
Database pages 14214
Old database pages 5226
Modified db pages 2
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 29923, not young 2647667
0.00 youngs/s, 0.04 non-youngs/s
Pages read 55261, created 2772, written 202153
0.02 reads/s, 0.00 creates/s, 1.57 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 14214, unzip_LRU len: 0
I/O sum[458]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 5
Buffer pool size 16383
Free buffers 1024
Database pages 14220
Old database pages 5229
Modified db pages 0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 29826, not young 2812608
0.00 youngs/s, 0.00 non-youngs/s
Pages read 56997, created 2748, written 155710
0.00 reads/s, 0.00 creates/s, 0.92 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 14220, unzip_LRU len: 0
I/O sum[458]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 6
Buffer pool size 16383
Free buffers 1024
Database pages 14202
Old database pages 5222
Modified db pages 1
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 30469, not young 2457755
0.04 youngs/s, 0.00 non-youngs/s
Pages read 57091, created 2704, written 149810
0.00 reads/s, 0.00 creates/s, 1.59 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 14202, unzip_LRU len: 0
I/O sum[458]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 7
Buffer pool size 16383
Free buffers 1024
Database pages 14223
Old database pages 5230
Modified db pages 1
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 31533, not young 3008380
0.02 youngs/s, 0.00 non-youngs/s
Pages read 58291, created 2378, written 217877
0.00 reads/s, 0.02 creates/s, 1.43 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead
0.00/s
LRU len: 14223, unzip_LRU len: 0
I/O sum[458]:cur[0], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
0 read views open inside InnoDB
Main thread process no. 9548, id 140327853520640, state: sleeping
Number of rows inserted 1913027, updated 1810769, deleted 1815826, read
5414936409
4.06 inserts/s, 1.67 updates/s, 0.12 deletes/s, 8307.99 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================
--
Denis
s***@24saas.kz
2016-11-10 06:35:17 UTC
Permalink
Hi all,
I have the same trouble: I can't restore a full backup.
"Error: No lsn information found in restore object for file /tmp/bareos-restores//_percona/xbstream.0000001874 from job 1874 "
My Bareos version is 15.2.2, xtrabackup 2.4.4, running on Ubuntu 14.04. How can I troubleshoot and resolve this problem? Please help me. Thanks
Maik Aussendorf
2016-11-10 07:42:04 UTC
Permalink
Hi,

Both problem reports here refer to Debian / Ubuntu, while the
development and (customer) production environment was RedHat/CentOS 7.
So I have to reproduce the behavior in a Debian / Ubuntu environment,
which will take some time.
Post by s***@24saas.kz
Hi all
I have same trouble, can't restore full backup.
"Error: No lsn information found in restore object for file /tmp/bareos-restores//_percona/xbstream.0000001874 from job 1874 "
my bareos version 15.2.2, xtrabackup 2.4.4, launched in Ubuntu 14.04. How I can troubleshooting this problem and resolve it? Pls help me. Thanks
Denis Barbazza
2016-11-11 18:37:09 UTC
Permalink
Hello Maik,
if you need any tests or more detailed debug output, let me know.
I will be happy to help you.

thank you for your work
--
Denis
Denis Barbazza
2016-11-14 09:50:53 UTC
Permalink
Hello Maik,
Another update: I've tried the same setup on Debian Jessie with
percona-xtrabackup-22 and it works, both incremental and full backup.
I've also tried a restore and it restores all the files.
So now it seems to be an incompatibility with the Python or MySQL version; in
my setup I have this:
jessie:
mysqld Ver 5.6.34-log
Python 2.7.9
stretch:
mysqld Ver 5.6.30-1-log
Python 2.7.12+

This is the error when I run an incremental backup on Stretch:
08-Nov 10:50 dragon-fd JobId 50: Fatal error: python-fd: Traceback (most
recent call last):
File "/usr/lib/bareos/plugins/BareosFdWrapper.py", line 38, in
handle_plugin_event
return bareos_fd_plugin_object.handle_plugin_event(context, event)
File "/usr/lib/bareos/plugins/BareosFdPluginBaseclass.py", line 223, in
handle_plugin_event
return self.start_backup_job(context)
File "/usr/lib/bareos/plugins/BareosFdPercona.py", line 166, in
start_backup_job
last_lsn = int(os.popen(get_lsn_command).read())
ValueError: invalid literal for int() with base 10: ''


If there is any kind of test that I can do, let me know.

thank you for your work
--
Denis
Denis Barbazza
2016-11-14 10:01:47 UTC
Permalink
OK :( I found it, my mistake.
In .my.cnf I was missing the section for the mysql command.
Maybe you can add a test in BareosFdPercona to check whether the mysql command
can log in correctly without error.

Hope this thread can be useful for someone else in the future ;-)
--
Denis
Maik Aussendorf
2016-11-14 10:36:08 UTC
Permalink
Hi Denis,

thanks for the clarification!

You are right: the plugin needs better error handling and
error messages. I will keep this in mind for the next release.
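A minimal sketch of what such a check could look like (a hypothetical helper,
not the plugin's actual code), run before the LSN is parsed:

import subprocess

def check_mysql_login(mysql_cmd="mysql -r"):
    # mysql_cmd is an assumed placeholder for whatever mysql command line
    # the plugin already builds. Run a trivial statement; a non-zero exit
    # code or empty output means the client cannot log in (e.g. missing
    # [mysql] credentials in .my.cnf), so fail with a clear message instead
    # of the later ValueError from int('').
    proc = subprocess.Popen(mysql_cmd, shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate(b"SELECT 1;")
    if proc.returncode != 0 or not out.strip():
        raise RuntimeError("mysql login check failed: %s"
                           % err.decode("utf-8", "replace"))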

Regards

Maik
Dan
2016-11-28 17:20:54 UTC
Permalink
Denis - I'm able to back up successfully, both full and incremental. I am also getting the "Error: No lsn information found in restore object for file /tmp/bareos-restores/..." error on restore. I can actually restore successfully if I individually restore the full and each incremental and then follow the prepare procedure in xtrabackup. Can you post your .my.cnf file (obviously with the password hidden)? I don't know what you meant by "in .my.cnf i wass missing the part regarding mysql command".

Thanks,
Denis Barbazza
2016-12-01 19:44:18 UTC
Permalink
Hello Dan,
sorry for the delay.
I added the configuration for a user with privileges to write to the DB (in
fact root), like this:
[mysql]
user=root
password=lamiapassworddiroot

Nothing else.
--
Denis
u***@gmail.com
2017-06-21 13:38:18 UTC
Permalink
Post by Dan
Denis -I'm able to backup successfully, both full and incremental. I am also getting the "Error: No lsn information found in restore object for file /tmp/bareos-restores/..." error on restore. I can actually restore successfully if I individually retire the full and each incremental and then follow the prepare procedure in xtrabackup. Can you post your .my.cnf file (obviously with the password hidden)? I don't know what you meant by "in .my.cnf i wass missing the part regarding mysql command".
Thanks,
Hi Dan,

I seem to have run into the same problem as you.
Did you solve the problem eventually?
What was it?

Thanks,
Leon
Dan Cushing
2017-06-21 15:18:02 UTC
Permalink
Leon -

I did not resolve this problem. I honestly decided not to spend any more time on it. As I noted in my post, the workaround is to restore each of the backups (full and eat incremental since full) individually. The Persona process doesn’t change, so the only added effort is in doing multiple retires from console of the UI. I don’t expect to have to restore very often and I do a full backup weekly. So worst case for me in the event of a system failure is to run 7 restore commands from Bareos. A bit of a pain, but the critical part is that it works.

Dan
Dan Cushing
2017-06-22 10:12:10 UTC
Permalink
Hmmm. Dang autocorrect.

I did not resolve this problem. I honestly decided not to spend any more time on it. As I noted in my post, the workaround is to restore each of the backups (full and each incremental since full) individually. The Percona process doesn’t change, so the only added effort is in doing multiple restores from bconsole or the UI. I don’t expect to have to restore very often and I do a full backup weekly. So worst case for me in the event of a system failure is to run 7 restore commands from Bareos. A bit of a pain, but the critical part is that it works.
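For anyone following along, the per-backup restore-plus-prepare sequence with
xtrabackup 2.4 looks roughly like this (paths and file names are only
examples; this is the standard Percona procedure, nothing Bareos-specific):

# unpack the full backup restored by Bareos
mkdir -p /restore/full
xbstream -x -C /restore/full < xbstream.0000000XX
xtrabackup --prepare --apply-log-only --target-dir=/restore/full

# unpack and apply each incremental on top of the full, oldest first
mkdir -p /restore/inc1
xbstream -x -C /restore/inc1 < xbstream.0000000YY
xtrabackup --prepare --apply-log-only --target-dir=/restore/full \
  --incremental-dir=/restore/inc1

# final prepare (without --apply-log-only) before copying back the datadir
xtrabackup --prepare --target-dir=/restore/full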

Dan
Post by Dan Cushing
Leon -
I did not resolve this problem. I honestly decided not to spend any more time on it. As I noted in my post, the workaround is to restore each of the backups (full and eat incremental since full) individually. The Persona process doesn’t change, so the only added effort is in doing multiple retires from console of the UI. I don’t expect to have to restore very often and I do a full backup weekly. So worst case for me in the event of a system failure is to run 7 restore commands from Bareos. A bit of a pain, but the critical part is that it works.
Dan
b***@gmail.com
2018-06-26 08:14:12 UTC
Permalink
Hi All,

I have a similar strange problem with the plugin: I can only do full backups; an incremental backup fails with this error:

26-Jun 10:02 bareos-dir JobId 286: Using Device "FileStorage" to write.
26-Jun 10:02 sql03.local JobId 286: Fatal error: python-fd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 58, in restore_object_data
return bareos_fd_plugin_object.restore_object_data(context, ROP)
File "/usr/lib64/bareos/plugins/BareosFdPercona.py", line 423, in restore_object_data
self.rop_data[ROP.jobid] = json.loads(str(self.row_rop_raw))
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
File "/usr/lib64/python2.7/json/decoder.py", line 38, in errmsg
lineno, colno = linecol(doc, pos)
TypeError: 'NoneType' object is not callable

26-Jun 10:02 sql03.local JobId 286: Fatal error: Failed to authenticate Storage daemon.
26-Jun 10:02 bareos-dir JobId 286: Fatal error: Bad response to Storage command: wanted 2000 OK storage
, got 2902 Bad storage

I tried with Debian 9.4 + MariaDB 10.2 and CentOS 7.5 + Percona Server 5.7.

I created a ticket as well (https://bugs.bareos.org/view.php?id=950).

Is there anybody who has done a successful incremental backup with this plugin?
Dan
2018-06-26 09:30:24 UTC
Permalink
I've been running a weekly full and daily incrementals for 2+ years now, through updates of both MariaDB and Percona, and never had a problem.

xtrabackup version 2.4.11 based on MySQL server 5.7.19 Linux (x86_64) (revision id: b4e0db5)
MariaDB 10.1.33
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-862.3.2.el7.x86_64
Architecture: x86-64

FileSet {
Name = "MySQL"
Description = "Backup MySQL database using Percona xtrabackup."
Include {
Options {
Signature = MD5 # calculate md5 checksum per file
compression = GZIP
}
Plugin = "python:module_path=/usr/lib64/bareos/plugins:module_name=bareos-fd
-percona:mycnf=/etc/bareos/bareos-fd.d/my.cnf"
}
}

Dan
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bareos-users+***@googlegroups.com.
To post to this group, send email to bareos-***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Zoltán Beck
2018-06-26 10:02:00 UTC
Permalink
Hello Dan,

Can you please share me the client and job definition?

I checked one more time:

[***@sql03 ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

[***@sql03 ~]# xtrabackup --version
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)

[***@sql03 ~]# mysqld --version
mysqld Ver 5.7.22-22 for Linux on x86_64 (Percona Server (GPL), Release 22, Revision f62d93c)


FileSet {
Name = "MariaDB"
Description = "Backup MariaDB"
Include {
Options {
Signature = MD5 # calculate md5 checksum per file
}
Plugin = "python:module_path=/usr/lib64/bareos/plugins:module_name=bareos-fd-percona:mycnf=/etc/mysql/backup.cnf"
}
}

[***@sql03 ~]# cat /etc/mysql/backup.cnf
[client]
user=root
password=...

[mysql]
user=root
password=...


JobDefs {
Name = "DefaultMariaDBJob"
Type = Backup
Level = Incremental
Client = backup.local
FileSet = "MariaDB"
Schedule = "Weekly"
Storage = File
Messages = Standard
Pool = Incremental
Priority = 10
Write Bootstrap = "/var/lib/bareos/storage/%c.bsr"
Full Backup Pool = Full
Incremental Backup Pool = Incremental
}


Job {
Name = "sql03.local-MariaDB"
JobDefs = "DefaultJob"
Client = "sql03.local"
FileSet = "MariaDB"
}


If I run the Incremental backup, then I get this error:

26-Jun 11:54 sql03.local JobId 287: Fatal error: python-fd: Traceback (most recent call last):
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 58, in restore_object_data
return bareos_fd_plugin_object.restore_object_data(context, ROP)
File "/usr/lib64/bareos/plugins/BareosFdPercona.py", line 423, in restore_object_data
self.rop_data[ROP.jobid] = json.loads(str(self.row_rop_raw))
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
File "/usr/lib64/python2.7/json/decoder.py", line 38, in errmsg
lineno, colno = linecol(doc, pos)
TypeError: 'NoneType' object is not callable

26-Jun 11:54 sql03.local JobId 287: Fatal error: Failed to authenticate Storage daemon.
26-Jun 11:54 bareos-dir JobId 287: Fatal error: Bad response to Storage command: wanted 2000 OK storage
, got 2902 Bad storage

bzg
Post by Dan
I've been running a weekly full and daily incrementals for 2+ years now, through updates of both MariaDB and Persona, and never a problem.
xtrabackup version 2.4.11 based on MySQL server 5.7.19 Linux (x86_64) (revision id: b4e0db5)
MariaDB 10.1.33
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-862.3.2.el7.x86_64
Architecture: x86-64
FileSet {
Name = "MySQL"
Description = "Backup MySQL database using Percona xtrabackup."
Include {
Options {
Signature = MD5 # calculate md5 checksum per file
compression = GZIP
}
Plugin = "python:module_path=/usr/lib64/bareos/plugins:module_name=bareos-fd
-percona:mycnf=/etc/bareos/bareos-fd.d/my.cnf"
}
}
Dan
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
For more options, visit https://groups.google.com/d/optout.
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bareos-users+***@googlegroups.com.
To post to this group, send email to bareos-***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Zoltán Beck
2018-06-26 12:15:41 UTC
Permalink
I think I found something, I tried to restore the Full backup, the restore job fails, but the files restored. It is very similar as in thread (Restore xtrabackup No lsn information found <https://groups.google.com/forum/#!searchin/bareos-users/Restore$20xtrabackup$20No$20lsn$20information$20found%7Csort:date/bareos-users/7pwz9oT0ANE/bp3LGQ1_AQAJ>).


Kind Regards,
bzg
Post by Denis Barbazza
Hello Dan,
Can you please share me the client and job definition?
CentOS Linux release 7.5.1804 (Core)
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)
mysqld Ver 5.7.22-22 for Linux on x86_64 (Percona Server (GPL), Release 22, Revision f62d93c)
FileSet {
Name = "MariaDB"
Description = "Backup MariaDB"
Include {
Options {
Signature = MD5 # calculate md5 checksum per file
}
Plugin = "python:module_path=/usr/lib64/bareos/plugins:module_name=bareos-fd-percona:mycnf=/etc/mysql/backup.cnf"
}
}
[client]
user=root
password=...
[mysql]
user=root
password=...
JobDefs {
Name = "DefaultMariaDBJob"
Type = Backup
Level = Incremental
Client = backup.local
FileSet = "MariaDB"
Schedule = "Weekly"
Storage = File
Messages = Standard
Pool = Incremental
Priority = 10
Write Bootstrap = "/var/lib/bareos/storage/%c.bsr"
Full Backup Pool = Full
Incremental Backup Pool = Incremental
}
Job {
Name = "sql03.local-MariaDB"
JobDefs = "DefaultJob"
Client = "sql03.local"
FileSet = "MariaDB"
}
File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 58, in restore_object_data
return bareos_fd_plugin_object.restore_object_data(context, ROP)
File "/usr/lib64/bareos/plugins/BareosFdPercona.py", line 423, in restore_object_data
self.rop_data[ROP.jobid] = json.loads(str(self.row_rop_raw))
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
File "/usr/lib64/python2.7/json/decoder.py", line 38, in errmsg
lineno, colno = linecol(doc, pos)
TypeError: 'NoneType' object is not callable
26-Jun 11:54 sql03.local JobId 287: Fatal error: Failed to authenticate Storage daemon.
26-Jun 11:54 bareos-dir JobId 287: Fatal error: Bad response to Storage command: wanted 2000 OK storage
, got 2902 Bad storage
bzg
Post by Dan
I've been running a weekly full and daily incrementals for 2+ years now, through updates of both MariaDB and Persona, and never a problem.
xtrabackup version 2.4.11 based on MySQL server 5.7.19 Linux (x86_64) (revision id: b4e0db5)
MariaDB 10.1.33
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-862.3.2.el7.x86_64
Architecture: x86-64
FileSet {
Name = "MySQL"
Description = "Backup MySQL database using Percona xtrabackup."
Include {
Options {
Signature = MD5 # calculate md5 checksum per file
compression = GZIP
}
Plugin = "python:module_path=/usr/lib64/bareos/plugins:module_name=bareos-fd
-percona:mycnf=/etc/bareos/bareos-fd.d/my.cnf"
}
}
Dan
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
For more options, visit https://groups.google.com/d/optout <https://groups.google.com/d/optout>.
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bareos-users+***@googlegroups.com.
To post to this group, send email to bareos-***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Dan
2018-06-26 13:53:47 UTC
Permalink
Every restore gives me the 'No lsn' error. As you noted, the restore of the files completes before this error is thrown. I have performed multiple restores and it works fine. The one downside (other than living with the fact that a critical piece of your backup strategy throws an error every time you restore) is that I cannot select and restore the full and all subsequent incrementals in the UI to restore all at once. I have to perform a separate restore on each of the backups. That's OK because it saves them each to separate directories anyhow. The Persona prepare and restore commands work fine from there.
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bareos-users+***@googlegroups.com.
To post to this group, send email to bareos-***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Dan
2018-06-26 13:48:48 UTC
Permalink
Client ...
Client {
Name = rsch-comb-fd
Address = 192.168.1.11
Password = <password hash>
}

Job ...
JobDefs {
Name = "DefaultMySQLJob"
Type = Backup
Level = Incremental
Client = bareos-fd
FileSet = "MySQL"
Schedule = "MySQLSched"
Storage = File
Messages = Standard
Pool = DBIncremental
Priority = 10
Full Backup Pool = DBFull
Differential Backup Pool = Differential
Incremental Backup Pool = DBIncremental
}
Job {
JobDefs = DefaultMySQLJob
Name = rsch-comb-mysql
Client = rsch-comb-fd
Enabled = yes
}
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bareos-users+***@googlegroups.com.
To post to this group, send email to bareos-***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Zoltán Beck
2018-07-02 21:01:12 UTC
Permalink
Absolutely strange, I didn’t found a working combination for incremental backup, the full backup works every time, but the incremental fails.
I tried all variant of FreeBSD, Ubuntu, CentOS, MariaDB, MySQL and Percona MySQL Server.

Any idea?

Kind Regards,
bzg
Post by Dan
Client ...
Client {
Name = rsch-comb-fd
Address = 192.168.1.11
Password = <password hash>
}
Job ...
JobDefs {
Name = "DefaultMySQLJob"
Type = Backup
Level = Incremental
Client = bareos-fd
FileSet = "MySQL"
Schedule = "MySQLSched"
Storage = File
Messages = Standard
Pool = DBIncremental
Priority = 10
Full Backup Pool = DBFull
Differential Backup Pool = Differential
Incremental Backup Pool = DBIncremental
}
Job {
JobDefs = DefaultMySQLJob
Name = rsch-comb-mysql
Client = rsch-comb-fd
Enabled = yes
}
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
For more options, visit https://groups.google.com/d/optout.
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bareos-users+***@googlegroups.com.
To post to this group, send email to bareos-***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Loading...