Post by Dan
Russell -
Long winded response that hopefully clears up a little bit of confusion around
this thread ... hopefully doesn't add more.
Awesome! Comments inline below, and other text removed.
Post by Dan
2.) .... If the pool is set to 'Recycle = yes', then Bareos will truncate the
file (to ZERO bytes plus the label) when it is reused. Until it is reused, the
volume will remain intact and use up space on the file system.
As a new user trying to implement Always Incremental to disk by following the
manual, I find this a major source of confusion.
Post by Dan
Bruno pointed out the 'Action On Purge=Truncate' option for a storage pool.
If that is set, then Bareos will truncate the volume (to ZERO bytes plus the
label) at the time the volume is purged rather than waiting for it to be
recycled.
All of my pools already have Recycle=yes and AOP=Truncate. The behavior you
describe is the desired behavior in this scenario.
Post by Dan
So the default action for all of the above is to leave the file volume fully
intact on the file system.
That default is expected to be overridden when AOP=Truncate.
Post by Dan
The problem pointed out in the bug that you referenced is that the Consolidate
job is not pruning the volumes in the AI-Consolidated pool as the
documentation says that it should.
Exactly. Despite my technically correct configuration, and the expectation
that the files on disk would be truncated when they are purged during the
Consolidate job, that doesn't happen. So it's the bug you reported, and I will
have to script the purge myself.
Post by Dan
The script that I provided, and am pasting again below, can be run after a
Consolidate job to prune the affected volumes and make them available to be
recycled.
I'm using Postgres, so my version is slightly different: I've created a
~/.pgpass file for the bareos user and placed your (modified) query in a text
file.
findempty.sql:
----------------------------------------------------------------------
SELECT m.VolumeName FROM Media m where m.VolStatus not in ('Append','Purged')
and not exists (select 1 from JobMedia jm where jm.MediaId=m.MediaId);
----------------------------------------------------------------------
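The ~/.pgpass entry for the bareos user is along these lines (the host, port,
database name, and password are placeholders for my setup; the file must be
mode 0600):
----------------------------------------------------------------------
# hostname:port:database:username:password
localhost:5432:bareos:bareos:changeme
----------------------------------------------------------------------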
Then I have one line to run that prunes (and thereby purges) these volumes:
psql -tf findempty.sql | grep -v '^$' | awk '{print "prune volume="$1,"yes"}' | bconsole
This scripted purge does not appear to have zeroed or deleted the file, despite
my Action on Purge setting for the pool.
For example, previously I used this script to identify that AI-Consolidated-0010
should be removed to free space. The script executed the following command in
bconsole:
prune volume=AI-Consolidated-0010 yes
And now the volume is listed as:
| 10 | AI-Consolidated-0010 | Purged | 1 | 53,687,078,823 | 12 | 31,104,000 | 1 | 0 | 0 | File | 2018-01-26 07:08:42 | File |
Yet the file remains at full size.
***@odin3:~$ ls -l storage/AI-Consolidated-0010
-rw-r----- 1 bareos bareos 53687078823 Jan 26 07:08 storage/AI-Consolidated-0010
Either this is another facet of your bug, or a new bug; I would have expected
the purge to truncate that file. It appears I will have to script an 'rm'
against those files in my storage pool filesystem to recover space so my
backups can continue.
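Roughly what I have in mind (untested, and the storage path is a placeholder
for my Archive Device directory): after the prune one-liner above has run,
remove the files for any volume the catalog already marks as Purged.
----------------------------------------------------------------------
#!/bin/bash
# Remove the on-disk files of volumes the catalog already marks as Purged.
# STORAGE_DIR is a placeholder for the device's Archive Device directory.
STORAGE_DIR=/home/bareos/storage

psql -tAc "SELECT VolumeName FROM Media WHERE VolStatus = 'Purged';" \
  | grep -v '^$' \
  | while read -r vol; do
      rm -v -- "$STORAGE_DIR/$vol"
    done
----------------------------------------------------------------------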
Are you removing files using scripts?
I'm on 16.2.4 with a restricted amount of space for backups, and I've had a
hell of a time conserving space while getting Always Incremental working.
Thanks.
------------------------------------------------------------------
Russell Adams ***@AdamsInfoServ.com
PGP Key ID: 0x1160DCB3 http://www.adamsinfoserv.com/
Fingerprint: 1723 D8CA 4280 1EC9 557F 66E8 1154 E018 1160 DCB3
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bareos-users+***@googlegroups.com.
To post to this group, send email to bareos-***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.