Eric Warnke wrote, sometime around 27/04/06 03:42:
> Unfortunately I have looked high and low, there is just no good way to
> read sparse files intelligently. Whoever thought of providing the
> functionality without an API to step through the block mapping was a
> moron. It is truly a brain dead technology…
On Thursday 27 April 2006 06:04, Eric Warnke wrote:
> SOLUTION
I haven't found the FIBMAP doc, but are you really sure that this will permit
a program to properly read it? I say that because I pointed filefrag at a
non-sparse file, and it seems to report a discontinuity. Perhaps the
under…
On Thursday 27 April 2006 04:42, Eric Warnke wrote:
> Unfortunately I have looked high and low, there is just no good way to read
> sparse files intelligently. Whoever thought of providing the functionality
> without an API to step through the block mapping was a moron. It is truly
> a brain dead technology…
SOLUTION

Turns out there is a way to not only identify sparse files, but also read the map as root! The FIBMAP ioctl will do the job quite nicely, and there is even a utility to read it (as root): filefrag -v

This sure as heck beats scanning the file for zeros manually!

Cheers,
Eric

On 4/26/06, Scott
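A minimal sketch of what the FIBMAP approach looks like from user space, using Python's fcntl module. The ioctl numbers come from <linux/fs.h>; the helper name and error handling are my own, and FIBMAP needs root (it is restricted to CAP_SYS_RAWIO) plus a filesystem that actually implements it:

```python
# Sketch of walking a file's block map with the FIBMAP ioctl --
# the same interface filefrag -v uses.  Requires root and a
# filesystem that supports FIBMAP; helper names are illustrative.
import fcntl
import os
import struct

FIBMAP = 1    # from <linux/fs.h>: _IO(0x00, 1)
FIGETBSZ = 2  # from <linux/fs.h>: _IO(0x00, 2)

def block_map(path):
    """Return the physical block number for each logical block.

    A physical block of 0 marks a hole, so any 0 entry means the
    file is sparse at that offset -- no data scan needed.
    """
    with open(path, "rb") as f:
        fd = f.fileno()
        # Filesystem block size, via the FIGETBSZ ioctl.
        blksz = struct.unpack(
            "i", fcntl.ioctl(fd, FIGETBSZ, struct.pack("i", 0)))[0]
        nblocks = (os.fstat(fd).st_size + blksz - 1) // blksz
        phys = []
        for logical in range(nblocks):
            buf = struct.pack("i", logical)
            phys.append(struct.unpack("i", fcntl.ioctl(fd, FIBMAP, buf))[0])
        return phys
```

Since holes come back as physical block 0, `0 in block_map(path)` identifies a sparse file without reading a single byte of its data.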
This is what you said Eric Warnke
> Unfortunately I have looked high and low, there is just no good way to
> read sparse files intelligently. Whoever thought of providing the
> functionality without an API to step through the block mapping was a
> moron. It is truly a brain dead technology. A…
Unfortunately I have looked high and low, there is just no good way to read sparse files intelligently. Whoever thought of providing the functionality without an API to step through the block mapping was a moron. It is truly a brain dead technology. At least there is a fcntl under Win32 to deal w…
"Scott Ruckh" <[EMAIL PROTECTED]> wrote:
> This is what you said Eric Warnke
> >
> > Oops tar -Scf — lowercase s is something else.
> >
> > Cheers,
> > Eric
>
> Good idea.
>
> I tried:
> tar -Sc -f /var/log/lastlog.tar /var/log/lastlog
>
> But it appears to do much the same as bacula. It too must
This is what you said Eric Warnke
>
> Oops tar -Scf — lowercase s is something else.
>
> Cheers,
> Eric
Good idea.
I tried:
tar -Sc -f /var/log/lastlog.tar /var/log/lastlog
But it appears to do much the same as bacula. It too must read through
the entire file, so it does not speed things up.
Thanks
A correction to what I said: it appears to be "last login" data. I'm
assuming this is used for things like finger, etc. I would probably want
to keep as much of that type of data as possible, again, if it were my
system.
Recall that that individual says the file is REALLY only 64k, and the
file A
Can't you use the command last (/var/log/wtmp) for that?
The size is that big, according to a google search, because a userid exists
which is probably -1 or really big. That is why the size of the file is only
64K with all the nulls left out. One record for each userid who has logged
in. The index…
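The arithmetic behind that explanation is easy to check. lastlog is indexed by UID: record N lives at byte offset N times the record size. The record size is platform-dependent; 292 bytes (4-byte ll_time plus 32-byte ll_line plus 256-byte ll_host) is the classic 32-bit Linux layout and is assumed here purely for illustration:

```python
# Back-of-the-envelope check of why one huge userid balloons lastlog.
# lastlog is indexed by UID: record N lives at offset N * RECORD_SIZE.
# RECORD_SIZE varies by platform; 292 bytes is the classic 32-bit
# Linux layout (ll_time + ll_line[32] + ll_host[256]), assumed here.
RECORD_SIZE = 4 + 32 + 256  # = 292 bytes

def lastlog_offset(uid):
    """Byte offset of the lastlog record for a given UID."""
    return uid * RECORD_SIZE

huge_uid = 4294967294  # -2 seen as an unsigned 32-bit value
print(lastlog_offset(huge_uid))  # roughly 1.25e12 bytes, i.e. ~1.2 TB
```

One login by a UID of 4294967294 is enough to push the apparent file size past a terabyte, matching the 1.2TB figure reported earlier in the thread, while the allocated blocks stay tiny.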
lastlog contains the last logins to the system. While many hackers are
smarter than to leave it around, some are not, and if your machine is
brought down, it can be forensic evidence (or at least clues) if you
have that file. I'd back it up if it were my system.
Really, the thing that needs to be
On Wednesday 26 April 2006 15:44, Scott Ruckh wrote:
> This is what you said Pieter (NL)
>
> > Did you look, with for example wx-console, if the files you didn't want
> > to be
> > backed up are indeed excluded to make sure if your fileset is or isn't
> > the problem
> >
> > Pieter
> >
> >
> > --
>
From what I understand of the manual and fd source code, Bacula is actually
reading all 1.2 TB of data to scan for blocks with all zeros. I could
imagine this takes time.
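That zero-scanning behaviour amounts to something like the following sketch (a simplification in Python, not Bacula's actual C code): read the file a block at a time and test each block for all zeros. The key point is that every byte still has to be read, which is why it is slow on a multi-terabyte sparse file even though almost nothing gets stored.

```python
# Simplified sketch of a zero-block scan: read the file block by
# block and count blocks that are entirely zero.  Every byte is
# still read from the filesystem, which is the performance problem
# discussed in this thread.
def count_zero_blocks(path, block_size=32768):
    zero = bytes(block_size)
    zero_blocks = total_blocks = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            total_blocks += 1
            if block == zero[:len(block)]:
                zero_blocks += 1
    return zero_blocks, total_blocks
```

On a file that is 64K of data spread over an apparent 1.2 TB, nearly every iteration of that loop finds a zero block, but the kernel still has to materialize all of them for the read.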
What is actually stored in /var/log/lastlog? Is it worth being backed up?
Pieter
Scott Ruckh wrote:
>
>
> This is what you
This is what you said Pieter (NL)
>
> Did you look, with for example wx-console, if the files you didn't want to
> be
> backed up are indeed excluded to make sure if your fileset is or isn't the
> problem
>
> Pieter
>
>
> --
> View this message in context:
> http://www.nabble.com/Backups-too-big%2
Did you look, with for example wx-console, if the files you didn't want to be
backed up are indeed excluded to make sure if your fileset is or isn't the
problem
Pieter
--
View this message in context:
http://www.nabble.com/Backups-too-big%2C-and-other-questions-t1502635.html#a4083663
Sent from
This is what you said Pieter (NL)
>
> I use following filesets. First exclude in options and after that include:
>
> FileSet {
>   Name = "bsdserver1 files"
>   Include {
>     Options {
>       signature = MD5
>       compression = GZIP
>
I use the following filesets. First exclude in options and after that include:

FileSet {
  Name = "bsdserver1 files"
  Include {
    Options {
      signature = MD5
      compression = GZIP
      Exclude = yes
--
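The message above is cut off right after Exclude = yes, so the following is only a guess at the overall shape of such a resource, with made-up paths. It also adds Bacula's documented `sparse = yes` Options directive, which tells the fd to detect and skip storing zero blocks in files like lastlog (it still reads them, as noted elsewhere in this thread):

```
# Illustrative reconstruction only -- the original post is truncated
# and the paths here are invented.  "sparse = yes" is Bacula's
# documented option for not storing all-zero blocks.
FileSet {
  Name = "bsdserver1 files"
  Include {
    Options {
      signature = MD5
      compression = GZIP
      sparse = yes
    }
    Options {
      Exclude = yes
      WildDir = "/BACKUPS"   # hypothetical excluded directory
    }
    File = /
  }
}
```

With this shape, files matching the Exclude Options are dropped, everything else is signed, compressed, and stored with zero blocks elided.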
This is what you said John Kodis
> On Mon, Apr 24, 2006 at 06:50:33PM -0700, Scott Ruckh wrote:
>
>> I think I have found the culprit, /var/log/lastlog . It is a sparse
>> file and appears to be 1.2TB, which is way larger than the total
>> space of the filesystem. In reality, this file only u…
On Mon, Apr 24, 2006 at 06:50:33PM -0700, Scott Ruckh wrote:
> I think I have found the culprit, /var/log/lastlog . It is a sparse
> file and appears to be 1.2TB, which is way larger than the total
> space of the filesystem. In reality, this file only uses 64K of
> actual used disk space, but I a…
On Mon, 24 Apr 2006, Scott Ruckh wrote:
I am excluding /BACKUPS in my file list so you can see I am not backing up
my backups.
VERIFY that this is happening. It is very easy to get the syntax wrong and
have /BACKUPS being backed up unintentionally.
I think I have found the culprit, /var/lo
This is what you said Jason Martin
> On Mon, Apr 24, 2006 at 06:50:33PM -0700, Scott Ruckh wrote:
>> I think I have found the culprit, /var/log/lastlog . It is a sparse
>> file and appears to be 1.2TB, which is way larger than the total
>> space of the filesystem. In reality, this file only us…
If you'd CC'd me on your post to the list, I'd have gotten it sooner and
you'd have gotten your reply sooner too. :)
On 24 Apr 2006 at 18:50, Scott Ruckh wrote:
> This is what you said Jason Martin
> > On Mon, Apr 24, 2006 at 07:31:43PM -0400, Dan Langille wrote:
> >> > The backup is to disk for
On Mon, Apr 24, 2006 at 06:50:33PM -0700, Scott Ruckh wrote:
> I think I have found the culprit, /var/log/lastlog . It is a sparse file
> and appears to be 1.2TB, which is way larger than the total space of the
> filesystem. In reality, this file only uses 64K of actual used disk
> space, but I a…
This is what you said Jason Martin
> On Mon, Apr 24, 2006 at 07:31:43PM -0400, Dan Langille wrote:
>> > The backup is to disk for this single system and the backup is well
>> over
>> > 143GB in space. The actual data being backed up is less than 30GB.
>> Why
>> > is this backup so big?
>>
>> Run t
On Mon, Apr 24, 2006 at 07:31:43PM -0400, Dan Langille wrote:
> > The backup is to disk for this single system and the backup is well over
> > 143GB in space. The actual data being backed up is less than 30GB. Why
> > is this backup so big?
>
> Run the estimate command. Something is taking up t
On 24 Apr 2006 at 16:21, Scott Ruckh wrote:
> Output from df -h looks like the following:
>
> df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda6             129G   23G  100G  19% /
> /dev/sda1              99M   27M   68M  29% /boot
> none                 1006M     0 1006M
Output from df -h looks like the following:
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6             129G   23G  100G  19% /
/dev/sda1              99M   27M   68M  29% /boot
none                 1006M     0 1006M   0% /dev/shm
/dev/sda3              49G  109M   46G   1%