My vault is /d, and /d/przemek/dirvish/default.conf has this:
image-default: %Y-%m-%d
index: gzip
expire-default: +6 weeks
expire-rule:
#Min Hour Day Month DOW Expires
* * * * * +14 days
* * * * 1 +5 weeks
* *
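For what it's worth, here is my reading of those cron-style fields (minute, hour, day-of-month, month, day-of-week), annotated; this is my interpretation, not text from the dirvish docs:

```
#MIN HR  DOM MON DOW  EXPIRES
*    *   *   *   *    +14 days   # default: every image expires after two weeks
*    *   *   *   1    +5 weeks   # images made on a Monday are kept five weeks
```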
On Sun, Sep 13, 2009 at 1:40 PM, Jason Boxman wrote:
>
> The incident has brought back to the fore the necessity of an additional
> auxiliary backup system, for key files anyway. DVD-RAM always sounds sexy.
>
Yes, but DVD recordable media are suspected of deteriorating over
time. I can't
On Thu, Oct 1, 2009 at 7:44 AM, Jenny Hopkins wrote:
> Hullo there,
>
> I've got a vault that has an average image size of 30G. The entire
> vault, consisting of 24 images, however, has a size of 70G.
>
> There have not really been many changes in this image since the first
> one in the vault, a
>
> Surely, though, du will tell me the size of the image but not what
> nodes were used? So each size will be about 30G but half of them
> could be on different nodes?
What do you mean by 'nodes'? Is your vault spread over multiple computers or
remote filesystems, which you call 'nodes'?
On Thu, Oct 1, 2009 at 10:20 AM, Jenny Hopkins wrote:
>
> Maybe I have the wrong end of the stick here. I thought that the
> reason one sees the entire tree in every single image, but files
> the same actually only exist once, is because they share 'nodes' in
> the filesystem.
Yes, something like that.
On Thu, Oct 1, 2009 at 5:41 PM, Jenny Hopkins wrote:
> 27093038 -rw-r--r-- 25 root root 15064 2008-01-03 15:34 a2ps.cfg
>
> Is the link count here 25? If so, does this mean that 25 links to this file
> exist?
Yes and yes.
If you want to double-check, you can look for other files with the same inode number.
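For example, you can watch the link count and inode number as hard links are made; a minimal sketch in a scratch directory (GNU ls/stat assumed, file names made up):

```shell
#!/bin/sh
# Hard links: two directory entries, one inode; link count goes to 2.
set -e
dir=$(mktemp -d)
echo "hello" > "$dir/a"
ln "$dir/a" "$dir/b"            # hard link, not a copy
ls -l "$dir/a"                  # second column (link count) is now 2
ls -i "$dir/a" "$dir/b"         # both names show the same inode number
inode=$(stat -c %i "$dir/a")    # GNU stat
find "$dir" -inum "$inode"      # lists every name sharing that inode
rm -r "$dir"
```

On the real filesystem, `find /d -inum N` would locate all 25 names of that a2ps.cfg, though it can take a while on a big vault.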
On Fri, Oct 2, 2009 at 6:18 AM, Dave Howorth wrote:
>
> I think if you ask du to tell you the size of just one particular
> snapshot, it will give you a realistic answer for what would be left if
> you deleted all the other snapshots. That's the easiest way to know the
> size that I'm aware of.
>
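That matches how du handles hard links: within a single invocation it counts each inode only once, at the first name it encounters. A small sketch (the vault layout here is made up):

```shell
#!/bin/sh
# du counts a hard-linked inode once per invocation, at the first name seen.
set -e
vault=$(mktemp -d)
mkdir "$vault/img1" "$vault/img2"
dd if=/dev/zero of="$vault/img1/data" bs=1024 count=100 2>/dev/null
ln "$vault/img1/data" "$vault/img2/data"   # shared file, as dirvish creates
du -sk "$vault/img1"                # ~100 KB: what img1 alone would occupy
du -sk "$vault/img1" "$vault/img2"  # img2 shows almost nothing: data already counted
du -sk "$vault"                     # whole vault is ~100 KB, not 200
rm -r "$vault"
```

So `du -s vault/one-image` approximates what would remain if the other images were deleted, while summing separate per-image `du` runs overstates the total.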
On Mon, Jan 25, 2010 at 10:25 AM, Richard wrote:
> Paul Slootman wrote:
> > On Mon 25 Jan 2010, Richard wrote:
> >
> >> *How would one retain monthly backups from the LAST day of each month?
> Not all months have 31 days.
> >>
> >
> > # MIN HR DOM MON DOW STRFTIME_FMT
>
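Since a cron-style DOM field cannot say "last day of the month" directly, one common workaround is to test whether tomorrow is the 1st. A sketch (GNU date's -d option assumed; this is a shell-level check, not dirvish config syntax):

```shell
#!/bin/sh
# True exactly on the last day of a month: tomorrow's day-of-month is 01.
# GNU date assumed (-d is not POSIX).
if [ "$(date -d tomorrow +%d)" = "01" ]; then
    echo "today is the last day of the month"
fi
```

A daily cron job wrapped in this test could trigger a separate "monthly" branch run.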
On Wed, Aug 11, 2010 at 7:03 PM, Eric Searcy wrote:
> Similarly, a "post-client"-ran lvsnapshot.sh script could be modified to
> change its cwd outside the mountpoint before it tries to umount.
Even that might not work, because 'cd' in the script will change the
current directory of the shell r
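The safe pattern is for the script itself to leave the mountpoint before unmounting; a minimal sketch (/mnt/snap is a hypothetical mountpoint name):

```shell
#!/bin/sh
# Leave the mountpoint so our own cwd no longer pins it busy.
cd / || exit 1
# umount /mnt/snap   # would fail with EBUSY if any process still had a cwd inside
pwd                  # we really are at / now
```

If the umount still fails, `fuser -vm /mnt/snap` (from psmisc, where installed) shows which processes are keeping the mountpoint busy.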
On Fri, Oct 22, 2010 at 1:36 PM, Rich Shepard wrote:
> Since I need to install Slackware-13.0 on the new drive, and it was
> installed on the existing main drive only a couple of days ago, I'll just
> 'cp -R ...' from the existing drives to the new one.
Ahem, cp -a or something more inspired e
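The difference matters: plain cp applies the umask and fresh timestamps, while GNU `cp -a` (archive mode, equivalent to `-dR --preserve=all`) keeps modes, ownership, timestamps, symlinks, and hard links. A sketch:

```shell
#!/bin/sh
# cp -a preserves the source mode; plain cp masks it with the umask.
set -e
umask 022
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/f"
chmod 660 "$src/f"
cp "$src/f" "$dst/plain"       # mode becomes 660 & ~022 = 640
cp -a "$src/f" "$dst/archive"  # mode preserved as 660
stat -c '%a %n' "$dst/plain" "$dst/archive"
rm -r "$src" "$dst"
```

For a dirvish vault this is critical: `cp -R` would expand every hard link into a full copy, while `cp -a` or `rsync -aH` keeps the links (and hence the vault's size).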
On Thu, Nov 4, 2010 at 1:52 PM, wrote:
> On 04.11.2010 18:41, Dale Amon wrote:
>> I've had USB connected drives appear to be failing
>> but then work fine after powering down and reconnecting
>> them.
Could happen...
>> I've also had a drive that did not have good enough
>> air flow and appeare
On Fri, Nov 5, 2010 at 9:06 AM, Rich Shepard wrote:
> I believe that all drive manufacturers have some capacities or
> technologies that do not hold up well.
Absolutely true--and of course they themselves know well which drives
are holding up and which aren't. Sadly, they don't share that information.
If there is any doubt about the drives, I recommend some serious testing:
- SMART disk firmware checks:
  - display the current firmware error state using smartctl -a /dev/sdc
    and skdump /dev/sdc
  - pay particular attention to the Reallocated Sectors and Current Pending counts
  - run self-tests with smartctl -t long
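Those counters can be pulled out of `smartctl -A` output with awk. Since that needs a real disk, the sketch below parses a captured sample of smartctl's usual attribute-table layout instead (sample values made up):

```shell
#!/bin/sh
# Extract the raw Reallocated / Pending sector counts (field 10 of the
# attribute table). On a live system, pipe `smartctl -A /dev/sdc` in instead.
cat <<'EOF' | awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $10}'
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   098   098   000    Old_age   Always       -       1234
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
EOF
```

This prints `Reallocated_Sector_Ct 0` and `Current_Pending_Sector 0`; nonzero raw values there are the warning sign.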
The 'free' output is not really a very good indication. I think you should
subtract the unused buffer space (13945).
I think what's happening is that the dirvish run reads a lot of data
from the disk and uses up 13GB of buffer space, which is still 'hot'
after dirvish terminates---the system has no way o
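On current procps, `free` does this subtraction for you in the "available" column, which estimates memory usable without swapping, with reclaimable buffers/cache already discounted. A sketch parsing a captured `free -m` sample (numbers made up):

```shell
#!/bin/sh
# Prefer the "available" column over "free": it already discounts
# reclaimable buffer/cache. Column layout of procps-ng >= 3.3.10 assumed.
cat <<'EOF' | awk '/^Mem:/ {print "free:", $4, "MB  available:", $7, "MB"}'
              total        used        free      shared  buff/cache   available
Mem:          16000        2000         100         300       13900       13500
Swap:          2000           0        2000
EOF
```

Here "free" looks alarmingly small (100 MB) while "available" shows 13.5 GB, most of it the still-hot page cache left behind by the dirvish run.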
> It is a disk error, and to prevent more damage to the filesystem, the
> kernel remounted the filesystem read-only. Check the output of `dmesg`.
> My gut feeling says your disk is near or at "End of Life".
>
> Try to run a `/sbin/badblocks /dev/sda`, where you replace /dev/sda with
> the correct dev
On Tue, Nov 27, 2012 at 6:32 AM, Rolf-Werner Eilert <
eilert-sprac...@t-online.de> wrote:
> ID#  Name                      Value  Worst  Thres  Pretty  Raw  Type     Updates  Good  Good/Past
>   5  reallocated-sector-count  100    100    36     0 sec   0x   prefail  online   yes   yes
>
Well, I did -t short, as far as I remember. Do you think -t long says more?
'long' just runs a longer test (a couple of hours vs. a couple of minutes).
I always assumed that it touches more areas on the disk than the short test,
but I don't know for sure exactly what the difference is.
I didn't follow the entire thread, but seeing that it sees your keys but
refuses to use them: sometimes that is caused by sshd being picky about the
permissions on the key file. They have to be rw------- (600), which is weird
because Linux gives each user a private group (UID=GID), so group permissions
aren't relevant anyway. Please make sure yours are set that way.
If your system uses systemd, you may need to use journalctl, I think, and/or run
sshd with verbose error logging.
Sorry for being vague; I don't remember the details and I can't check them
right now.
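For reference, the permissions sshd's StrictModes check (on by default) expects; a minimal sketch using a scratch directory as a stand-in for ~/.ssh (GNU stat assumed):

```shell
#!/bin/sh
# sshd (StrictModes) wants the private key 600 and ~/.ssh itself 700.
set -e
d=$(mktemp -d)          # stand-in for ~/.ssh in this sketch
touch "$d/id_rsa"       # stand-in for the private key
chmod 700 "$d"
chmod 600 "$d/id_rsa"
stat -c '%a %n' "$d" "$d/id_rsa"
rm -r "$d"
```

On the server side, authorized_keys (and the home directory itself) should likewise not be group- or world-writable, or sshd will ignore the keys.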
On Thu, Oct 24, 2019 at 12:20 PM Rich Shepard
wrote:
> On Thu, 24 Oct 2019, Przemek Kloso
On Thu, Oct 24, 2019 at 1:00 PM Rich Shepard
wrote:
> On Thu, 24 Oct 2019, Przemek Klosowski wrote:
>
> > that looks good, so it must be something else. The .ssh directory itself
> > has to be drwx------ (700; check with ls -ld /root/.ssh, or wherever your root's .ssh is)
>
> T