On 1/24/25 14:50, Phil Stracchino wrote:
On 1/24/25 14:43, Josh Fisher via Bacula-users wrote:
On 1/24/25 13:22, Bill Arlofski via Bacula-users wrote:
On 1/24/25 10:17 AM, Phil Stracchino wrote:
this huge atomic write *also* makes it incompatible with Galera 3
clusters.)
Are you sure about
Greetings, Roberto Greiner!
On that day, it was said...
> Any ideas of what could cause this issue?
Sorry Roberto, it is not clear to me.
Is ZFS the 'destination' of the backup, or the source? What is the backup
media?
I can confirm I still have trouble with ZFS as a SOURCE of backup: it seems
that Z
On 1/24/25 14:43, Josh Fisher via Bacula-users wrote:
On 1/24/25 13:22, Bill Arlofski via Bacula-users wrote:
On 1/24/25 10:17 AM, Phil Stracchino wrote:
this huge atomic write *also* makes it incompatible with Galera 3
clusters.)
Are you sure about that? The only thing attribute spooling is
On 1/24/25 13:22, Bill Arlofski via Bacula-users wrote:
On 1/24/25 10:17 AM, Phil Stracchino wrote:
Just FYI I am very unconvinced of the benefit of attribute spooling, at
least with a MySQL back-end, because the spooling implementation in
Bacula's MySQL driver is very bad.
(It writes attrib
On 1/24/25 13:22, Bill Arlofski via Bacula-users wrote:
I will let a developer comment on this as it is "Above my pay grade"™, but I
can say that I have seen horrific backup
performance with attribute spooling disabled. It has been very many years since
I last even tried, but apparently it is n
On 24/01/2025 14:11, Phil Stracchino wrote:
On 1/24/25 10:36, Roberto Greiner wrote:
Hi,
I'm having a performance issue with my installation.
When I make a backup, the backup runs well below the expected
speed. A restore, on the other hand, works fine.
In one example, I made a (fu
On 1/24/25 10:17 AM, Phil Stracchino wrote:
Just FYI I am very unconvinced of the benefit of attribute spooling, at
least with a MySQL back-end, because the spooling implementation in
Bacula's MySQL driver is very bad.
(It writes attributes into a table to avoid writing them into a table,
then
On 1/24/25 10:48, Bill Arlofski via Bacula-users wrote:
Hello Roberto,
Any chance you have disabled attribute spooling? ie: `SpoolAttributes = no` in
a Job.
This is the first thing that I can see which could inexplicably slow things
down. If you are monitoring the MySQL
server/service, hav
On 1/24/25 10:36, Roberto Greiner wrote:
Hi,
I'm having a performance issue with my installation.
When I make a backup, the backup runs well below the expected
speed. A restore, on the other hand, works fine.
In one example, I made a (full) backup of one server. The full backup of
137
On 24/01/2025 12:48, Bill Arlofski via Bacula-users wrote:
On 1/24/25 8:36 AM, Roberto Greiner wrote:
Hi,
I'm having a performance issue with my installation.
When I make a backup, the backup runs well below the expected
speed. A restore, on the other hand, works fine.
In one exa
On 1/24/25 8:36 AM, Roberto Greiner wrote:
Hi,
I'm having a performance issue with my installation.
When I make a backup, the backup runs well below the expected
speed. A restore, on the other hand, works fine.
In one example, I made a (full) backup of one server. The full backup of
1
Greetings, B. Smith!
On that day, it was said...
> I have a ZFS pool as a dedicated Bacula spool.
To be clear: is the ZFS pool only used for the Bacula spool? Or is 'spool'
meant loosely, e.g. does it contain the data 'spooled' from other servers that
has to be put on LTO?
I'm also fighting with this, bec
This is all running on TrueNAS, so BSD. The HBA is an LSI 9220-8i. The NIC
is 10Gb, but not relevant here because the data and the tape library are
all on the same system.
The disks are SATA. 32GB RAM, but I don't see the system running out of RAM
while spooling/despooling.
The JBOD enclosures are
On 17/09/2024 08:58, B. Smith wrote:
Good evening,
I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
drives, configured as three mirrors of two striped disks. My tape drive
is LTO8. All the data is local to the server. When I despool without
simultaneously spooling anot
Apologies, I misstated the configuration. I do in fact have striped mirrors.
On Mon, Sep 16, 2024, 8:59 PM Phil Stracchino wrote:
> On 9/16/24 18:58, B. Smith wrote:
> > Good evening,
> >
> > I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
> > drives, configured as three
On 9/16/24 18:58, B. Smith wrote:
Good evening,
I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
drives, configured as three mirrors of two striped disks.
OK, just one observation: That's generally considered the wrong way to
do it. The normally preferred arrangement
Thanks Joe for this info. It looks like it is a client issue, as written
in the document (many small files; operations like stat() and
fstat() consume 100% CPU on the client).
I think that implementing an autochanger solves my problems (multiple
clients will write at the same time and utilize b
Ziga,
It is sad to hear you're having issues with Bacula. Some of your concerns
have been here since 2005. The only things you can do to speed things up
are to spool the whole job to very fast disk (SSD), break up your large
job (number of files), make sure your database is on very fast disk (SSD),
and ha
Hi,
I have done some testing:
a) testing storage with dd command (eg: dd if=/dev/zero
of=/storage/test1.img bs=1G count=1 oflag=dsync). The results are:
-writing to IBM storage (with cloud enabled) shows 300 MB/sec
-writing to local SSD storage shows 600 MB/sec.
I guess storage is not a bottlene
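As a sketch, the write test above can be wrapped so it is repeatable. The target path is an example; `oflag=dsync` forces synchronous writes so the page cache does not inflate the number:

```shell
# Sequential-write test in the spirit of the one above.
# TARGET is an example path; the size is kept small here -- use
# bs=1G count=1 (as in the post) for a representative result.
TARGET=${TARGET:-/tmp/ddtest.img}
dd if=/dev/zero of="$TARGET" bs=1M count=8 oflag=dsync 2>&1 | tail -n 1
rm -f "$TARGET"
```

The `tail -n 1` keeps only dd's summary line, which reports the effective MB/sec.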
On 10/6/20 3:45 AM, Žiga Žvan wrote:
I believe that I have my spooling attributes set correctly on jobdefs
(see below). Spool attributes = yes; Spool data defaults to no. Any
other idea for performance problems?
Regards,
Ziga
The client version is very old. First try updating the client to
Hello Ziga,
Your client is probably too old for the 9.2.x Director.
Even CentOS 6 is old, most likely at the end of its life.
Other than that you can try some tuning:
http://www.bacula.lat/tuning-better-performance-and-treatment-of-backup-bottlenecks/?lang=en
Rgds.
--
MSc Heitor Faria
CEO Bacula Lat
I believe that I have my spooling attributes set correctly on jobdefs
(see below). Spool attributes = yes; Spool data defaults to no. Any
other idea for performance problems?
Regards,
Ziga
JobDefs {
Name = "bazar2-job"
Type = Backup
Level = Incremental
Client = bazar2.kranj.cetrtapot.
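For reference, the spooling directives discussed in this thread live in the Job or JobDefs resource; a minimal illustrative sketch (names and values are examples, not a recommendation for any particular setup):

```
JobDefs {
  Name = "example-defaults"   # illustrative name
  Type = Backup
  Level = Incremental
  Spool Attributes = yes      # batch attribute inserts instead of per-file catalog writes
  Spool Data = no             # often set to yes for tape, to avoid drive shoeshining
}
```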
On 10/5/20 9:20 AM, Žiga Žvan wrote:
Hi,
I'm having some performance challenges. I would appreciate an
educated guess from an experienced Bacula user.
I'm replacing old backup software that writes to a tape drive with Bacula
writing to disk. The results are:
a) windows file server backup from a d
My one comment is that if you really want to keep this data forever,
then you should *really* be making multiple copies to tape, and then
also re-reading them and comparing them against the master data.
I also think that the biggest time sink will be the finding and
building of the daily tar fi
On 6/27/2015 1:37 AM, Andrew Noonan wrote:
> On Fri, Jun 26, 2015 at 2:17 PM, Ana Emília M. Arruda
> wrote:
>> Are you going to generate a .tar of about 250TB every day? What will
>> be the nature of your restores? Are you always going to need to
>> restore the whole data set, or occasional
On Fri, Jun 26, 2015 at 2:17 PM, Ana Emília M. Arruda
wrote:
> Hello Andrew,
>
> On Fri, Jun 19, 2015 at 5:10 PM, Andrew Noonan wrote:
>>
>> Hi all,
>>
>> After wrestling with a Dell TL4000 in the thread marked "Dell
>> TL4000 labeling timeout", it looks like the autochanger is going to be
>
Hello Andrew,
On Fri, Jun 19, 2015 at 5:10 PM, Andrew Noonan wrote:
> Hi all,
>
> After wrestling with a Dell TL4000 in the thread marked "Dell
> TL4000 labeling timeout", it looks like the autochanger is going to be
> fine thanks to the efforts of several people, especially Ana, on this
>
Quoting Rodrigo Abrantes Antunes:
> Hi, when restoring, listing files, backing up, purging or pruning, the mysql
> process uses 100% CPU and the machine is unusable, and such operations take
> too long. Doing some research I found that this can be related to database
> indexes, but I didn't understa
Disregard. It's flying along now at the expected speeds. I blame
sunspots.
James
> -Original Message-
> From: James Harper [mailto:james.har...@bendigoit.com.au]
> Sent: Monday, 17 October 2011 4:14 PM
> To: bacula-users@lists.sourceforge.net
> Subject: [Bacula-users] performance over WA
On Mon, 2011-10-03 at 10:34 +0200, mayak-cq wrote:
>
> zurich and copenhagen are 22.589 ms apart on a shared 100mbit
> connection -- using the bandwidth delay product:
>
> theoretical bandwidth * delay = product in bits; / bits per byte = bytes in window
> 500,000,000 * 0.022589 = 11,294,500 bits / 8 ≈ 1,411,812 bytes in window
On Sun, 2011-10-02 at 11:45 +0200, Radosław Korzeniewski wrote:
> Hello,
>
> 2011/9/30 reaper
>
> sounds like your sysctl.conf needs tweaking? have you tried
> something like this on both sides?
>
> kernel.msgmnb = 65536
> kernel.msgmax = 65536
>
Hello,
2011/9/30 reaper
> sounds like your sysctl.conf needs tweaking? have you tried something like
> this on both sides?
>
> kernel.msgmnb = 65536
> kernel.msgmax = 65536
> kernel.shmmax = 68719476736
> kernel.shmall = 4294967296
>
>
These are the IPC (inter process communication) kernel p
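As the reply notes, the `kernel.msg*`/`kernel.shm*` values are SysV IPC limits and do not affect TCP. The parameters that actually govern TCP window size on Linux are the `net.*` ones; a commonly quoted starting point for high-RTT links looks roughly like this (values are illustrative and should be sized from the bandwidth-delay product, not copied blindly):

```
# sysctl.conf fragment -- illustrative values for long-fat pipes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
```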
On Sat, 2011-10-01 at 07:57 -0700, reaper wrote:
> i'm going through a similar issue -- zurich, barca, copenhagen ...
>
> mayak-cq, can you test bacula performance under your conditions? With and
> without ssh (or something similar) tunnel.
hi reaper,
sure -- i can do that tomorrow.
are you
On Fri, 2011-09-30 at 02:18 -0700, reaper wrote:
> if i understand you correctly, bacula is only using a 128k -- way too
> small? curious -- have you played with "Maximum Network Buffer Size" ?
> does this help?
>
> Yes, that's correct, bacula can only scale the window to 128k; that's why
> throughput
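For reference, the directive mentioned above is set per daemon resource; a minimal sketch (the name and the 65536 value are only examples, and a corresponding directive exists on the Storage Daemon side):

```
# bacula-fd.conf fragment
FileDaemon {
  Name = example-fd                     # illustrative
  Maximum Network Buffer Size = 65536   # bytes
}
```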
On Thu, 2011-09-29 at 23:43 -0700, reaper wrote:
> i have noticed that scp is not a good measure of throughput -- i do not know
> why. i use an openvpn tunnel between sites and lose about 20% of throughput
> due to the tunnel. check window size on distant machine (using wireshark) to
> verify
On Thu, 2011-09-29 at 21:12 -0700, reaper wrote:
> sounds like your sysctl.conf needs tweaking? have you tried something like
> this on both sides?
>
> kernel.msgmnb = 65536
> kernel.msgmax = 65536
> kernel.shmmax = 68719476736
> kernel.shmall = 4294967296
>
> # long fat pipes
> net
On Thu, 2011-09-29 at 04:20 -0700, reaper wrote:
> Hello.
>
> I saw this post recently
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg47393.html
> and it seems I'm affected by this problem too. Bacula shows extremely low
> performance in networks with high rtt. I have sd i
On Fri, Jul 08, 2011 at 08:30:17AM +0200, Adrian Reyer wrote:
> Speed improved many many times. My incremental backup finished after
> just 10 minutes while it took 2h earlier.
This had been the benefit of using InnoDB over MyISAM. However, at 12GB
RAM and 8900 File entries (12GB file on disk)
FWIW the backups sped up considerably and finished after 1.5 days at an
overall transfer rate of about 1.5 MB/s. I'm really not sure what caused the
slowdown yesterday but the eventual speed up seems to imply an environmental
state on the machines that went away. I checked to see if the AV software
On Tue, 26 Jul 2011 06:18:25 -0700
Steve Ellis wrote:
> >> Another point, even with your current config, if you
> >> aren't doing data spooling you are probably slowing things down
> >> further, as well as wearing out both the tapes and heads on the
> >> drive with lots of shoeshining.
> > (I'm a
On 7/26/2011 5:04 AM, Konstantin Khomoutov wrote:
> On Tue, 26 Jul 2011 00:18:05 -0700
> Steve Ellis wrote:
>
> [...]
>> Another point, even with your current config, if you
>> aren't doing data spooling you are probably slowing things down
>> further, as well as wearing out both the tapes and hea
I disabled compression and my transfer rate jumped from 8.2 MB/s to 40.8
MB/s, but I'm still using Bacula encryption.
On Mon, Jul 25, 2011 at 10:14 PM, James Harper <
james.har...@bendigoit.com.au> wrote:
> > 2011/7/25 Rickifer Barros :
> > > Hello Guys...
> > >
> > > This weekend I did a backup
On Tue, 26 Jul 2011 00:18:05 -0700
Steve Ellis wrote:
[...]
> Another point, even with your current config, if you
> aren't doing data spooling you are probably slowing things down
> further, as well as wearing out both the tapes and heads on the drive
> with lots of shoeshining.
(I'm asking as
>> I was under the impression that _all_ LTO4 drives implemented encryption
>> (though if having the data traversing the LAN encrypted is your goal,
>> you'd still have to do something). I don't know enough about it to know
>> how good the encryption in LTO4 is, however (or for that matter, how th
> >> Disable software compression. The tape drive will compress much faster
> >> than the client.
> >>
> > If you can find compressible patterns in the encrypted data stream then
> > you are not properly encrypting it. The only option would be to compress
> > before encryption which means you can't
On 7/25/2011 6:14 PM, James Harper wrote:
>> 2011/7/25 Rickifer Barros:
>>> Hello Guys...
>>>
>>> This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
>>> minutes with a rate of 8.27 MB/s.
>>>
>>> My Bacula Server is installed on an IBM server connected to a Tape Drive LTO
> 2011/7/25 Rickifer Barros :
> > Hello Guys...
> >
> > This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
> > minutes with a rate of 8.27 MB/s.
> >
> > My Bacula Server is installed on an IBM server connected to a Tape Drive LTO4
> > (120 MB/s) via SAS connection (3 Gb/s).
On 07/25/11 11:13, John Drescher wrote:
> On Mon, Jul 25, 2011 at 11:06 AM, Rickifer Barros
> wrote:
>> I did this before, but I didn't know how to check in Debian if it really is
>> being compressed by the tape drive. The only thing that I got was the bacula
>> information about the SD and FD Wri
OK John...I'll test it.
On Mon, Jul 25, 2011 at 12:13 PM, John Drescher wrote:
> On Mon, Jul 25, 2011 at 11:06 AM, Rickifer Barros
> wrote:
> > I did this before, but I didn't know how to check in Debian if it really is
> > being compressed by the tape drive. The only thing that I got was the
On Mon, Jul 25, 2011 at 11:06 AM, Rickifer Barros
wrote:
> I did this before, but I didn't know how to check in Debian if it really is
> being compressed by the tape drive. The only thing that I got was the bacula
> information about the SD and FD Written, and the "mt" command in Linux doesn't
> say
I did this before, but I didn't know how to check in Debian if it really is
being compressed by the tape drive. The only thing that I got was the bacula
information about the SD and FD Written, and the "mt" command in Linux doesn't
tell me the real data size of the volume, so I chose to trust on the so
2011/7/25 Rickifer Barros :
> Hello Guys...
>
> This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
> minutes with a rate of 8.27 MB/s.
>
> > My Bacula Server is installed on an IBM server connected to a Tape Drive LTO4
> (120 MB/s) via SAS connection (3 Gb/s).
>
> I'm using En
I forgot to say that the files I backed up are local to the Bacula Server.
On Mon, Jul 25, 2011 at 11:48 AM, Rickifer Barros
wrote:
> Hello Guys...
>
> This weekend I did a backup with a size of 41.92 GB that took 1 hour and 24
> minutes with a rate of 8.27 MB/s.
>
> My Bacula Server is install
On Wed, Jul 06, 2011 at 11:08:44AM -0400, Phil Stracchino wrote:
> for table in $(mysql -N --batch -e "select
> concat(table_schema,'.',table_name) from information_schema.tables where
> engine='MyISAM' and table_schema not in
> ('information_schema','mysql')"); do mysql -N --batch -e "alter table
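A sketch of that conversion loop's shape with the quoting untangled. `list_myisam_tables` is a hypothetical stand-in so the sketch runs anywhere without a MySQL server; the real commands are shown in the comments:

```shell
# Shape of a MyISAM -> InnoDB conversion loop, with safe quoting.
# Real producer would be:
#   mysql -N --batch -e "SELECT CONCAT(table_schema,'.',table_name)
#     FROM information_schema.tables WHERE engine='MyISAM'
#     AND table_schema NOT IN ('information_schema','mysql')"
list_myisam_tables() {
  printf '%s\n' 'bacula.File' 'bacula.Filename'   # hypothetical example output
}
list_myisam_tables | while read -r table; do
  # Real action: mysql -N --batch -e "ALTER TABLE $table ENGINE=InnoDB"
  echo "would convert: $table"
done
```

Reading line by line with `while read -r` also survives table names that word-splitting in `$( ... )` would mangle.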
mailto:eric.bolleng...@baculasystems.com]
Sent: Wednesday, July 6, 2011 11:20 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Performance options for single large (100TB) server
backup?
Hello,
On 07/06/2011 04:20 PM, Florian Heigl wrote:
> Saving multiple streams is somethi
Hello,
On 07/06/2011 04:20 PM, Florian Heigl wrote:
> Saving multiple streams is something that has been proven as a
> solution for many years, and where that is still too slow NDMP comes
> into place. (In the case of ZFS, NDMP is still at an unusable stage.)
>
> 100TB is a lot, but I wonder if everyone
On 07/06/11 10:41, Adrian Reyer wrote:
> On Wed, Jul 06, 2011 at 10:09:56AM -0400, Phil Stracchino wrote:
>> should I use for my tables?" is MyISAM.[1] At this point, wherever
>> possible, EVERYONE should be using InnoDB.
>
> I will, if the current backup ever finishes. For a start on MySQL 5.1
>
ving it
to a closed source one if that was possible (it's not like I'm a large company
here at all).
-Original Message-
From: Florian Heigl [mailto:florian.he...@gmail.com]
Sent: Wednesday, July 6, 2011 09:20 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users]
On Wed, Jul 06, 2011 at 10:09:56AM -0400, Phil Stracchino wrote:
> should I use for my tables?" is MyISAM.[1] At this point, wherever
> possible, EVERYONE should be using InnoDB.
I will, if the current backup ever finishes. For a start on MySQL 5.1
though (Debian squeeze). I am aware InnoDB has a
Hi,
Breaking the server into multiple file daemons sounds as broken as the
workarounds Amanda users had to do (break your filesystem into
something that fits on a tape).
Saving multiple streams is something that has been proven as a
solution for many years, and where that is still too slow NDMP co
On 07/06/11 08:04, Adrian Reyer wrote:
> Hi,
>
> I am using bacula for a bit more than a month now and the database gets
> slower and slower both for selecting stuff and for running backups as
> such.
> I am using a MySQL database, still with MyISAM tables, and I am considering
> switching to InnoDB tab
Ok, still running into some trouble here following the previous
suggestions. I now have multiple jobs that are being kicked off at
the same time. In order to do this I had to create a Job & Fileset
pair for each directory as below. However, now that I have both jobs
in the queue, the
Am 28.06.2011 18:40, schrieb Steve Costaras:
>
>
> How would the various parts communicate if you're running multiple
> instances on different ports? I would think just creating multiple
> jobs would create multiple socket streams and do the same thing.
I should have gotten another coff
Yes, in this case the 'client' is the backup server, as I had a free
slot for the tape drives and due to the size didn't want to carry this
over the network.
If I split this up into separate jobs, say one job per mount point (I have
~30 mount points at this time), that may work; however I may be
Hi,
Out of curiosity, why do you do such "forklift replacements" when ZFS
supports replacing individual drives, letting the pool resilver and then
automatically grow to the new size?
roy
- Original Message -
> I have been using Bacula for over a year now and it has been providing
> 'passab
Problem is not really just tape I/O speeds but the ability to get data
to it. I.e. the SD is running at about 50% CPU overhead right now
(single core), so it could possibly handle (2) LTO4 drives, assuming a new
SD is not spawned off per drive?
I don't really need 'rait' itself as that wou
How would the various parts communicate if you're running multiple
instances on different ports? I would think just creating multiple
jobs would create multiple socket streams and do the same thing.
On 2011-06-28 02:09, Christian Manal wrote:
- File daemon is single threaded so
On 6/27/2011 8:43 PM, Steve Costaras wrote:
>
>
>
>
> - How to stream a single job to multiple tape drives. Couldn't
> figure this out so that only one tape drive is being used.
>
There are hardware RAIT controllers available from Ultera
(http://www.ultera.com/tapesolutions.htm). A R
> - File daemon is single threaded so is limiting backup performance. Is there
> a way to start more than one stream at the same time for a single machine
> backup? Right now I have all the file systems for a single client in the same
> file set.
>
> - Tied in with above, accurate backups
Hi,
> The issue, I imagine with transfer rates is between the bacula-fd and
> bacula-sd.
Correct
> Do we presume the -sd is in Sydney? You don't say what speed
> the Sydney ADSL2+ link is (though apparently it can manage at least
> 2.2MByte/sec).
24mbit down, 1mbit up
> Is tha
Hi,
On Mon, 11 Apr 2011, Peter Hoskin wrote:
> I'm using bacula to do backups of some remote servers, over the Internet
> encapsulated in OpenVPN (just to make sure things are encrypted and kept off
> public address space).
>
> The bacula-fd is in Montreal Canada with 100mbit Ethernet. I also ha
On 23/09/10 15:26, Andrés Yacopino wrote:
> I think I am getting worse performance because of random disk access
> speed; is that true?
>
Yes. If you use the time command on your tar process you will find it is
similarly slow.
Actually it's not so much random disk access speed as the fixed tim
> I need to improve performance of a Job which backs up 150 files (mail
> and File Server).
> I was compressing the files on disk into some tgz files first (tar and
> gzip), then backing them up to tape with Bacula; I was getting about:
>
> Job write elapsed time = 00:32:16, Transfer rate = 44.93 M
Hi,
I'm talking about despooling speed (not overall). It's the same
speed without spooling, but it's clear that over the network I can reach
max. 90MB/s.
I use 32-bit because of the OpenVZ template, but just a few minutes ago I
created the same machine as 64-bit, and the speed is a little bit
Quoting James Harper:
> Does MySQL have a 'profiler' tool like MSSQL does? I spend most of my
> time in MSSQL and if some operation is running slow I just attach the
> profiler to it and capture the queries and focus on the ones that are
> taking most of the time.
>
> James
What is the impact
On Saturday 20 June 2009 08:51:53 Tom Sommer wrote:
> Tom Sommer wrote:
> > Mike Holden wrote:
> >> Jari Fredriksson wrote:
> INSERT INTO Filename( Name )
> SELECT a.Name
> FROM (
>
> SELECT DISTINCT Name
> FROM batch
> ) AS a
> WHERE NOT
> EXISTS (
> >>
Tom Sommer wrote:
> Mike Holden wrote:
>
>> Jari Fredriksson wrote:
>>
>>
INSERT INTO Filename( Name )
SELECT a.Name
FROM (
SELECT DISTINCT Name
FROM batch
) AS a
WHERE NOT
EXISTS (
SELECT Name
FROM Filename AS f
WHERE f.Na
Jari Fredriksson wrote:
>> James Harper wrote:
>>> The subquery returns a very small result set (0 or 1,
>>> assuming you use DISTINCT) and so isn't too inefficient.
>>> It's when you say 'WHERE NOT EXISTS (SOME QUERY WITH
>>> LOTS OF RESULTS)' that you start to really bog down
>> True, but if the
Jari Fredriksson wrote:
>> James Harper wrote:
>>> The subquery returns a very small result set (0 or 1,
>>> assuming you use DISTINCT) and so isn't too inefficient.
>>> It's when you say 'WHERE NOT EXISTS (SOME QUERY WITH
>>> LOTS OF RESULTS)' that you start to really bog down
>> True, but if the
> James Harper wrote:
>> The subquery returns a very small result set (0 or 1,
>> assuming you use DISTINCT) and so isn't too inefficient.
>> It's when you say 'WHERE NOT EXISTS (SOME QUERY WITH
>> LOTS OF RESULTS)' that you start to really bog down
>
> True, but if the outer query contains a very
>
> James Harper wrote:
> > The subquery returns a very small result set (0 or 1, assuming you use
> > DISTINCT) and so isn't too inefficient. It's when you say 'WHERE NOT
> > EXISTS (SOME QUERY WITH LOTS OF RESULTS)' that you start to really bog
> > down
>
> True, but if the outer query contains
James Harper wrote:
> The subquery returns a very small result set (0 or 1, assuming you use
> DISTINCT) and so isn't too inefficient. It's when you say 'WHERE NOT
> EXISTS (SOME QUERY WITH LOTS OF RESULTS)' that you start to really bog
> down
True, but if the outer query contains a very large num
> > INSERT INTO Filename(Name)
> > SELECT DISTINCT Name
> > FROM batch AS a
> > WHERE NOT EXISTS
> > (
> > SELECT Name
> > FROM Filename AS f
> > WHERE f.Name = a.Name
> > )
>
> You may also want to consider using a JOIN rather than a subquery with a
> NOT EXISTS, something like (untes
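The JOIN form being alluded to (the original post itself marks it untested; table and column names are taken from the query quoted above) would look roughly like:

```sql
-- Anti-join: insert only names from batch that have no match in Filename.
-- Equivalent to the NOT EXISTS form as long as Filename.Name is NOT NULL.
INSERT INTO Filename (Name)
SELECT DISTINCT a.Name
FROM batch AS a
LEFT JOIN Filename AS f ON f.Name = a.Name
WHERE f.Name IS NULL;
```

Whether this beats NOT EXISTS depends on the optimizer; on the MySQL versions in this thread the two could plan quite differently.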
Mike Holden wrote:
> Jari Fredriksson wrote:
>
>>> INSERT INTO Filename( Name )
>>> SELECT a.Name
>>> FROM (
>>>
>>> SELECT DISTINCT Name
>>> FROM batch
>>> ) AS a
>>> WHERE NOT
>>> EXISTS (
>>>
>>> SELECT Name
>>> FROM Filename AS f
>>> WHERE f.Name = a.Name
>>> )
>>>
>>>
>> That looks s
Jari Fredriksson wrote:
>>
>> INSERT INTO Filename( Name )
>> SELECT a.Name
>> FROM (
>>
>> SELECT DISTINCT Name
>> FROM batch
>> ) AS a
>> WHERE NOT
>> EXISTS (
>>
>> SELECT Name
>> FROM Filename AS f
>> WHERE f.Name = a.Name
>> )
>>
>
> That looks silly.
>
> I would write it shorter as
>
> INSERT
> On Fri, 19 Jun 2009 09:51:20 +0200, Tom Sommer said:
>
> Martin Simmons wrote:
> >> On Thu, 18 Jun 2009 17:11:04 +0200, Michel Meyers said:
> >>
> >> Martin Simmons wrote:
> >>
> On Wed, 17 Jun 2009 13:48:58 +0200, Tom Sommer said:
>
> On Fri, 19 Jun 2009 03:00:54 +0300, Jari Fredriksson said:
>
> >
> > INSERT INTO Filename( Name )
> > SELECT a.Name
> > FROM (
> >
> > SELECT DISTINCT Name
> > FROM batch
> > ) AS a
> > WHERE NOT
> > EXISTS (
> >
> > SELECT Name
> > FROM Filename AS f
> > WHERE f.Name = a.Name
> > )
> >
Martin Simmons wrote:
>> On Thu, 18 Jun 2009 17:11:04 +0200, Michel Meyers said:
>>
>> Martin Simmons wrote:
>>
On Wed, 17 Jun 2009 13:48:58 +0200, Tom Sommer said:
Martin Simmons wrote:
>> On Tue, 16 Jun 2009 1
Jari Fredriksson wrote:
>> INSERT INTO Filename( Name )
>> SELECT a.Name
>> FROM (
>>
>> SELECT DISTINCT Name
>> FROM batch
>> ) AS a
>> WHERE NOT
>> EXISTS (
>>
>> SELECT Name
>> FROM Filename AS f
>> WHERE f.Name = a.Name
>> )
>>
>>
>
> That looks silly.
>
> I would write it shorter as
>
> I
>
> INSERT INTO Filename( Name )
> SELECT a.Name
> FROM (
>
> SELECT DISTINCT Name
> FROM batch
> ) AS a
> WHERE NOT
> EXISTS (
>
> SELECT Name
> FROM Filename AS f
> WHERE f.Name = a.Name
> )
>
That looks silly.
I would write it shorter as
INSERT INTO Filename(Name)
SELECT DISTINCT Name
FRO
> On Thu, 18 Jun 2009 17:11:04 +0200, Michel Meyers said:
>
> Martin Simmons wrote:
> >> On Wed, 17 Jun 2009 13:48:58 +0200, Tom Sommer said:
> >> Martin Simmons wrote:
> On Tue, 16 Jun 2009 15:05:18 +0200, Tom Sommer said:
>
> Hi,
>
> I have a
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Martin Simmons wrote:
>> On Wed, 17 Jun 2009 13:48:58 +0200, Tom Sommer said:
>> Martin Simmons wrote:
On Tue, 16 Jun 2009 15:05:18 +0200, Tom Sommer said:
Hi,
I have a somewhat pressing problem with th
> On Wed, 17 Jun 2009 13:48:58 +0200, Tom Sommer said:
>
> Martin Simmons wrote:
> >> On Tue, 16 Jun 2009 15:05:18 +0200, Tom Sommer said:
> >>
> >> Hi,
> >>
> >> I have a somewhat pressing problem with the performance of my Bacula
> >> installation.
> >>
> >> My MySQL dat
Martin Simmons wrote:
>> On Tue, 16 Jun 2009 15:05:18 +0200, Tom Sommer said:
>>
>> Hi,
>>
>> I have a somewhat pressing problem with the performance of my Bacula
>> installation.
>>
>> My MySQL database currently holds 247,342,127 (36GB) records in the File
>> table, and 78,57
> On Tue, 16 Jun 2009 15:05:18 +0200, Tom Sommer said:
>
> Hi,
>
> I have a somewhat pressing problem with the performance of my Bacula
> installation.
>
> My MySQL database currently holds 247,342,127 (36GB) records in the File
> table, and 78,576,199 (10GB) records in the Filename table.
>
Hi folks,
thanks for your suggestions, I tried your tar suggestion and indeed it
turns out that transfer rates drop to the dozens of kb/sec in one
special directory stored on the ocfs2 filesystem.
I'm now in contact with the ocfs2 devs on the users list to see if
they have any suggestions.
All
On Wed, 04 Mar 2009 11:24:22 +0100, Uwe Schuerkamp wrote:
> Hello folks,
>
> we're experiencing massive problems backing up an ocfs2 cluster
> filesystem mounted on SLES 10 SP2 machines located on a shared SAN
> storage. The cluster has 8 members, and we've already tried certain
> mount options
On Fri, 2009-01-23 at 10:58 +0200, Ari Suutari wrote:
> Hi,
>
> >This is a known problem in bacula versions up to 2.4.4
> >It is fixed in the recent beta 2.5.28-b1
>
> This sounds great! Are there any possibilities that
> the fix might be seen in future 2.4 versions, or should
Don't think so, there are
Hi,
>This is a known problem in bacula versions up to 2.4.4
>It is fixed in the recent beta 2.5.28-b1
This sounds great! Are there any possibilities that
the fix might be seen in future 2.4 versions, or should
I just upgrade to the beta versions? Using beta versions
is tempting, because I would like to us
Hi,
>What version of Bacula are you running? Which OS? What kind of hardware do
>you have? :)
Sorry, I forgot those: Bacula 2.4.4, FreeBSD 7.1, disks
are SATA disks and tape is HP DAT160.
Ari S.