If the files are located in "babylon4-sd", why are you passing
"babylon5-sd" in the command line?
.mod restoreclient="babylon5" fileset="Dummy" storage=*"babylon5-sd"*
2012/12/3
>
> Zitat von Phil Stracchino :
>
> > I just tried to restore two files to my workstation. I have two SDs,
> > one
2012/11/29 Dan Langille
> On Nov 29, 2012, at 4:56 PM, Jonathan Horne wrote:
>
> If I have, say, 2 database servers, can I set Bacula to ensure they are
> not being backed up at the same time? Even if they are the last 2 jobs
> running, I'd like to not back them both up simultaneously.
>
> Firs
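One common way to keep two specific jobs from ever overlapping is to give them different priorities, since by default Bacula will not start a job while a job of a different priority is running. A minimal sketch (job names and the elided directives are hypothetical; Priority is the only point here):

```conf
# Hypothetical job names. Bacula only runs jobs of equal priority
# concurrently, so these two jobs can never overlap each other,
# even if they are the last two jobs in the queue.
Job {
  Name = "backup.db1"     # hypothetical
  Priority = 11
  # ... Client, FileSet, Pool, Storage, Schedule ...
}
Job {
  Name = "backup.db2"     # hypothetical
  Priority = 12           # starts only after all priority-11 jobs finish
  # ... Client, FileSet, Pool, Storage, Schedule ...
}
```

Note the trade-off: while either of these jobs runs, no job of any other priority will start either.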
Hello list.
I have several Storages configured but all of them are pointing to the same
Device, they differ only by their Address. It's something like this:
Storage {
Name = st.servers
Address = 192.168.1.254
Password = "XXX"
Device = dev.tpc
Media Type = LTO4
Autochanger
> So only one real storage device? Try assigning a single DNS name and
> using that; most modern resolvers, when presented with more than one A
> record, will use the one on a shared subnet before the others.
>
>
Yeah, I've tried that before, using views... But that created such a mess
on my DNS server (spe
the DNS would always resolve to 192.168.0.254 in that particular LAN.
Plus, I have over 15 different LANs being backed up (yes, my SD server has
15 VLAN interfaces configured on it); I used only 2 to simplify my problem.
I want to avoid changing my DNS config to solve this problem.
> On 12/7/2012 12:
Hello list!
I'd like to start using Verify Jobs and I would like your opinion on how
to automatically run them daily after all normal backup jobs have finished.
So far, what I came up with is creating a second job (a verify job) for
every client in my Bacula Director and using a different schedul
>
>
> Hi,
>
>
Hello!
> we use priority to tell bacula to run verify jobs only after backup jobs.
>
> greets
>
>
Yeah, that would work too, but since I have multiple storage systems
(three tape devices), sometimes I could run a verify job on one storage
while Bacula is performing a normal backup
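For reference, the priority approach mentioned above can be sketched like this (names hypothetical; Bacula does not run jobs of different priorities at the same time, so the verify job only starts after every priority-10 backup has finished):

```conf
Job {
  Name = "job.backup.someclient"   # hypothetical
  Type = Backup
  Priority = 10
  # ...
}
Job {
  Name = "job.verify.someclient"   # hypothetical
  Type = Verify
  Level = VolumeToCatalog
  Priority = 20   # waits for every priority-10 job to finish
  # ...
}
```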
Hello everyone.
I know that Copy (and Migrate) jobs must occur in the same SD, but if that
SD has a single tape library with 2 (or more) drives, can I run a Copy Job
within the same tape library (and the same SD)?
Today, I have two different tape libraries with 1 drive each, both
controlled by th
2013/3/4 Uwe Schuerkamp
>
> I've recently implemented a new "offline backup" methodology where we
> just copy the most recent full online backups for all clients to tape,
> using a bacula job.
>
> The downside is you won't be able to use file-level restore on those
> volumes, but with bls, bscan
Hello everyone.
We're migrating our Bacula database from Postgres 8.4 to 9.2, and all we've
done so far is generate a dump of the 8.4 database (using the 9.2 pg_dump)
and import it into the new server. And, of course, change the values in the
Catalog section.
It seemed that this procedure worked j
>
> We're migrating our Bacula database from Postgres 8.4 to 9.2, and all we've
> done so far is generate a dump of the 8.4 database (using the 9.2 pg_dump)
> and import it into the new server. And, of course, change the values in the
> Catalog section.
>
> It seemed that this procedure worked just
I forgot, my Bacula version is 3.0.2...
2010/5/25 Rodrigo Renie Braga
> Hello List
>
> Here's my situation: I've two different Storages configured in Bacula,
> which are my two Sun StorageTek SL24 tape storages. In one (called TPA),
> I've all my monthly FULL ba
Hello List
Here's my situation: I've two different Storages configured in Bacula,
which are my two Sun StorageTek SL24 tape storages. In one (called TPA),
I've all my monthly FULL backups, and on the other (called TPB), I've my
DIFFERENTIAL and INCREMENTAL backups.
The backups are working fine,
Anyone have any idea on how to solve my problem (in bacula 3.0.2)?
I also couldn't figure out how to search the list's old emails for this
problem from before I signed up...
Thanks
2010/5/25 Rodrigo Renie Braga
> Hello List
>
> Here's my sittuation, I
Hello List
I've been trying to get help from the Bacula IRC Channel, but no success.
I have two tape Storages, TPA and TPB. For all my Clients, I run a Full
Backup which saves the data on TPA, and every subsequent Differential backup
uses the TPB tapes.
My problem is when making a Full restore f
ll
backups...
BTW, I'm using Bacula 5.0.3...
2011/1/10 Phil Stracchino
> On 01/10/11 12:21, Rodrigo Renie Braga wrote:
> > Hello List
> >
> > I've been trying to get help from the Bacula IRC Channel, but no success.
> >
> > I have two tape Storages, TPA
Schedule {
Name = sch.tpa
Run = Level=Full Pool=pool.tpa.full 1st sun at 01:00
Run = Level=Differential FullPool=pool.tpa.full Pool=pool.tpb.diff
2nd-5th sun at 01:00
Run = Level=Incremental FullPool=pool.tpa.full Pool=pool.tpb.inc mon-sat
at 01:00
}
2011/1/10 Phil Stracchino
> On 0
Well, thank you very much for your time. As for the FullPool directive,
I already changed it to do this in the Job section of my fresh new install;
as soon as I get the results, I'll post them here...
Thanks again!
2011/1/10 Phil Stracchino
> On 01/10/11 14:21, Rodrigo Renie Bra
Sorry, actually the Volume is marked as "Full", not "Used" as I posted
before...
thanks...
2011/1/18 Rodrigo Renie Braga
> Hello everyone..
>
> I'm currently using, in my Tape Storage, LTO-4 Tapes (800Gb each). I had a
> Volume on a specific Pool with a
Hello everyone..
I'm currently using, in my Tape Storage, LTO-4 tapes (800GB each). I had a
Volume in a specific Pool with 100GB of space already used by previous Full
backups. After that, I started a Full Backup job on the same Pool, which
ended up using the same Volume (no problem there). Since t
ob on that space, I thought it would be possible to
recover it... and that "until some day you recycle the whole volume" is
going to be in 4 months... :)
Well, thanks for the answer!
2011/1/18 Timo Neuvonen
> "Rodrigo Renie Braga" kirjoitti viestissä
> news:AANLkTi=vLB
2,523
Files Restored: 2,523
Bytes Restored: 1,955,791,042
Rate: 2597.3 KB/s
FD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Restore OK
YEY!! :)
Maybe the issue before was because I upgraded Bacul
Hello everyone.
I have two file servers that *each* takes up to 30 hours to run a Full
Backup on the first Sunday of the month. But I also run an Incremental backup
of these servers every day, and since I only let 1 job run at a time, the
Incremental Backups on the first Monday of the month may wai
Hello list!
In my current Bacula configuration, a single Client has different Pools for
its Incremental, Differential and Full Backups. For a specific Client, I
have the following configuration:
Client {
Name = client.ptierp-teste-top
Address = ptierp-teste-top.pti
Catalog = cat.defa
2011/2/2 Jeremy Maes
Hello list!
>
> In my current Bacula configuration, a single Client has different Pools
> for its Incremental, Differential and Full Backups. For a specific Client, I
> have the following configuration:
>
> Job {
> Name = job.ptierp-teste-top
> Client = client.ptierp
> >>>>> On Wed, 2 Feb 2011 12:03:13 -0200, Rodrigo Renie Braga said:
> >
> > Before 28-Jan, I had only run 1 Incremental Backup, because I started the
> > backups for this Client at 26-Jan (which was a Full Backup). Hence, I
> > believe that the In
Sorry, I'm resending my question because I think I sent it directly to
Martin only...
> >>>>> On Wed, 2 Feb 2011 12:03:13 -0200, Rodrigo Renie Braga said:
> >
> > Before 28-Jan, I had only run 1 Incremental Backup, because I started the
> > backups fo
2011/2/3 Phil Stracchino
> On 02/03/11 10:53, Rodrigo Renie Braga wrote:
> > Humm, very interesting... So, since I have three Pools, one each for
> > Incremental, Differential and Full, with different Volume Retention
> > periods, basically I'd need to create three
I personally am taking that data (client configs and catalog dump) and
sending it to a Dropbox account (they have a command-line interface).
If there is any more data to back up from Bacula itself besides those two,
I'd like to know too...
2011/2/27 David Clements
> I have used Bacula
Hello everyone.
I'd like to know if anyone could point me in the right direction on how I
could create a routine in Bacula to make a Full backup of my main servers
and then manually remove those tapes to take them to a safe place (in case
of a catastrophe).
I've a few doubts about how I could make th
You probably have Nagios or some other monitoring tool in your network
connecting to your Bacula Director port every 5 minutes... You can generate
that message just by telnetting to the Bacula Director port and exiting
right after...
I think there's a bacula plugin for Nagios that can avoid that mes
Hello list.
I'm trying to run my Bacula Server using two different Catalogs, because I
want to use a second, isolated database to store Full Backups whose tapes I
will take offsite.
Since the Catalog configuration goes on the Client, I basically need to
create different Client resources
Hello again everyone.
Currently, I have 2 Tape Storages configured on my Bacula Server, but I'm
only using one (which is enough to hold all my Full/Diff backups). I want to
use the second Tape Storage to store Backups and take its tapes somewhere
else safe. I was thinking about using Copy Jobs, to
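One possible shape for this, sketched with hypothetical resource names: a Copy Job with "Selection Type = PoolUncopiedJobs" copies every job in the source Pool that has not been copied yet, and the source Pool's "Next Pool" directive says where the copies land:

```conf
Job {
  Name = "job.copy.offsite"          # hypothetical
  Type = Copy
  Pool = pool.full                   # source pool (hypothetical name)
  Selection Type = PoolUncopiedJobs  # copy everything not yet copied
  # ... Client, FileSet, Schedule as required by the parser ...
}
Pool {
  Name = pool.full
  Next Pool = pool.offsite           # copies are written to this pool
  # ...
}
Pool {
  Name = pool.offsite                # hypothetical; tapes to take off-site
  Storage = st.tpb                   # the second tape storage (hypothetical)
  # ...
}
```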
Hugo, please post your results to the list because I think there's a lot of
people (me included) that could use a solution like that...
2011/3/30 John Drescher
> > My first question is : How can I tell bacula to choose dynamically the
> > client according to the result of a script ?
> >
>
> You
Hello list.
Can someone send me the SQL Query to get the size that all my Full, Diff and
Inc backups are currently consuming? In my config, I have 5 Pools that store
Full Backup tapes, the same happens for Diff and Inc Backups. But if you
send me a SQL Query.
I'm going to use this to know when sh
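In case it helps, here is a rough sketch against the standard Bacula catalog schema (the Job table's Type, Level, JobStatus and JobBytes columns; Type 'B' = backup, Levels F/D/I, JobStatus 'T' = terminated OK). Treat it as a starting point, not a drop-in answer:

```sql
-- Bytes currently consumed per backup level, successful jobs only.
SELECT Level, COUNT(*) AS jobs, SUM(JobBytes) AS bytes
FROM Job
WHERE Type = 'B'
  AND JobStatus = 'T'
GROUP BY Level;
```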
n fire or water
> incident.
> But, if the tapes are stored off-site, you will be able to recreate the
> catalog with bscan successfully. Some information cannot be recreated with
> bscan, but probably everything you need will be available in the catalog.
>
> Kleber
>
> 2011/3/30 Rodrigo Renie
Hello everyone.
I'm testing with Copy Jobs and I want to check if my results are actually
the expected ones.
First, the Bacula log when running the Copy Job:
01-Abr 16:06 dir.ptibacula-dir JobId 2109: The following 1 JobId was chosen
> to be copied: 2107
> 01-Abr 16:06 dir.ptibacula-dir JobId 21
Once again, hello everyone.
I've been trying out Copy Jobs and everything seems to work just fine,
but I have one last question:
just like any other Job, I have to configure the Client for the Copy Job
resource, but since I'm using "Selection Type = SQL Query", I basically
could select any JobI
I think you could use Migration Jobs for that, but since I've just started
trying that out myself, I can't help you with the details of
implementing it... But there's good documentation about it at:
http://www.bacula.org/manuals/en/concepts/concepts/Migration_Copy.html
And of course, there'
The jobs will only be pruned when their respective volumes are recycled or
purged... since the Volume Retention in your Pool is 3 days, that will
happen at the seventh day of running backups, when the second volume expires
and Bacula recycles the first volume...
2011/4/6 Jérôme Blion
> Hello,
I guess it would be nice to have priorities separated per Storage...
2011/4/6
> You are right that no priority 10 jobs will get run while there are higher
> priority (lower number) jobs running. To run jobs in parallel they need to
> be the same priority.
>
> However, to some extent you can con
For the last few days, I've been struggling with the same problem, and I
don't know if my experience can help you or not, but here goes:
First of all, I have a special Admin Job that runs every day at 12:00pm,
which basically sends my Catalog Backup (postgres), the bacula-dir and
bacula-sd config and s
Hello list
I've been trying to create an Admin Job to execute a script on the director
itself, but the Admin Job simply ignores the RunScript section. I know that
Admin Jobs can only run Director scripts, not remote Client scripts, but my
Client is the Director, so what am I doing wrong?
Here's the
You're absolutely right, Admin Jobs don't support the RunScript section; I
replaced it with RunBeforeJob and RunAfterJob and it worked like a charm.
Thanks!
2011/4/15 Jeremy Maes
> Op 14/04/2011 15:55, Rodrigo Renie Braga schreef:
>
> Hello list
>>
>> I've b
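A sketch of how such an Admin Job could look, based on the RunAfterJob fix described above (all resource names and the script path are hypothetical; Admin Jobs still require the usual directives even though most of them are ignored):

```conf
Job {
  Name = "job.admin.catalog-ship"   # hypothetical
  Type = Admin
  Client = client.director          # required by the parser, ignored here
  FileSet = "fs.dummy"              # required by the parser, ignored here
  Storage = st.tpa                  # hypothetical
  Pool = pool.default               # hypothetical
  Schedule = sch.daily              # hypothetical
  RunAfterJob = "/usr/local/bin/ship-catalog.sh"   # hypothetical path
}
```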
> >> File, Job and Volume retentions are different parameters. Volumes can
> >> only be automatically recycled if no more jobs reference them.
> >>
> >> Best regards.
> >> Jerome Blion.
> >>
> >>
> >> On Wed, 6 Apr 2011 22:04:24
Flecther, it is not recommended to use UTF8 for the Catalog database; in
the create_postgresql_database script there's a BIG warning about it:
#
# Please note: !!!
# We do not recommend that you use ENCODING 'SQL_UTF8'
# It can result in creating filenames in
Can't say the same... it's been 2 weeks that I've been touching it every
day... damn..
2011/4/20 Graham Keeling
> On Wed, Apr 20, 2011 at 12:31:44PM -0400, Dan Langille wrote:
> > Hi,
> >
> > My name is Dan, and it's been 16 days since I last touched my
> bacula-dir.conf file.
>
> Haha! This mad
Vitor, esta lista é predominantemente em Inglês, seria bom se suas dúvidas
futuras também fossem em inglês para que tenha mais chances de ser ajudado.
Vitor, this list is predominantly in English; it'd be nice if your future
questions were also in English so more people have a chance to help you...
Hello everyone.
I've been having a problem with Bacula with a Pool that rotates the tapes
daily, like this:
Pool {
Name = pool.inc.muitobaixo
Pool Type = Backup
Storage = st.tpa
Volume Use Duration = 1 day
Volume Retention = 1 day
Scratch Pool = scratch.tpa
RecyclePool
"... guess I was wrong...
And this problem raises another question: if I add the "Recycle Pool" on the
"scratch.tpa" pool like you said, will running the "update all volumes from
all pools" command be enough to update all the volume parameters?
Thanks again!
2011/5/10 Maxim
Hello list.
I'm receiving these error messages when executing a Copy Job:
13-Mai 03:42 dir.ptibacula-dir JobId 4957: Warning: Got SHA1 digest but not
> same File as attributes
>
This message is repeating A LOT, like 2000 times PER MINUTE, but it only
started a long time after the Copy Job start
Any thoughts on this? It is still happening and I have no idea why...
2011/5/13 Rodrigo Renie Braga
> Hello list.
>
> I'm receiving these error messages when executing a Copy Job:
>
> 13-Mai 03:42 dir.ptibacula-dir JobId 4957: Warning: Got SHA1 digest but not
>&g
> it is not so important if I can't restore a single file from an old backup,
> but I want to make a new full backup, with a correct catalog update.
> I do not understand where the batch relation comes from and why there is an
> SQL error when the File, Filename and Path tables are empty?
Well, from w
Hello everyone
I'd like to create a SQL Query to determine which Volumes (Tapes) were used
by my CopyJobs. I thought that it would be as simple as determining the
Volumes used by a Full Backup Job (for example), but apparently the JobID of
a CopyJob, shown in a "list jobs" command, isn't related t
How do you move the tapes from Scratch to the Daily Pool? You're supposed to
have a "Scratch Pool = Scratch" entry on your Daily Pool so Bacula gets the
tape automatically from the Scratch Pool when running the Jobs...
2011/5/27 Mauro Colorio
> I've a scratch pool defined as
>
> Pool {
> Name
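For comparison, the arrangement described above looks roughly like this (pool names hypothetical, directive spelling as used elsewhere in this thread):

```conf
Pool {
  Name = Daily                 # hypothetical
  Pool Type = Backup
  Scratch Pool = Scratch       # Bacula pulls a tape from here when
                               # Daily has no usable volume
  RecyclePool = Scratch        # recycled volumes go back to Scratch
  # ...
}
```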
Please, do not forget to reply to the list also, not only directly to me...
I actually don't think it's a BUG, when you put your tapes on the Scratch
Pool using "label barcodes", the volumes will inherit any configuration on
that pool, and only if that volume gets cycled to the Daily Pool
automat
Sorry, no experience with that specific tape drive, but if your OS supports
that hardware (i.e., the devices for bacula-sd to read/write on are created),
then Bacula will work with it without problems...
2011/5/30 Rickifer Barros
> Hello Everyone,
>
> I would like to know if somebody here have used B
Just to give some feedback: I believe it was a physical error on the tape. I
bought brand new ones and ran the same Copy Job again, and now the job
terminated without errors!
2011/5/16 John Drescher
> > Any thoughts on this? It is still happening and I have no idea why...
> >
>
> Do a test restore
Hello Cleuson.
What exactly is your problem? I mean, why would you need to restore some
Incremental backups but not all? Maybe by understanding your problem we can
help you.
Anyway, you can restore only specific JobIds using restore in bconsole;
maybe you can pass the JobIds of the Incremental B
Just giving my 2 cents here: I solved the same problem you're having by
using the /etc/hosts file...
In the bacula-dir configs, I've configured the FD Address parameter with the
FQDN of the bacula-sd server, and on the bacula-fd client, using the hosts
file, I've pointed the FQDN to the IP of the f
Hello list!!
Has anyone used Bacula with the brand new Postgres 9? I've seen that
Postgres now supports multiple encodings for its databases, and that's
really helpful to me because all my websites are using UTF8 and only for
Bacula I'm using 'latin1' (that's the correct encoding, right?), and I'd li
Well, I'm really just starting to figure out this Bacula feature myself, but
I'd recommend taking a look at Copy Jobs.
The idea would be to only run your normal Full/Diff/Inc Backups and then,
weekly, create a "copy" of them on your offsite storage. When restoring, it
will require only your normal Full/Diff/
Hello everyone.
In my first attempt using Copy Jobs, I was creating one Copy Job for each of
my Clients, with the following SQL Selection Pattern:
SELECT max(JobId)
FROM Job
WHERE Name = 'someClientJobName'
AND Type = 'B'
AND Level = 'F
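For what it's worth, the per-client pattern above can be collapsed into a single selection that picks the newest successful Full backup for every job name, so one Copy Job can cover all clients. A hedged sketch against the standard Job table (JobStatus 'T' = terminated OK):

```sql
-- Newest successful Full backup per job name.
SELECT MAX(JobId)
FROM Job
WHERE Type = 'B'
  AND Level = 'F'
  AND JobStatus = 'T'
GROUP BY Name;
```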
Very good, I'll give it a try... Thank you!!!
2011/9/11 Jim Barber
> On 12/09/2011 10:26 AM, Rodrigo Renie Braga wrote:
> > Hello everyone.
> >
> > In my first attempt using Copy Jobs, I was creating one Copy Job for each
> of my Clients, with the fo
Thanks!
2011/9/11 Rodrigo Renie Braga
> Very good, I'll give it a try... Thank you!!!
>
>
> 2011/9/11 Jim Barber
>
>> On 12/09/2011 10:26 AM, Rodrigo Renie Braga wrote:
>> > Hello everyone.
>> >
>> > In my first attempt using Copy Jobs, I was
Hello everyone.
I run a Full Backup monthly, on the first Monday of the month (using the
"1st mon" directive in the schedule), and I need to run a Copy Job that will
copy all of my Full Backups to a different Pool every Thursday, because on
Friday we'll have a routine for taking these copy jobs tap
default type in the template.
>
> Thomas
>
>
>
> On Friday 09 September 2011 15:04:45 Rodrigo Renie Braga wrote:
> > Hello list!!
> >
> > Has anyone used Bacula with the brand new Postgres 9? I've seen that now
> > Postgres supports multiple encoding for i
Hello once again list.
I'd like to know if the "Level" option on a Copy Job makes any difference at
all to the job. Since my Copy Job looks at the "JobID" to copy (using an SQL
statement), it won't know whether that JobID was Full or Incremental, right?
For example:
Job {
Name = job.copy.full
Is there an option in Bacula that makes it check for duplicate files (using
MD5 or any other hash) in order to send only ONE file to the Storage Daemon?
That would save me a few GB of space on my tapes; extra processing load on
the Bacula server is not a problem.
Thanks!
2011/9/15 Konstantin Khomoutov
> On Thu, 15 Sep 2011 10:48:31 -0300
> Rodrigo Renie Braga wrote:
>
> > Is there an option on Bacula that makes it checks for duplicate files
> > (using MD5 or any other hash) in order to send only ONE file to the
> > Storage Daemon?
> >
>
I really recommend taking a look at Copy Jobs; they allow me to have a
safe "copy" of my Backups at an off-site location while I still have my
local backups to restore from in case of accidental daily deletions, like
you said...
2011/9/15 Wouter Verhelst
> Hi,
>
> So, Backups are made for two reasons:
> -
2011/9/16 Eric Pratt
> Thank you for your feedback, Rodrigo. I looked up the copy job
> information as you suggested. From what I can tell, you have to purge
> the original job before you can use a copy. This means to me that to
> do a restore, we have to:
>
> 1) identify all the jobs associat
2011/9/16 Tilman Schmidt
> If I read the manual correctly, you'll need to have two tape drives
> connected to the same machine if you want to create an off-site copy
> that way. Is there a viable solution for off-site backups with only one
> tape drive?
>
Yes, you're right, you need to have 2 ta
Hello everyone.
I'm running Copy Jobs from my Full Backups, here's the config (the parts
that matter):
Pool {
Name = pool.full
Pool Type = Backup
Storage = st.tpc
Volume Use Duration = 1 month
Volume Retention = 6 months
Scratch Pool = scratch.tpc
RecyclePool = scratc
> Notice the Incremental Level of the Job? Why is that?
> That's not so good for me because while the Copy Job is running, I have
> other Incremental Jobs that can be run because they don't use either of the
> Pools used by this Copy Job...
>
BTW, the "normal" Incremental Backups that run after
What's the URL? Too lazy to Google it... ;)
2011/9/22 Bacula-Dev
> Dear all,
>
> I'm proud to announce that the Bacula-Web project's web site has been
> updated with more content and better design
>
>- Documentation page and content
>- RSS feeds subscriptions
>- Newsletter subscription
You could also use that script directly in the "File" parameter, like:
File = "|yourscript.sh";
That way you won't need the local crontab to run your script; it can be run
by Bacula itself.
2011/10/26 Alberto Fuentes
> To answer myself
>
> I was not able to do it via wild or regex so I just cr
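A sketch of a complete FileSet using the pipe form mentioned above (FileSet name and script path hypothetical). A file-list item starting with "|" is executed on the Director's machine when the job starts, and its output is read as one path per line:

```conf
FileSet {
  Name = "fs.dynamic"                      # hypothetical
  Include {
    Options {
      signature = MD5
    }
    File = "|/usr/local/bin/yourscript.sh" # hypothetical path; run by the
                                           # Director, stdout = file list
  }
}
```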
Simone, I'm trying to use your repository on a CentOS 5.7 amd64 machine,
and "yum update" returns the following:
Loaded plugins: fastestmirror
Determining fastest mirrors
addons
| 951 B 00:00
addons/primary
| 204 B 00:00
base
| 1.1 kB 00:00
base/primary
| 1.2 MB 00:00
base
3566
ave any rhel 5 platform at hand at the
> moment to check that the repo is working.
>
> Regards,
> --Simone
>
>
> On 29 December 2011 12:56, Rodrigo Renie Braga
> wrote:
> > Simone, I'm trying to use your repository on a CentOS 5.7 amd64 machine,
> and
> >
Which version of Bacula are you using?
I remember having the same problem with 3.x, but it got resolved in 5.x...
On 17 January 2012 at 16:38, DMS wrote:
> Right now I have all my backups going to a 6 TB raid array. I am trying to
> keep the fulls on the array, and the Incrementals on anot
It actually makes perfect sense, since you can have several different jobs
(maybe with different FileSets, for example) for one single client
configuration...
Anyway, I guess your problem is solved now... :)
On 17 January 2012 at 17:10, DMS wrote:
> Never mind. I guess it bases it off of jo
Hello everyone.
I've written a post on my blog about my personal experience with off-site
backups with Bacula, and I'd like your insights to improve the post, since
this particular topic is very difficult to find on the Internet (at least
the way I wanted it to work).
Any comment would be very m
This directive only works for newly created Jobs; if you added it after the
Jobs were created, they won't get canceled.
If you don't want to stop your currently running backups by simply stopping
the bacula-director, you have to cancel these duplicated jobs manually with
"cancel jobid=".
Em 12 de ma
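The duplicate-control directives discussed here live in the Job resource; a minimal sketch (job name hypothetical):

```conf
Job {
  Name = "backup.someclient"        # hypothetical
  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes    # only affects jobs queued after this
                                    # directive is in place
  # ...
}
```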
Hello list.
I need to back up several machines on different networks (VLANs), so to
make things easier, I just plugged several network interfaces into my SD
and Director server (both are on the same machine), with each interface on
a different network with its own IP address. That is great, because now my
to the Job resource for each client...
Once again, Bacula did not let me down, neither did this mailing list...
Thanks again Bryan!
2012/5/28 Bryan Harris
> Hello,
>
> On May 27, 2012, at 11:28 PM, Rodrigo Renie Braga wrote:
>
> My ideal solution which Bacula, apparently, D
2012/5/28 Alan Brown
>
>
> I have a similar problem. Setting the non-FQDn IPs required in /etc/hosts
> does the trick.
>
> I thought about that at first too, but like I said, there are some VLANs
that I have no control over, and also depending on the local static
configuration of 300+ servers can start t
the "Storage" option from the Pool resource, every
time Bacula upgrades a Job from Inc to Full, it does not change the Storage,
so it stops asking to mount a new volume...
NOW I'm completely stuck, don't know what to do... Any help would be very
much appreciated...
Thanks!
en storage,
>
> Storage {
> Name = backup1
> Address = backup_address_vlanX
> ...
> }
>
> I hope I haven't misunderstood. Does this look like something worth
> trying?
>
> Bryan
>
> On May 29, 2012, at 6:35 PM, Rodrigo Renie Braga wrote:
>
> Hello ev
> Define the Full Pool (and other pool) directives in the Job resource, not
> the Schedule.
>
>
Hello Radosław.
Bacula does use the correct Pool when upgrading an Incremental to Full,
i.e., instead of using the pool.inc Pool, when the Job is upgraded it
starts using the pool.full Pool.
My problem is that eac
Hello Josh.
Have you tried restricting the number of concurrent jobs in the device
> definition in bacula-sd, as opposed to elsewhere? for example:
>
Yes, it already is set to only 1 concurrent Job in bacula-sd.conf, but the
problem is that, in my SD server, I have multiple IP address on multiple
There's a query that comes with Bacula that does something like that: it
will return the JobIds where Bacula could find a given filename.
Check out the "query" command. If it comes up empty, you should verify the
/etc/bacula/query.txt file (that path is when installing the Director using
rpm