Hello,
I am looking for some advice on troubleshooting an issue I am seeing
with mmap() (probably how I am using it).
I have an issue where a driver returns a correct value, but it seems to
get lost when that value is validated by mmap(), which returns
MAP_FAILED.
Obviously gdb is not going to be able
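When mmap() returns MAP_FAILED, the errno at the failure point usually says which validation rejected the mapping. A minimal Python sketch (a hypothetical reproduction, not the original driver setup) where the kernel's own checks, not the value passed in, produce the failure:

```python
import errno
import mmap
import os
import tempfile

# Hypothetical reproduction: request a writable shared mapping of a
# descriptor that was opened read-only. The descriptor is valid, but
# mmap()'s permission validation rejects the combination.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * 4096)
os.close(fd)

ro_fd = os.open(path, os.O_RDONLY)
try:
    mmap.mmap(ro_fd, 4096, prot=mmap.PROT_READ | mmap.PROT_WRITE)
    code = None
except OSError as exc:
    # errno pinpoints which validation failed (EACCES, EINVAL, ENODEV, ...)
    code = errno.errorcode[exc.errno]
finally:
    os.close(ro_fd)
    os.remove(path)

print(code)  # EACCES on Linux/FreeBSD: the kernel rejected the mapping
```

Checking errno the same way at the C call site (or running the program under ktrace/truss) narrows the failure down before reaching for gdb.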
The TRB has not been called upon to resolve any technical disputes in quite a
while. It seems that instead, developers have been able to discuss ideas and
work out disputes either on arch@ or through some other means. There is
certainly nothing wrong with this as the FreeBSD Project certainly
On Tue, Aug 16, 2005 at 03:10:34PM +0200, Joost Bekkers wrote:
> On Tue, Aug 16, 2005 at 04:51:15PM +0400, Dmitry Agaphonov wrote:
> > Hello,
> >
> >
> > I have user A from group G creating shared memory M with permissions
> > 0060. After this, A fails to attach M due to permission denied.
> > H
On Tue, Aug 16, 2005 at 04:51:15PM +0400, Dmitry Agaphonov wrote:
> Hello,
>
>
> I have user A from group G creating shared memory M with permissions
> 0060. After this, A fails to attach M due to permission denied.
> However, another user B from the same group G successfully attaches M.
> User
Oops, it should be gzip -c instead of gzip -dc.
--- Patrick Dung <[EMAIL PROTECTED]> wrote:
> Yes mysqldump.
>
> More solutions
>
> mysqldump database | split -b 1900m
> mysqldump database | gzip -dc | split -b 1900m
> mysqlhotcopy database /tmp && tar czvf /tmp/db.tar /tmp && split -b
> 1900m
Hello,
I have user A from group G creating shared memory M with permissions
0060. After this, A fails to attach M due to permission denied.
However, another user B from the same group G successfully attaches M.
User A manages to attach only if permissions 0600 are added for M.
Why the system disreg
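The behaviour follows from how Unix permission checks work: exactly one class (owner, group, other) is selected for the caller, and only that class's bits are consulted, so the creator of the 0060 segment is checked against the empty owner bits and never reaches the group bits. A sketch of the same rule using an ordinary file instead of a SysV segment, assuming it runs as an unprivileged user (root bypasses these checks):

```python
import os
import tempfile

# Create a file, then drop all owner bits while granting group rw (0060),
# mirroring the shared-memory segment's mode in the question.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o060)

denied = False
try:
    open(path, "rb").close()
except PermissionError:
    # The owner class matched first; the group bits were never consulted.
    denied = True
print("owner denied:", denied)

# Adding the 0600 bits back (as in the question) makes the owner's own
# class grant access again.
os.chmod(path, 0o660)
open(path, "rb").close()
os.remove(path)
```

User B is not the owner, so B falls through to the group class, where 0060 grants access.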
Yes mysqldump.
More solutions
mysqldump database | split -b 1900m
mysqldump database | gzip -dc | split -b 1900m
mysqlhotcopy database /tmp && tar czvf /tmp/db.tar /tmp && split -b
1900m /tmp/db.tar
mysqlhotcopy uses lots of space...
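The split pieces are only useful if they recombine and decompress cleanly on restore (cat pieces | gunzip | mysql). A Python sketch of the same round trip, with small stand-ins for the real sizes: CHUNK replaces split's -b 1900m, and a literal byte string replaces the mysqldump output.

```python
import gzip
import os
import tempfile

CHUNK = 512  # stand-in for split's -b 1900m
# Stand-in for `mysqldump database` output.
dump = b"".join(b"INSERT INTO t VALUES (%d);\n" % i for i in range(2000))

workdir = tempfile.mkdtemp()
compressed = gzip.compress(dump)  # the `gzip -c` step (not -dc!)

# split -b: write fixed-size pieces with sortable suffixes.
pieces = []
for i in range(0, len(compressed), CHUNK):
    name = os.path.join(workdir, "db.sql.gz.%03d" % (i // CHUNK))
    with open(name, "wb") as f:
        f.write(compressed[i:i + CHUNK])
    pieces.append(name)

# restore: cat pieces | gunzip
joined = b"".join(open(p, "rb").read() for p in sorted(pieces))
restored = gzip.decompress(joined)
print("pieces:", len(pieces), "round trip ok:", restored == dump)
```

Doing a dry-run restore like this before trusting the backup catches the gzip -c / gzip -dc mix-up immediately.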
--- Dmitry Mityugov <[EMAIL PROTECTED]> wrote:
> On 8/16/05,
Patrick Dung wrote:
It consumes lots of space to do tar+gzip and split.
I am thinking if I could copy all mysql files to another server (by
ftp/ssh) periodically.
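One caveat with copying the files directly: if mysqld can still write to them during the copy, the result can be inconsistent, so the tables should be quiesced first (e.g. FLUSH TABLES WITH READ LOCK, or stopping the server). A sketch of the local archiving half, with a temporary directory standing in for the real datadir (e.g. /var/db/mysql) and the ftp/ssh transfer left out:

```python
import os
import tarfile
import tempfile

# Stand-in for the MySQL datadir; a real run would point at the actual
# datadir and quiesce the server before archiving.
datadir = tempfile.mkdtemp()
with open(os.path.join(datadir, "mydb.MYD"), "wb") as f:
    f.write(b"table rows go here")

archive = os.path.join(tempfile.mkdtemp(), "mysql-datadir.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(datadir, arcname="datadir")

with tarfile.open(archive) as tar:
    print(sorted(tar.getnames()))
```

The resulting archive can then be pushed to the other server with scp or rsync from a cron job.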
MySQL has something called RAID tables for filesystems where the max
file size is 2GB (read: old Linux ext2fs systems).
You can also u
Thanks for the answers, John and Glen.
I think John was right that it is better to build my own mfsroot and
then make config scripts etc.
`sysctl -a` can handle detecting the hardware etc. It is a bit lame,
but it works. Then dd, fdisk, bsdlabel and newfs do the rest.
However if someone i
On 8/16/05, Patrick Dung <[EMAIL PROTECTED]> wrote:
> Hi
>
> We are using an old backup product which can only back up files < 2GB.
> Now we have a MySQL file > 2GB. The backup product refuses to back up
> that file.
>
> So, what are the alternatives for performing backups in this situation?
> This is curr
The main problem is money: buying a new product costs money.
The old backup product (commercial) is scalable and works fine with
various operating systems. However, it is outdated and no longer
supported (unless we take out a service agreement with the vendor of
the backup product).
It consumes lots of spa
On Mon, Aug 15, 2005 at 11:41:09PM -0700, Patrick Dung wrote:
> Hi
>
> We are using an old backup product which can only back up files < 2GB.
> Now we have a MySQL file > 2GB. The backup product refuses to back up
> that file.
>
> So, what are the alternatives for performing backups in this situation?
> Th