> The fact that the state of the art appears to be ibus, and that it is
> not the default on Solaris, is a detriment to projects such as this
> that are integrating new features for ibus.
>
> Hasegawa-san, I believe you were both the case owner and the project
> team for the ibus integration.
> As far as I understand that bug report, some hardware configuration
> setting ("GTT") is lost after a suspend / resume? Did you use
> suspend / resume before Xorg locked up?
No.
Just browsing the web with Firefox.
I do not think suspend/resume works on my hardware. I cannot even close the lid.
CC-ed to xwinow-discuss, thread started at
http://www.opensolaris.org/jive/thread.jspa?threadID=125150&tstart=0
> > Do you find the following error message in /var/log/Xorg.0.log ?
> >
> > intel_bufmgr_fake.c:392: Error waiting for fence: Device busy.
>
> No. I find in Xorg.0.log:
> [...]
> Error in I830WaitLpRing(), timeout for 2 seconds
Additional info:
Dumped the Xorg core:
# gcore -o xorg.core 629
gcore: xorg.core.629 dumped
random ~ # mdb xorg.core.629
Loading modules: [ libc.so.1 libsysevent.so.1 libnvpair.so.1 libproc.so.1
ld.so.1 ]
> ::stack
libc.so.1`ioctl+0xa()
libdrm.so.2`drmCommandWrite+0x1b()
intel_drv.so`I830Sync+0x13b
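The trace above is cut off after three frames. If one thread's stack isn't
conclusive, the same core can be asked for every thread's stack using standard
mdb dcmds (a sketch, reusing the core dumped above):

# mdb xorg.core.629
> ::walk thread | ::findstack -v    (stack traces for all threads)
> $C                                (current thread's frames with arguments)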
Hello,
It just happened again.
It seems related to running Firefox + Flash.
# ps ax |grep Xorg
629 vt/2 R 5:37 /usr/bin/Xorg :0 -nolisten tcp -br -auth /var/run/gdm/a
# truss -p 629
/1: Received signal #14, SIGALRM, in ioctl() [caught]
/1: ioctl(14, 0x46445, 0xFD7FFFDFF3FC)
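Since truss shows the server looping on that ioctl, a DTrace one-liner can
count which request codes Xorg is issuing (a sketch; 629 is the Xorg PID from
ps above, and arg1 of ioctl is the request code):

# dtrace -n 'syscall::ioctl:entry /pid == 629/ { @[arg1] = count(); }'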
> Do you find the following error message in /var/log/Xorg.0.log ?
>
> intel_bufmgr_fake.c:392: Error waiting for fence: Device busy.
No. I find in Xorg.0.log:
[...]
Error in I830WaitLpRing(), timeout for 2 seconds
pgetbl_ctl: 0xcffc0001 getbl_err: 0x0100
ipeir: 0x iphdr: 0x5400
I am attempting to follow the recipe in:
http://blogs.sun.com/sa/entry/hotplugging_sata_drives
The recipe copies the VTOC from the old drive to the new drive and then does an
attach. When I get to the attach, the partition slices on the new drive
overlap (the partition slices on the old drive o
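The blog entry itself isn't quoted above, but the usual Solaris idiom for
cloning a VTOC and then attaching is prtvtoc piped into fmthard (a sketch;
the device names and pool name are placeholders, s2 is the whole-disk slice):

# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
# zpool attach tank c1t0d0s0 c1t1d0s0

If the new drive's geometry differs from the old one's, copying the VTOC
verbatim can yield exactly the overlapping-slice complaint described above.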
Just disregard this thread. I'm resolving the issue using other methods (not
involving Solaris).
//Svein
I'm falling back to a rather similar solution, but with a different approach.
The FreeBSD "istgt" daemon can share targets from files, not zvols, and since
it has a plain ASCII config file, if I do things "less fancy" and use a flat
/storage (for the storage zpool) with subdirectories, that can b
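For reference, a minimal istgt LogicalUnit stanza backed by a flat file looks
roughly like this (a sketch from memory; the target name and paths are made
up, so check istgt's own sample config for the exact syntax):

[LogicalUnit1]
  TargetName storage-disk0
  Mapping PortalGroup1 InitiatorGroup1
  UnitType Disk
  LUN0 Storage /storage/targets/disk0.img Auto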
> Is there any work on an upgrade of zfs send/receive to handle resuming
> on next media?
See Darren's post regarding mkfifo. The purpose is to enable you to use
"normal" backup tools that support changing tapes to back up your "zfs send"
to multiple split tapes. I wonder though - During a rest
A number of apps support ZFS extended attributes, but not many support backing
up snapshots in a format that can be *easily* recovered (i.e., as a regular
snapshot of a filesystem). Hopefully this will change in the future ;-)
If you want to back up snapshots too, then perhaps something like:
mkfifo
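Darren's exact command isn't preserved above, but the general shape of the
fifo trick is (a sketch; the dataset, snapshot, and fifo names are
placeholders):

mkfifo /var/tmp/zfs.stream
zfs send -R tank@nightly > /var/tmp/zfs.stream &

Any backup tool that can span tapes is then pointed at /var/tmp/zfs.stream;
it reads the fifo to EOF while its own logic handles the media changes.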
Hmmm, I guess I have to give Amanda another look. I'm on snv_133 (since the yge
driver is present there and not in the older release, and all four NICs on my
server are Marvell 88e8056 Yukon chips and I really want dladm to work). Yet
another chicken-and-egg situation...
Does the amanda packages
And again ...
Is there any work on an upgrade of zfs send/receive to handle resuming on next
media?
I was thinking something along the lines of zfs send (when the device goes
full) returning
"send suspended. To resume insert new media and issue zfs resume "
and receive handling:
"zfs receive stre
Svein Skogen wrote:
> And herein lies the main clue. It seems Solaris, like FreeBSD, has the same
> lack-of-backup-options for zfs (dump doesn't work), unless you have a secondary
> server with at least the same amount of diskspace, running UFS, and willing to
> cache zfs sends on that ufs for writing those to the tape.
Replying to my own post (isn't that a sure indicator of insanity starting or
something?)
What are the chances of the next OpenSolaris x86/x64 LiveCD including
statically compiled bacula (or other) binaries, allowing the use of that for
disaster recovery from a backup?
//Svein
On Wed, 2010-03-03 at 08:08 -0600, Shawn Walker wrote:
> On 03/ 3/10 03:59 AM, Tony Williams wrote:
> > Thanks, that part works OK now. However, I am now seeing another issue
> > when running pkg install ha-cluster-full:
> >
> > r...@node1:~# /usr/bin/pkg install ha-cluster-full
> > Creating Plan
On a side note, 2009.06 doesn't exactly install its repo as easily as the
README suggests. At least not with a Dell AMD64 E521.
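For what it's worth, pointing the 2009.06 pkg client at a repository is a
one-liner once the URL is known (the stock release repo is shown; substitute
your own origin and publisher name as needed):

# pkg set-publisher -O http://pkg.opensolaris.org/release opensolaris.org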