On Mon, Nov 10, 2008 at 12:42 PM, Keith Bierman <[EMAIL PROTECTED]> wrote:
>
> On Nov 10, 2008, at 4:47 AM, Vikash Gupta wrote:
>
>> Hi Parmesh,
>>
>> Looks like this tender specification is meant for Veritas.
>>
>> How do you handle this particular clause?
Shall provide Centralized, Cross platform, Single console management GUI
My home server running snv_94 is tripping the same assertion when someone
lists a particular file:
::status
Loading modules: [ unix genunix specfs dtrace cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci ufs md ip hook neti sctp arp
usba qlc fctl nca lofs zfs audiosup sd cpc random
On Nov 11, 2008, at 1:56 AM, Victor Latushkin wrote:
> Henrik Johansson wrote:
>> Hello,
>> I have a snv101 machine with a three disk raidz pool which shows an
>> allocation of about 1TB for no obvious reason: no snapshots, no files,
>> nothing. I tried to run zdb on the pool to see if I got
>
I've been replicating a number of filesystems from a Solaris 10 update 6 system
to an update 5 one. All of the filesystems receive fine except for one, which
fails with
cannot receive: invalid backup stream
I can receive this stream on another update 6 system.
Are there any zfs options that
Henrik Johansson wrote:
> Hello,
>
> I have a snv101 machine with a three disk raidz pool which shows an
> allocation of about 1TB for no obvious reason: no snapshots, no files,
> nothing. I tried to run zdb on the pool to see if I got any useful
> info, but it has been working for over two hours
I was wondering if you ever figured this out or if you've reported
it. I'm testing a configuration using snv_99 and am seeing similar
behavior.
>I want to take advantage of the iSCSI target support in the latest
>release (svn_91) of OpenSolaris, and I'm running into some
>performance problems
We have a server (opensolaris 2008.05 upgraded to snv_90) that recently stopped
responding. Trying to SSH in would fail with a fork "no space left" error.
Maybe a runaway memory leak; not sure, so we hard rebooted...
Now the kernel panics on boot with the following message:
panic[cpu0]/thread=ff
Hello,
I have a snv101 machine with a three disk raidz pool which shows an
allocation of about 1TB for no obvious reason: no snapshots, no files,
nothing. I tried to run zdb on the pool to see if I got any useful
info, but it has been working for over two hours without any more
output.
I know
Will Murnane wrote:
>> the Barracuda ES.2 disks from Seagate are available in a SAS-version
>> and would seem to be a perfect fit for J4000 arrays. Does anyone have
>> any experience with these disks? Is it possible to install disks in
>> the "Disk Drive Filler Panels" which are delivered with the
On Mon, Nov 10, 2008 at 3:07 PM, Andy Lubel <[EMAIL PROTECTED]> wrote:
> LOL, I guess Sun forgot that they had xvm! I wonder if you could use a
> converter (vmware converter) to make it work on vbox etc?
>
> I would also like to see this available as an upgrade to our 4500's..
> Webconsole/zfs ju
FWIW:
[EMAIL PROTECTED]:01# kstat vmem::heap
module: vmem                            instance: 1
name:   heap                            class:    vmem
        alloc                           25055
        contains                        0
        contains_search                 0
        crt
LOL, I guess Sun forgot that they had xvm! I wonder if you could use a
converter (vmware converter) to make it work on vbox etc?
I would also like to see this available as an upgrade to our 4500's..
Webconsole/zfs just stinks because it only paints a tiny fraction of the
overall need for a web dr
It's a 64 bit dual processor 4 core Xeon kit. 16GB RAM. Supermicro-Marvell
SATA boards featuring the same S-ATA chips as the Sun x4500.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Nov 10, 2008, at 4:47 AM, Vikash Gupta wrote:
> Hi Parmesh,
>
> Looks like this tender specification is meant for Veritas.
>
> How do you handle this particular clause?
>>> Shall provide Centralized, Cross platform, Single console management
>>> GUI
>
Does it really make sense to have a discussion
What, no VirtualBox image?
This VMware image won't run on VMware Workstation 5.5 either :-(
Solaris 10 U6 and Solaris Express Community Edition can both be installed in
text mode. Nexenta and Belenix will also run on this machine with 512MB.
Nexenta is probably what you want, given that you say you are running Ubuntu
on this box.
-Original Message-
>>Fit-PC Slim uses Geo
Are these machines 32-bit by chance? I ran into similar seemingly
unexplainable hangs, which Marc correctly diagnosed and have since not
reappeared:
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-August/049994.html
Thomas
I attempted a live upgrade of an snv_100 UFS laptop to ZFS over the weekend.
It failed a couple of times complaining about not having enough space.
I decided to put the zfs upgrade on the back burner and deleted the
failed BE - naturally no luactivate was done at any time.
I didn't delete the BE's
On Nov 10, 2008, at 14:05, Eric Schrock wrote:
> If you want to give it a spin, be sure to check out the freely
> available VM images.
Took a bit of digging, but the VMware image is at:
http://www.sun.com/storage/disk_systems/unified_storage/resources.jsp
On Nov 10, 2008, at 10:55 AM, Tim wrote:
> Just got an email about this today. Fishworks finally unveiled?
Yup, that's us! On behalf of the Fishworks team, we'd like to extend a
big thank you to the ZFS team and the ZFS community here who have
contributed to such a huge building block in our new
On Mon, Nov 10, 2008 at 12:55:52PM -0600, Tim wrote:
> Just got an email about this today. Fishworks finally unveiled?
>
> http://www.sun.com/launch/2008-1110/index.jsp
Yes. The official homepage:
http://www.sun.com/unifiedstorage
And from the technical side:
http://blogs.sun
On Nov 10, 2008, at 13:55, Tim wrote:
> Just got an email about this today. Fishworks finally unveiled?
>
> http://www.sun.com/launch/2008-1110/index.jsp
Looks like it:
http://blogs.sun.com/fishworks/entry/launch_blogs
http://blogs.sun.com/main/tags/sunstorage7000
Thanks for the reply and corroboration, Brent. I just liveupgraded the machine
from Solaris 10 u5 to Solaris 10 u6, which purports to have fixed all known
issues with the Marvell device, and am still experiencing the hang. So I guess
this set of facts would imply one of:
1) they missed one, o
Just got an email about this today. Fishworks finally unveiled?
http://www.sun.com/launch/2008-1110/index.jsp
If anyone out there has a support contract with Sun that covers Solaris 10,
feel free to email me and/or Sun and have them add you to my support case.
The Sun Case is 66104157 and I am seeking to have 6333409 and 6418042
putback into Solaris 10.
CR 6712788 was closed as a duplicate of CR
River Tarnell wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Andrew Gabriel:
>
>> This is quite easily worked around by putting a buffering program
>> between the network and the zfs receive.
>>
>
> I tested inserting mbuffer with a 250MB buffer between the zfs send and zfs
>
Fit-PC Slim uses Geode LX800 which is 500 MHz CPU with 512 megs RAM.
> Well... it's easy to disable graphical login:
>
> svcadm disable cde-login
The problem is there's no option during install to say "no graphics", so during
first boot it's going to try anyway. At which point my console is hose
I have an open ticket to have these putback into Solaris 10.
On Fri, Nov 7, 2008 at 3:24 PM, Ian Collins <[EMAIL PROTECTED]> wrote:
> Brent Jones wrote:
> > Theres been a couple threads about this now, tracked some bug
> ID's/ticket:
> >
> > 6333409
> > 6418042
> I see these are fixed in build 10
On 10 November, 2008 - arun tomar sent me these 0,9K bytes:
> hi!
>
> I'm creating a storage server based on the ZFS file system. I have 3 hard
> disks of 500 GB capacity each, totaling almost 1.4 TB.
>
> Now when I create a raidz pool with these 3 disks:
>
> zpool create mypool raidz c0d0 c0d1 c1d1
>
> m
Hi
> Not merely a little pokey, it was unacceptably slow and the casing got very
> warm. I am guessing it was pushing the CPU right to 100% all the time. It took
> hours to load, and booting took minutes. Also, I didn't see an easy way to
> disable graphical login, so on every boot it would go
hi!
I'm creating a storage server based on the ZFS file system. I have 3 hard disks
of 500 GB capacity each, totaling almost 1.4 TB.
Now when I create a raidz pool with these 3 disks:
zpool create mypool raidz c0d0 c0d1 c1d1
My understanding, and what I've read in the ZFS best practices guide, says that
I should get
I loaded OpenSolaris nv101 on it and the result was very disappointing.
Not merely a little pokey, it was unacceptably slow and the casing got very
warm. I am guessing it was pushing the CPU right to 100% all the time. It took
hours to load, and booting took minutes. Also, I didn't see an easy way t
[EMAIL PROTECTED] wrote:
I notice the sys/atomic.h atomic_xxx interfaces are limited to things
that do read/modify/write (inc/dec/swap/etc). There is no atomic_set to
do a simple assignment.
My question is: what protocol is used to update a specific variable to
a specific
On Mon, Nov 10, 2008 at 12:32, Philipp Tobler
<[EMAIL PROTECTED]> wrote:
> Hello,
>
> the Barracuda ES.2 disks from Seagate are available in a SAS-version
> and would seem to be a perfect fit for J4000 arrays. Does anyone have
> any experience with these disks? Is it possible to install disks in
>
>I notice the sys/atomic.h atomic_xxx interfaces are limited to things
>that do read/modify/write (inc/dec/swap/etc). There is no atomic_set to
>do a simple assignment.
My question is: what protocol is used to update a specific variable to
a specific value WHILE AT THE SAME TIME another part c
I notice the sys/atomic.h atomic_xxx interfaces are limited to things
that do read/modify/write (inc/dec/swap/etc). There is no atomic_set to
do a simple assignment.
In a couple headers defining wrappers around the atomic_xxx interfaces,
some define an atomic_set that does a simple assignment,
Caimaniacs,
we are currently seeing a bunch of bug reports about
people ending up at the 'grub>' prompt
after installing a recent OpenSolaris
build (101a).
The root cause of this problem hasn't been
identified yet, and there are several possibilities
as to what might be happening here.
So far it s
Hello,
the Barracuda ES.2 disks from Seagate are available in a SAS-version
and would seem to be a perfect fit for J4000 arrays. Does anyone have
any experience with these disks? Is it possible to install disks in
the "Disk Drive Filler Panels" which are delivered with the J4000?
Cheers, Ph
Hi Parmesh,
Looks like this tender specification is meant for Veritas.
How do you handle this particular clause?
>> Shall provide Centralized, Cross platform, Single console management
>> GUI
Rgds
Vikash
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Sean Sp
> We require urgent help on the compliance sheet attached for
> filesystem ZFS for a USD 20 million storage tender in India.
Where I come from, companies at this stage of the tendering
process generally do not wish their details/requirements to be widely
publicised. Thus the referenc
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Andrew Gabriel:
> This is quite easily worked around by putting a buffering program
> between the network and the zfs receive.
I tested inserting mbuffer with a 250MB buffer between the zfs send and zfs
recv. Unfortunately, it seems to make very lit
I have filed following bug in 'solaris/kernel/zfs' category for tracking
this issue:
6769487 Ended up in 'grub>' prompt after installation of OpenSolaris
2008.11 (build 101a)
Thank you,
Jan
jan damborsky wrote:
> Hi ZFS team,
>
> when testing installation with recent OpenSolaris builds,
> we
Parmesh Sharma | Sales Manager – Data Management Group
Hi Dick,
We had already responded to the RFP two months back, and now they have come
back with clarifications. We want to respond at the earliest to close the
loop on the FUD created by Veritas.
Thank You,
Regards,
Parmesh
Parmesh Sharma wrote:
> ZFS for a USD 20 million storage tender in India.
Where I come from, 20 million dollar projects are never decided in a VERY
URGENT manner.
--
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u6 10/08 ZFS+