On Fri, Jul 2, 2010 at 9:55 PM, James C. McPherson wrote:
> On 3/07/10 12:25 PM, Richard Elling wrote:
>
>> On Jul 2, 2010, at 6:48 PM, Tim Cook wrote:
>>
>>> Given that the most basic of functionality was broken in Nexenta, and not
OpenSolaris, and I couldn't get a single response, I have a
On 3/07/10 12:25 PM, Richard Elling wrote:
On Jul 2, 2010, at 6:48 PM, Tim Cook wrote:
Given that the most basic of functionality was broken in Nexenta, and not
OpenSolaris, and I couldn't get a single response, I have a hard time
recommending ANYONE go to Nexenta. It's great they're employi
On Fri, Jul 2, 2010 at 9:25 PM, Richard Elling wrote:
> On Jul 2, 2010, at 6:48 PM, Tim Cook wrote:
> > Given that the most basic of functionality was broken in Nexenta, and not
> OpenSolaris, and I couldn't get a single response, I have a hard time
> recommending ANYONE go to Nexenta. It's grea
On Jul 2, 2010, at 6:48 PM, Tim Cook wrote:
> Given that the most basic of functionality was broken in Nexenta, and not
> OpenSolaris, and I couldn't get a single response, I have a hard time
> recommending ANYONE go to Nexenta. It's great they're employing you now, but
> the community edition
On Fri, Jul 2, 2010 at 8:06 PM, Richard Elling wrote:
> On Jul 2, 2010, at 12:53 PM, Steve Radich, BitShop, Inc. wrote:
>
> > I see in NexentaStor's announcement of Community Edition 3.0.3 they
> mention some backported patches in this release.
>
> Yes. These patches are in the code tree, curren
On Fri, Jul 2, 2010 at 1:18 AM, Ray Van Dolson wrote:
> We have a server with a couple X-25E's and a bunch of larger SATA
> disks.
>
> To save space, we want to install Solaris 10 (our install is only about
> 1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL
> attached to a z
On Jul 2, 2010, at 12:53 PM, Steve Radich, BitShop, Inc. wrote:
> I see in NexentaStor's announcement of Community Edition 3.0.3 they mention
> some backported patches in this release.
Yes. These patches are in the code tree, currently at b143 (~18 weeks
newer than b134)
> Aside from their man
> I think I'll try booting from a b134 Live CD and see if
> that will let me fix things.
Sadly it appears not - at least not straight away.
Running "zpool import" now gives
pool: storage2
id: 14701046672203578408
state: FAULTED
status: The pool was last accessed by another system.
action: Th
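If the devices themselves are healthy, the usual remedy for a pool that is
FAULTED because it "was last accessed by another system" is a forced import,
by name or by the numeric id shown above - a sketch, not a guarantee:
# zpool import -f storage2
or
# zpool import -f 14701046672203578408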
As most others have, I've been having issues with dedup.
Here's my situation: a 4TB pool for daily backups of SQL Server - dedup enabled -
so a typical directory has 100+ files that are mostly identical (some are
entirely identical).
If I do rm *, OpenSolaris is dead, zfs hung, etc. Sometimes it come
> Andrew,
>
> Looks like the zpool is telling you the devices are
> still doing work of
> some kind, or that there are locks still held.
>
Agreed; it appears the CSV1 volume is in a fundamentally inconsistent state
following the aborted zfs destroy attempt. See later in this thread where
Vict
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention
some backported patches in this release.
Aside from their management features / UI what is the core OS difference if we
move to Nexenta from OpenSolaris b134?
These DeDup bugs are my main frustration - if a staff membe
Dear Forum
Just some technical info for all who have the same problem with AMD chipsets and
Kingston SSDs.
Official statement from Kingston:
We have 2 major problem sources:
- the chipset comes from AMD (there is no better chipset for SSDs than Intel)
- your OS is not Windows (in which case all driver
On Jul 1, 2010, at 10:28 AM, Andrew Jones wrote:
> Victor,
>
> I've reproduced the crash and have vmdump.0 and dump device files. How do I
> query the stack on crash for your analysis? What other analysis should I
> provide?
Output of 'echo "::threadlist -v" | mdb 0' can be a good start in th
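A minimal sketch of collecting that, assuming the dump was saved as vmdump.0
in /var/crash/<hostname> (paths and dump number may differ on your system):
# cd /var/crash/`hostname`
# savecore -f vmdump.0
# echo "::threadlist -v" | mdb 0 > threadlist.txt
# echo "::status" | mdb 0
savecore -f unpacks the compressed vmdump.0 into unix.0/vmcore.0, which is
what 'mdb 0' opens.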
On 07/02/10 11:14, Erik Trimble wrote:
On 7/2/2010 6:30 AM, Neil Perrin wrote:
On 07/02/10 00:57, Erik Trimble wrote:
That's what I assumed. One further thought, though. Is the DDT
treated as a single entity - so it's *all* either in the ARC or in
the L2ARC? Or does it move one entry at
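As an aside, a rough way to see how large the DDT for a given pool actually is
(the pool name here is just a placeholder) is:
# zdb -DD tank
which prints the dedup table histogram along with the in-core and on-disk size
per entry, so you can estimate how much ARC/L2ARC it would need.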
On Fri, Jul 02, 2010 at 08:18:48AM -0700, Erik Ableson wrote:
> Le 2 juil. 2010 à 16:30, Ray Van Dolson a écrit :
>
> > On Fri, Jul 02, 2010 at 03:40:26AM -0700, Ben Taylor wrote:
> >>> We have a server with a couple X-25E's and a bunch of larger SATA
> >>> disks.
> >>>
> >>> To save space, we w
On 02/07/2010 17:56, Lutz Schumann wrote:
I don't know about the rest of your test, but writing
zeroes to a ZFS
filesystem is probably not a very good test, because
ZFS recognizes
these blocks of zeroes and doesn't actually write
anything. Unless
maybe encryption is on, but maybe not even then.
On 02/07/2010 17:57, Cindy Swearingen wrote:
I think the answer is no, you cannot rename the root pool and expect
that any other O/S-related boot operation will complete successfully.
Live Upgrade in particular would be unhappy and changing the root
dataset mount point might cause the system not
> "np" == Neil Perrin writes:
np> The L2ARC just holds blocks that have been evicted from the
np> ARC due to memory pressure. The DDT is no different than any
np> other object (e.g. file).
The other cacheable objects require pointers to stay in the ARC
pointing to blocks in the L
I have created one of my systems from a flash archive which was created from a
system running ZFS root, but since it is update 6 it didn't work with the flash
archive. After the system is built from the same flash archive, the system is up
but I get the
following error for rpool.
How can I remove th
On 7/2/2010 6:30 AM, Neil Perrin wrote:
On 07/02/10 00:57, Erik Trimble wrote:
That's what I assumed. One further thought, though. Is the DDT
treated as a single entity - so it's *all* either in the ARC or in
the L2ARC? Or does it move one entry at a time into the L2ARC as it
fills the A
Hi Julie,
I think the answer is no, you cannot rename the root pool and expect
that any other O/S-related boot operation will complete successfully.
Live Upgrade in particular would be unhappy and changing the root
dataset mount point might cause the system not to boot.
Thanks,
Cindy
On 07/02/
Hi,
> I don't know about the rest of your test, but writing
> zeroes to a ZFS
> filesystem is probably not a very good test, because
> ZFS recognizes
> these blocks of zeroes and doesn't actually write
> anything. Unless
> maybe encryption is on, but maybe not even then.
Not true. If I want ZFS
so - if I boot up off of CD-ROM and export the root pool under the name rpool1,
can I reimport it as rpool2 (using the same disk), keep it at that name
permanently, and have no issues with booting in the future or any other
O/S-related issues?
That is the question. :-)
On Fri, Jul 2, 2
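The mechanics being asked about are just an export and a re-import under a new
name while booted from other media, roughly:
# zpool export rpool1
# zpool import rpool1 rpool2
but, as Cindy notes elsewhere in this thread, doing that to a root pool is not
recommended and can leave the system unbootable.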
Thank you all, especially Edward, for the enlightenment.
Regards,
Peter
Hi Ray,
In general, using components from one pool for another pool is
discouraged because this configuration can cause deadlocks. Using this
configuration for ZIL usage would probably work fine (with a performance
hit because of the volume) until something unforeseen goes wrong. This
config is u
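The safer alternative, if the SSDs must be shared, is to slice them and give
the data pool a slice directly as its log device instead of a zvol carved out
of rpool - a sketch with placeholder names:
# zpool add datapool log c1t2d0s1
That way the two pools stay independent of each other.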
Hi Roy,
Yes, but Julie asked about renaming the root pool, which isn't
recommended because it can cause the system not to boot. You
also can't export the root pool without booting from alternate
media.
Thanks,
Cindy
On 07/02/10 09:22, Roy Sigurd Karlsbakk wrote:
- Original Message -
- Original Message -
> Cindy Swearingen oracle.com> writes:
>
> Cindy - this discusses how to rename the rpool temporarily. Is there a
> way to
> do it permanently and will it break anything? I have to rename a root
> pool
> because of a typo.
> This is on a Solaris SPARC environment.
>
On Fri, 2 Jul 2010, Julie LaMothe wrote:
Cindy - this discusses how to rename the rpool temporarily. Is there a
way to do it permanently and will it break anything? I have to rename a
root pool because of a typo. This is on a Solaris SPARC environment.
Please help!
The only difference
On Fri, Jul 02, 2010 at 03:40:26AM -0700, Ben Taylor wrote:
> > We have a server with a couple X-25E's and a bunch of
> > larger SATA
> > disks.
> >
> > To save space, we want to install Solaris 10 (our
> > install is only about
> > 1.4GB) to the X-25E's and use the remaining space on
> > the SSD'
Cindy Swearingen oracle.com> writes:
Cindy - this discusses how to rename the rpool temporarily. Is there a way to
do it permanently and will it break anything? I have to rename a root pool
because of a typo.
This is on a Solaris SPARC environment.
Please help!
thanks
Julie LaMothe
On 07/02/10 00:57, Erik Trimble wrote:
On 7/1/2010 10:17 PM, Neil Perrin wrote:
On 07/01/10 22:33, Erik Trimble wrote:
On 7/1/2010 9:23 PM, Geoff Nordli wrote:
Hi Erik.
Are you saying the DDT will automatically look to be stored in an
L2ARC device if one exists in the pool, instead of using
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> # zfs create mytest/peter
>
> where mytest is a zpool filesystem.
>
> When does it make sense to create such a filesystem versus just
> creating a directory?
This is a thorny
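One practical way to frame it (not a complete answer): a filesystem is a
separate point of administration, with its own properties, quota, snapshots
and send/receive, none of which a plain directory can have. Using the names
from the original post:
# zfs create mytest/peter
# zfs set compression=on mytest/peter
# zfs set quota=10G mytest/peter
# zfs snapshot mytest/peter@monday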
- Original Message -
> Sorry Roy, but reading the post you pointed me to:
> "meaning about 1.2GB per 1TB stored on 128kB blocks"
> I have 1.5TB and 4 GB of RAM, and not all of this is deduped.
> Why do you say it's *way* too small? It should be *way* more than enough.
> From the performance point of view, i
> We have a server with a couple X-25E's and a bunch of
> larger SATA
> disks.
>
> To save space, we want to install Solaris 10 (our
> install is only about
> 1.4GB) to the X-25E's and use the remaining space on
> the SSD's for ZIL
> attached to a zpool created from the SATA drives.
>
> Currently
Sorry Roy, but reading the post you pointed me to:
"meaning about 1.2GB per 1TB stored on 128kB blocks"
I have 1.5TB and 4 GB of RAM, and not all of this is deduped.
Why do you say it's *way* too small? It should be *way* more than enough.
From the performance point of view, it is not a problem, I use that machine
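To put numbers on the quoted figure: 1TB of unique 128kB blocks is roughly 8
million blocks, and ~1.2GB per TB implies about 150 bytes of DDT per block.
For 1.5TB that is roughly 12 million entries, or around 1.8GB of DDT, which
has to compete with everything else for space in a 4GB ARC - so whether 4GB is
"enough" depends on what else the machine does.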
Dear Cindy and Edward
Many thanks for your input. Indeed there is something wrong with the SSD.
Smartmontools also confirms a couple of errors.
So I opened a case and hopefully they will replace the SSD. What did I learn?
- Be careful of special offers
- Also use rock-solid components for your homese
On Jul 1, 2010, at 7:29 PM, Derek Olsen wrote:
> doh! It turns out the host in question is actually a Solaris 10 update 6
> host. It appears that a Solaris 10 update 8 host actually sets the start
> sector at 256.
Yes, this is a silly bug, fixed years ago.
> So to simplify the question.
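If you want to check what a given disk actually got, prtvtoc reports the first
sector of each slice on an EFI-labelled disk (the device name is a placeholder):
# prtvtoc /dev/rdsk/c0t0d0
A first sector of 256 means the default partition is aligned on a 128kB
boundary; the older default of 34 is not.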
On 01/07/2010 23:58, Derek Olsen wrote:
Folks.
My env is Solaris 10 update 8 amd64. Does LUN alignment matter when I'm
creating zpools on disks (LUNs) with EFI labels and providing zpool the
entire disk?
http://blogs.sun.com/dlutz/entry/partition_alignment_guidelines_for_unified
--
D
On 07/ 2/10 04:12 PM, Peter Taps wrote:
Folks,
While going through a quick tutorial on zfs, I came across a way to create a zfs
filesystem within a filesystem. For example:
# zfs create mytest/peter
where mytest is a zpool filesystem.
When done this way, the new filesystem has the mount point
On Thu, Jul 1, 2010 at 04:33, Lutz Schumann wrote:
> Hello list,
>
> I wanted to test deduplication a little and did an experiment.
>
> My question was: can I dedupe infinitely or is there an upper limit?
>
> So for that I did a very basic test.
> - I created a ramdisk-pool (1GB)
> - enabled dedup a
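For anyone wanting to repeat the experiment, the ramdisk pool setup looks
roughly like this (names and size are arbitrary):
# ramdiskadm -a rd1 1g
# zpool create rdpool /dev/ramdisk/rd1
# zfs set dedup=on rdpool
then write duplicate data into rdpool and watch the DEDUP ratio in 'zpool list'.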
> However, SVM+UFS is more annoying to work with as far as LiveUpgrade is
> concerned. We'd love to use a ZFS root, but that requires that the
> entire SSD be dedicated as an rpool leaving no space for ZIL. Or does
> it?
>
> It appears that we could do a:
>
> # zfs create -V 24G rpool/zil
>
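Presumably the next step in that scheme is to hand the zvol to the data pool
as its log device, something like (datapool is a placeholder):
# zpool add datapool log /dev/zvol/dsk/rpool/zil
though note Cindy's warning elsewhere in this digest that backing one pool's
log with a volume from another pool is discouraged because it can deadlock.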
I've recently acquired some storage and have been trying to copy data from a
remote data center to hold backup data. The copies had been going for weeks,
with about 600GB transferred so far, and then I noticed the throughput on the
router stopped. I see a pool has disappeared.
# zpool status -x
On 7/1/2010 10:17 PM, Neil Perrin wrote:
On 07/01/10 22:33, Erik Trimble wrote:
On 7/1/2010 9:23 PM, Geoff Nordli wrote:
Hi Erik.
Are you saying the DDT will automatically look to be stored in an
L2ARC device if one exists in the pool, instead of using ARC?
Or is there some sort of memory p