I am using ZFS as the backing store for an iSCSI target running a virtual
machine.
I am looking at using an 8K block size on the ZFS volume.
I was looking at the COMSTAR iSCSI settings and there is also a blk size
configuration, which defaults to 512 bytes. That would make me believe that
al
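For reference, a minimal sketch of how the two sizes are set independently (the pool, zvol, and LU names below are made up for illustration):

    # create a zvol with an 8K volblocksize (tank/vm01 is hypothetical)
    zfs create -V 100G -o volblocksize=8K tank/vm01
    # expose it through COMSTAR; blk is the logical block size the initiator sees
    stmfadm create-lu -p blk=512 /dev/zvol/rdsk/tank/vm01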
On Sun, 9 May 2010, Edward Ned Harvey wrote:
So, Bob, rub it in if you wish. ;-) I was wrong. I knew the behavior in
Linux, which Roy seconded as "most OSes," and apparently we both assumed the
same here, but that was wrong. I don't know if Solaris and OpenSolaris both
have the same swap beh
On Sun, May 09, 2010 at 09:24:38PM -0500, Mike Gerdts wrote:
> The best thing to do with processes that can be swapped out forever is
> to not run them.
Agreed, however:
#1 Shorter values of "forever" (like, say, "daily") may still be useful.
#2 This relies on knowing in advance what these proc
On Sun, May 9, 2010 at 7:40 PM, Edward Ned Harvey wrote:
>
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Richard Elling
> >
> > For a storage server, swap is not needed. If you notice swap being used
> > then your storage server is undersized.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
> For a storage server, swap is not needed. If you notice swap being used
> then your storage server is undersized.
Indeed, I have two Solaris 10 fileservers that have uptime
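For anyone who wants to check this on their own machines, the stock Solaris tools report swap usage directly:

    # list each swap device and how much of it is in use
    swap -l
    # summary of allocated, reserved, and available swap
    swap -s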
On Sat, May 8 at 23:39, Ben Rockwood wrote:
The drive (c7t2d0) is bad and should be replaced. The second drive
(c7t5d0) is either bad or going bad. This is exactly the kind of
problem that can force a Thumper to its knees: ZFS performance is
horrific, and as soon as you drop the bad disks things
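A quick way to confirm a suspect drive like this with standard Solaris tooling (device names taken from the post):

    # per-device error counters; climbing Hard/Transport errors point at a dying drive
    iostat -En c7t2d0 c7t5d0
    # pool-level health; note a scrub only verifies checksums on read, so a
    # mechanically slow but error-free drive can pass a scrub and still kill latency
    zpool status -x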
I know that according to the documentation Solaris is supposed to be
fully operational in the absence of swap devices. However, I've experienced
cases, whose root cause I have not yet been able to trace, where the disk
access has increased drastically and caused the system to hang but it ma
Hello,
I see strange behaviour when qualifying disk drives for ZFS. The tests I want
to run should make sure that the drives honour the cache flush command. For
this I do the following:
1) Create single-disk pools (only one disk in the pool)
2) Perform I/O on the pools
This is done via SQLite an
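A minimal sketch of that setup, assuming placeholder disk names:

    # one pool per disk under test
    zpool create qual0 c1t0d0
    zpool create qual1 c1t1d0
    # SQLite issues fsync() on every commit, which ZFS turns into a
    # cache-flush command that the drive must honour
    sqlite3 /qual0/test.db 'CREATE TABLE t(x); INSERT INTO t VALUES(1);'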
size of snapshot?
r...@filearch1:/var/adm# zfs list mpool/export/projects/project1...@today
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
mpool/export/projects/project1...@today      0      -   407G  -
r...@filearch1:/var/adm# zfs list tank/export/projects/project1...@
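A snapshot showing USED of 0 simply means no blocks are unique to it yet; the 407G under REFER is shared with the live filesystem. Two ways to watch that change (dataset names here are shortened/hypothetical):

    # USED grows as the live filesystem diverges from the snapshot
    zfs list -t snapshot -r -o name,used,refer mpool/export/projects
    # total space that destroying all of a dataset's snapshots would free
    zfs get usedbysnapshots mpool/export/projects/project1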
On May 9, 2010, at 11:16 AM, Jim Horng wrote:
> Okay, so after some tests with dedup on snv_134, I decided we cannot use
> the dedup feature for the time being.
>
> Since I was unable to destroy a deduped file system, I decided to migrate
> the file system to another pool and then destroy the pool. (se
On May 9, 2010, at 6:30 AM, Roy Sigurd Karlsbakk wrote:
> - "Bob Friesenhahn" skrev:
>
>> On Sat, 8 May 2010, Edward Ned Harvey wrote:
>>>
>>> A vast majority of the time, the opposite is true. Most of the
>>> time, having swap available increases performance. Because the kernel
>>> is able
Okay, so after some tests with dedup on snv_134, I decided we cannot use the
dedup feature for the time being.
Since I was unable to destroy a deduped file system, I decided to migrate the
file system to another pool and then destroy the pool. (see below)
http://opensolaris.org/jive/thread.jspa?threadI
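A sketch of that kind of migration (pool and dataset names invented here):

    # snapshot the deduped filesystem and replicate it to the new pool
    zfs snapshot oldpool/data@migrate
    zfs send oldpool/data@migrate | zfs receive newpool/data
    # once the copy is verified, the old pool and its dedup tables go away at once
    zpool destroy oldpool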
On May 8, 2010, at 7:01 PM, Tony wrote:
> OK, this is definitely the kind of feedback I was looking for. It looks
> like I'll have to check out the docs on these technologies. Appreciate it.
>
> I figured I would load balance the hosts with a Cisco device, since I can get
> around the IOS o
On Sun, 9 May 2010, Roy Sigurd Karlsbakk wrote:
Are you sure about this? It is always good to be sure ...
This is the case with most OSes now. Swap out stuff early, perhaps
keep it in RAM and swap at the same time, and the kernel can choose
what to do later. In Linux you can set it in
/pro
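The tunable being described is presumably Linux's vm.swappiness:

    # how aggressively the kernel swaps out idle pages (0-100, default 60)
    cat /proc/sys/vm/swappiness
    # favour early swap-out, as described above
    sysctl vm.swappiness=80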
Additionally, I would like to mention that the only ZFS filesystem not
mounting (causing the entire "zpool import backup" command to hang)
is the only filesystem configured to be exported via NFS:
backup/insightiq  sharenfs  root=*  local
Is there any chance the NFS s
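One way to test that theory, assuming your build supports zpool import -N (dataset name from the post):

    # import the pool without mounting or sharing anything,
    # so a hung NFS share cannot block the import
    zpool import -N backup
    # disable sharing on the suspect filesystem, then mount everything
    zfs set sharenfs=off backup/insightiq
    zfs mount -a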
I'm answering my own question, having just decided to try it. Yes, anything you
want to persist beyond reboot with EON that's not in the zfs pools has to have
an image update done before shutdown.
I had this Doh! moment after I did the trial. Of course all the system config
has to be on the sy
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
>
> On Sat, 8 May 2010, Edward Ned Harvey wrote:
> >
> > A vast majority of the time, the opposite is true. Most of the time,
> > having swap available increases performance. Because the kernel is able
> > to choose: "Should I swa
- "Giovanni" skrev:
> Hi,
>
> Were you ever able to solve this problem on your AOC-SAT2-MV8 card? I
> am in need of purchasing it to add more drives to my server.
>
What problem was this? I have two servers with these cards and they work well.
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47)
- "Bob Friesenhahn" skrev:
> On Sat, 8 May 2010, Edward Ned Harvey wrote:
> >
> > A vast majority of the time, the opposite is true. Most of the
> > time, having swap available increases performance. Because the kernel
> > is able to choose: "Should I swap out this idle process, or shou
Hi Ben,
> The drive (c7t2d0) is bad and should be replaced.
> The second drive (c7t5d0) is either bad or going bad.
Dagnabbit. I'm glad you told me this, but I would have thought that running a
scrub would have alerted me to some fault?
> and as soon as you drop the bad disks things m