Something occurs to me: how full is your current 4 vdev pool? I'm
assuming it's not over 70% or so.
Yes, by adding another 3 vdevs, any writes will be biased towards the
"empty" vdevs, but that's for less-than-full-stripe-width writes (right,
Richard?). That is, if I'm doing a write that w
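For reference, assuming the pool is simply named "tank" (a placeholder), the fill level is quick to check:

zpool list tank        # the CAP column shows the percentage of the pool in use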
On Sat, Nov 21 at 15:44, Emily Grettel wrote:
Thanks Mike, this fixed it.
I'll stick with this version and see if the CIFS problems continue.
Thanks a lot for everyone's assistance.
Cheers,
Em
Out of curiosity, do any of your users have problems seeing files on
CIFS shares from within cmd.exe
I'm wondering if anyone knows of any good guides on setting up users on
OpenSolaris and ZFS. Previously we were using Ext3 on LVM with Samba and
everyone had a Samba account and /homes would be their home directory.
Can something like that (or better) be done with ZFS + CIFS on OpenSolaris?
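Not a full guide, but a rough sketch of the per-user-dataset approach may help as a starting point. This assumes the in-kernel CIFS service (not Samba), and "tank" and the user "em" are placeholder names:

zfs create -o mountpoint=/export/home tank/home
zfs create tank/home/em
zfs set sharesmb=name=em tank/home/em        # share the home dataset over CIFS
useradd -d /export/home/em em
chown em /export/home/em
# enable SMB password generation so CIFS logins work, then (re)set the password
echo "other password required pam_smb_passwd.so.1 nowarn" >> /etc/pam.conf
passwd em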
Thanks Mike, this fixed it.
I'll stick with this version and see if the CIFS problems continue.
Thanks a lot for everyone's assistance.
Cheers,
Em
> Date: Fri, 20 Nov 2009 22:06:15 -0600
> Subject: Re: [zfs-discuss] CIFS shares being lost
> From: mger...@gmail.com
> To: emilygrettelis
On Fri, Nov 20, 2009 at 7:55 PM, Emily Grettel
wrote:
> Well I took the plunge updating to the latest dev version. (snv_127) and I
> don't seem to be able to remotely login via ssh via putty:
>
> Using username "emilytg".
> Authenticating with public key "dsa-pub" from agent
> Server refused to al
On Fri, Nov 20, 2009 at 7:55 PM, Emily Grettel <
emilygrettelis...@hotmail.com> wrote:
> Well I took the plunge updating to the latest dev version. (snv_127) and I
> don't seem to be able to remotely login via ssh via putty:
>
> Using username "emilytg".
> Authenticating with public key "dsa-pub"
Well, I took the plunge and updated to the latest dev version (snv_127), and I
don't seem to be able to remotely log in via SSH from PuTTY:
Using username "emilytg".
Authenticating with public key "dsa-pub" from agent
Server refused to allocate pty
Sun Microsystems Inc. SunOS 5.11 snv_127 No
Hello,
This sounds similar to a problem I had a few months ago:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6869512
I don't have a solution, but information from this possibly related
bug may help.
Andrew
> by a Win7 client was crashing our CIFS server within 5-10 seconds.
Hmmm, that's probably it then. Most of our users have been using Windows 7, and
people put their machines on standby when they leave the office for the day.
Maybe this is why we've had issues and have had to restart on a daily basis.
Aha! That makes perfect sense looking at the logs :-) However, there is no
ddclient on this box, so I'm a bit lost as to what's going on. There are no NT
domains either - it's basically a test lab. People just use a standard
OpenSolaris account we've set up (for now!) and upload stuff to the NAS.
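If the box isn't in any DNS domain, you can probably just switch off the dynamic DNS updates that smbd is complaining about. I believe (worth double-checking on your build) the property is:

sharectl set -p ddns_enable=false smb
svcadm restart network/smb/server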
On Sat, Nov 21 at 11:41, Emily Grettel wrote:
Wow that was mighty quick Tim!
Sorry, I have to reboot the server. I can SSH into the box, VNC etc
but no CIFS shares are visible.
I found 2009.06 to be unusable for CIFS due to hangs that weren't
resolved until b114/b116. We had to revert to 200
> The latter, we run these VMs over NFS anyway and had
> ESXi boxes under test already. we were already
> separating "data" exports from "VM" exports. We use
> an in-house developed configuration management/bare
> metal system which allows us to install new machines
> pretty easily. In this case we
On Fri, Nov 20, 2009 at 6:56 PM, Emily Grettel <
emilygrettelis...@hotmail.com> wrote:
> Ah!
>
> Here are the outputs
>
>
> cat /var/adm/messages | grep smbd
> Nov 20 23:38:38 sta-nas-01 smbd[552]: [ID 413393 daemon.error] dyndns:
> failed to get domainname
> Nov 20 23:38:39 sta-nas-01 smbd[552
Ah!
Here are the outputs:
cat /var/adm/messages | grep smbd
Nov 20 23:38:38 sta-nas-01 smbd[552]: [ID 413393 daemon.error] dyndns: failed to get domainname
Nov 20 23:38:39 sta-nas-01 smbd[552]: [ID 413393 daemon.error] dyndns: failed to get domainname
Nov 20 23:48:55 sta-nas-01 smbd[552]:
On Fri, Nov 20, 2009 at 6:41 PM, Emily Grettel <
emilygrettelis...@hotmail.com> wrote:
> Wow that was mighty quick Tim!
>
> Sorry, I have to reboot the server. I can SSH into the box, VNC etc but no
> CIFS shares are visible.
>
> Here are the last few messages from /var/adm/messages
>
> Nov 20 23
Wow that was mighty quick Tim!
Sorry, I have to reboot the server. I can SSH into the box, VNC, etc., but no CIFS
shares are visible.
Here are the last few messages from /var/adm/messages:
Nov 20 23:48:55 sta-nas-01 pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata)
instance 0 irq 0xe vector 0
On Fri, Nov 20, 2009 at 6:17 PM, Emily Grettel <
emilygrettelis...@hotmail.com> wrote:
> Hi,
>
> I'm just starting out on ZFS and OpenSolaris (2009.06) and I'm having an
> issue with CIFS shares not working after a while. I have about 30 users,
> I've updated our NT scripts to mount the share at
Hi,
I'm just starting out with ZFS and OpenSolaris (2009.06) and I'm having an issue
with CIFS shares not working after a while. I have about 30 users, and I've updated
our NT scripts to mount the share at startup; this works OK. If I leave the
server up, after a day the CIFS shares stop working.
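For anyone hitting the same thing, a few generic first steps (not a fix) when the shares disappear are to check whether the CIFS service itself has dropped into maintenance and what smbd logged:

svcs -xv network/smb/server          # is the service still online, and if not, why
tail -50 /var/adm/messages           # recent smbd/kernel messages
svcadm clear network/smb/server      # only if it is in maintenance
svcadm restart network/smb/server    # otherwise a restart often brings the shares back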
On Nov 20, 2009, at 12:14 PM, Jesse Stroik wrote:
There are, of course, job types where you use the same set of data
for multiple jobs, but having even a small amount of extra memory
seems to be very helpful in that case, as you'll have several
nodes reading the same data at roughly the sa
Bruno,
Bruno Sousa wrote:
Interesting, at least to me, the part where "this storage node is very
small (~100TB)" :)
Well, that's only as big as two x4540s, and we have lots of those for a
slightly different project.
Anyway, how are you using your ZFS? Are you creating volumes and pres
I've done some work on such things. The difficulty in design is figuring
out how often to do the send. You will want to balance your send time
interval with the write rate such that the send data is likely to be
in the ARC.
There is no magic formula, but empirically you can discover a reaso
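As a very rough sketch of what that could look like in practice (my own illustration, not Richard's setup; the names and the 5-minute interval are made up, and it assumes the initial full send/recv has already been done):

while true; do
  prev=$(zfs list -H -t snapshot -o name -s creation -r tank/data | tail -1)
  snap="tank/data@auto-$(date +%Y%m%d%H%M%S)"
  zfs snapshot $snap
  zfs send -i $prev $snap | ssh backup01 zfs recv -F backup/data
  sleep 300   # short enough that the just-written blocks are usually still in the ARC
done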
Interesting, at least to me, the part where "this storage node is very
small (~100TB)" :)
Anyway, how are you using your ZFS? Are you creating volumes and presenting
them to end-nodes over iSCSI/fibre, NFS, or other? It could be helpful to
use some sort of cluster filesystem to have some more contro
Hi,
You have two options for using this chassis:
* add a motherboard that can hold redundant power supplies, and
this will just be a 4U server with several disks
* use a server with the LSI card (or another one) and connect this LSI
with a SAS cable to the chassis,
There are, of course, job types where you use the same set of data for
multiple jobs, but having even a small amount of extra memory seems to
be very helpful in that case, as you'll have several nodes reading the
same data at roughly the same time.
Yep. More, faster memory closer to the cons
On Nov 20, 2009, at 11:27 AM, Adam Serediuk wrote:
I have several X4540 Thor systems with one large zpool that
replicate data to a backup host via zfs send/recv. The process works
quite well when there is little to no usage on the source systems.
However when the source systems are under us
On Nov 20, 2009, at 10:16 AM, Jesse Stroik wrote:
Thanks for the suggestions thus far,
Erik:
In your case, where you had a 4 vdev stripe, and then added 3
vdevs, I would recommend re-copying the existing data to make sure
it now covers all 7 vdevs.
Yes, this was my initial reaction as we
I have several X4540 Thor systems with one large zpool that replicates
data to a backup host via zfs send/recv. The process works quite well
when there is little to no usage on the source systems. However, when
the source systems are under load, replication slows down to a near
crawl. Without
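One thing that has helped in similar setups (hedging here: mbuffer is a third-party tool, the snapshot names are placeholders, and the buffer sizes are guesses) is putting mbuffer between send and recv so the bursty send stream doesn't stall on the network or on the busy source disks:

# on the backup host, start the receiving side first
mbuffer -I 9090 -s 128k -m 1G | zfs recv -F backup/data

# on the source host
zfs send -i tank/data@prev tank/data@now | mbuffer -s 128k -m 1G -O backup01:9090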
Hi,
Can anyone identify whether this is a known issue (perhaps 6667208) and
if the fix is going to be pushed out to Solaris 10 anytime soon? I'm
getting badly beaten up over this weekly, essentially anytime we drop a
packet between our twenty-odd iSCSI-backed zones and the filer.
Chris was
On Wed, Nov 18, 2009 at 3:24 AM, Bruno Sousa wrote:
> Hi Ian,
>
> I use the Supermicro SuperChassis 846E1-R710B, and I added the JBOD kit that
> has:
>
> Power Control Card
>
> SAS 846EL2/EL1 BP External Cascading Cable
>
> SAS 846EL1 BP 1-Port Internal Cascading Cable
>
> I don't do any monitori
On Fri, 20 Nov 2009, Jesse Stroik wrote:
Yes, this was my initial reaction as well, but I am concerned with the fact
that I do not know how zfs populates the vdevs. My naive guess is that it
either fills the most empty, or (and more likely) fills them at a rate
relative to their amount of fr
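Rather than guessing, you can watch what the allocator actually does; per-vdev allocation and write traffic show up with (pool name is a placeholder):

zpool iostat -v tank 5      # per-vdev used/available plus ops and bandwidth, every 5 seconds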
On Fri, 20 Nov 2009, Richard Elling wrote:
Buy a large, read-optimized SSD (or several) and add it as a cache device :-)
But first install as much RAM as the machine will accept. :-)
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagi
Thanks for the suggestions thus far,
Erik:
In your case, where you had a 4 vdev stripe, and then added 3 vdevs, I
would recommend re-copying the existing data to make sure it now covers
all 7 vdevs.
Yes, this was my initial reaction as well, but I am concerned with the
fact that I do not k
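If you do decide to re-copy, one way to do it entirely inside the same pool is a local send/recv plus a rename. A sketch only: dataset names are placeholders, and you need enough free space for a second copy while it runs:

zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs recv tank/data.new     # newly written blocks spread over all 7 vdevs
zfs rename tank/data tank/data.old
zfs rename tank/data.new tank/data
# destroy tank/data.old once you're happy with the copy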
Richard Elling wrote:
Buy a large, read-optimized SSD (or several) and add it as a cache
device :-)
-- richard
On Nov 20, 2009, at 8:44 AM, Jesse Stroik wrote:
I'm migrating to ZFS and Solaris for cluster computing storage, and
have some completely static data sets that need to be as fast as
Buy a large, read-optimized SSD (or several) and add it as a cache
device :-)
-- richard
On Nov 20, 2009, at 8:44 AM, Jesse Stroik wrote:
I'm migrating to ZFS and Solaris for cluster computing storage, and
have some completely static data sets that need to be as fast as
possible. One of t
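For completeness, adding an SSD as a cache (L2ARC) device is a one-liner; the pool and device names below are made up:

zpool add tank cache c7t2d0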
On Nov 20, 2009, at 1:47 AM, Mart van Santen wrote:
Richard Elling wrote:
On Nov 19, 2009, at 7:39 AM, Mart van Santen wrote:
Hi,
We are using multiple opensolaris 06/09 and solaris 10 servers.
Currently we are 'dumping' (incremental)backups to a backup
server. I wonder if anybody knows wh
I'm migrating to ZFS and Solaris for cluster computing storage, and have
some completely static data sets that need to be as fast as possible.
One of the scenarios I'm testing is the addition of vdevs to a pool.
Starting out, I populated a pool that had 4 vdevs. Then, I added 3 more
vdevs and
Erin wrote:
> The issue that we have is that the first two vdevs were almost full, so we
> will quickly be in the state where all writes will be on the 3rd vdev. It
> would
> also be useful to have better read performance, but I figured that solving the
> write performance optimization would also
> All new writes will be spread across the 3 vdevs. Existing data stays where it
> is for reading, but if you update it, those writes will be balanced across
> all 3
> vdevs. If you are mostly concerned with write performance, you don't have to
> do
> anything.
>
> Regards,
> Eric
The issue tha
On Wed, 18 Nov 2009, Joe Cicardo wrote:
For performance I am looking at disabling ZIL, since these files have almost
identical names.
Out of curiosity, what correlation is there between ZIL and file
names? The ZIL is used for synchronous writes (e.g. the NFS write
case). After a file has
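If the real pain is synchronous NFS writes, a dedicated log device is usually a safer route than disabling the ZIL outright; a sketch with a made-up device name:

zpool add tank log c3t4d0     # dedicate a fast device (ideally an SSD) to the ZIL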
On 11/18/09 12:21, Joe Cicardo wrote:
Hi,
My customer says:
Application has NFS directories with millions of files in a directory,
and this can't changed.
We are having issues with the EMC appliance and RPC timeouts on the NFS
lookup. I am looking doing
Erin wrote:
> How do we spread the data that is stored on the first two raidz2 devices
> across all three so that when we continue to write data to the storage pool,
> we will get the added performance of writing to all three devices instead of
> just the empty new one?
All new writes will be spre
> "m" == Michael writes:
m> zpool built from iSCSI targets from several machines at
m> present, i'm considering buying a 16 port SATA controller and
m> putting all the drives into one machine, if i remove all the
m> drives from the machines offering the iSCSI targets and
Michael wrote:
Hey guys, I have a zpool built from iSCSI targets from several
machines at present, i'm considering buying a 16 port SATA controller
and putting all the drives into one machine, if i remove all the
drives from the machines offering the iSCSI targets and place them
into the 1 mac
Hey guys, I have a zpool built from iSCSI targets from several machines at
present. I'm considering buying a 16-port SATA controller and putting all
the drives into one machine. If I remove all the drives from the machines
offering the iSCSI targets and place them into the one machine, connected via
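Assuming the iSCSI targets were exporting the whole physical disks (so the ZFS labels live on the disks themselves rather than inside files or zvols), the move is essentially an export and an import; a rough outline, with "tank" as a placeholder pool name:

zpool export tank        # on the current head, before pulling the disks
# move the disks into the new machine with the SATA controller
zpool import             # should list the pool found on the new controller
zpool import tank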
Thanks a lot. This clears many of the doubts I had.
I was actually trying to improve the performance of our email storage. We are
using Dovecot as the LDA on a set of RHEL boxes and the email volume seems to
be saturating the write throughput of our Infortrend iSCSI SAN.
So it looks like a mail
Hi,
My customer says:
The application has NFS directories with millions of files in a directory,
and this can't be changed.
We are having issues with the EMC appliance and RPC timeouts on the NFS
lookup. What I am looking at doing
is moving one of the major NFS exports to
Thanks for your note:
Re type of 7xxx system: it is most likely a 7310 with one tray, two if we can
squeeze it in.
Each tray will, we hope, be 22x 1TB disks and 2x 18GB SSDs. In a private
response to this I got:
>> With SSD it performs better than the Thumper. My feeling would be two
>> trays plu
Cindy:
Thanks for your reply. These units are located at a remote site
300km away, so you're right about the main issue being able to
map the OS and/or ZFS device to a physical disk. The use of
alias devices was one way we thought to make this mapping more
intuitive, although of course we'd always
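For mapping a cXtYdZ name to a physical drive remotely, matching serial numbers has worked for others (generic commands, nothing site-specific):

zpool status tank        # the device names ZFS knows about
iostat -En               # per-device vendor, product and serial number
cfgadm -al               # how those devices map onto controller/slot attachment points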
Erik Trimble wrote:
I'm just wondering if anyone has tried this, and what the performance
has been like.
Scenario:
I've got a bunch of v20z machines, with 2 disks. One has the OS on it,
and the other is free. As these are disposable client machines, I'm not
going to mirror the OS disk.
I
I'm just wondering if anyone has tried this, and what the performance
has been like.
Scenario:
I've got a bunch of v20z machines, with 2 disks. One has the OS on it,
and the other is free. As these are disposable client machines, I'm not
going to mirror the OS disk.
I have a disk server wi
Richard Elling wrote:
On Nov 19, 2009, at 7:39 AM, Mart van Santen wrote:
Hi,
We are using multiple opensolaris 06/09 and solaris 10 servers.
Currently we are 'dumping' (incremental)backups to a backup server. I
wonder if anybody knows what happens when I send/recv a zfs volume
from version
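On the version question, it is at least easy to see exactly which pool and filesystem versions are involved on each end before testing a send/recv; for example (pool name is a placeholder):

zpool get version tank
zfs get -r version tank
zpool upgrade -v      # the versions this build understands
zfs upgrade -v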
Hi,
I read the other posts in this thread, and they look fine. But still, I
think the concept has a problem. Generally, it is a good idea to separate
the root pool from data, but as I understand it, you have only one physical
disk.
I would install Solaris using ZFS as the root filesystem on the whole
disk an
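One way to read that suggestion (my interpretation, names are placeholders) is to keep everything in rpool but give the data its own datasets, so quotas, snapshots and sharing can be managed separately from the OS:

zfs create -o mountpoint=/data rpool/data
zfs create rpool/data/home     # and so on, one dataset per thing you want to manage separately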
Hello list,
I pre-created the pools we would use for when the SSDs eventually come in. Not my
finest moment, perhaps.
Since I knew the SSDs would be 32GB in size, I created 32GB slices on the HDDs in
slots 36 and 44.
* For future reference to others thinking of doing the same, do not bother setting
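When the real SSDs do turn up, swapping them in for the placeholder slices should just be a replace per device; device names below are invented:

zpool replace tank c4t36d0s0 c5t0d0    # the new device must be at least as large as the old slice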