Tomorrow, Ian Collins wrote:
On 04/26/12 10:34 AM, Paul Archer wrote:
That assumes the data set will fit on one machine, and that machine won't be a
performance bottleneck.
Aren't those general considerations when specifying a file server?
I suppose. But I meant specifically tha
2:34pm, Rich Teer wrote:
On Wed, 25 Apr 2012, Paul Archer wrote:
Simple. With a distributed FS, all nodes mount from a single DFS. With NFS,
each node would have to mount from each other node. With 16 nodes, that's
what, 240 mounts? Not to mention your data is in 16 different
m
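For the arithmetic: with every one of the 16 nodes cross-mounting the other 15
over NFS, that's a full mesh of 16 x 15 = 240 mounts, versus one mount per node
against a single DFS. A trivial check:

    echo $(( 16 * 15 ))   # NFS full mesh: 240 cross-mounts
    echo $(( 16 * 1 ))    # single DFS: 16 mounts, one per node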
2:20pm, Richard Elling wrote:
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
Interesting, something more complex than NFS to avoid the
complexities of NFS? ;-)
We have data coming in on multiple nodes (with local storage) that is
needed on other multiple nodes. The only
9:08pm, Stefan Ring wrote:
Sorry for not being able to contribute any ZoL experience. I've been
pondering whether it's worth trying for a few months myself already.
Last time I checked, it didn't support the .zfs directory (for
snapshot access), which you really don't want to miss after getting
>To put it slightly differently, if I used ZoL in production, would I be likely
to experience performance or stability problems?
I saw one team revert from ZoL (CentOS 6) back to ext on some backup servers
for an application project, the killer was
stat times (find running slow etc.), perhaps
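For anyone who hasn't used it, the .zfs directory mentioned above is the hidden
per-dataset entry point for browsing snapshots. A rough illustration (dataset
and snapshot names are made up):

    zfs get snapdir tank/data        # 'hidden' by default, but still reachable by explicit path
    ls /tank/data/.zfs/snapshot/     # one subdirectory per snapshot
    cp /tank/data/.zfs/snapshot/daily-2012-04-25/lost_file.conf /tank/data/   # read-only restore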
11:26am, Richard Elling wrote:
On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
The point of a clustered filesystem was to be able to spread our data out
among all nodes and still have access
from any node without having to run NFS. Size of the data set (once you
get past the
9:59am, Richard Elling wrote:
On Apr 25, 2012, at 5:48 AM, Paul Archer wrote:
This may fall into the realm of a religious war (I hope not!), but
recently several people on this list have
said/implied that ZFS was only acceptable for production use on FreeBSD
(or Solaris, of
This may fall into the realm of a religious war (I hope not!), but recently
several people on this list have said/implied that ZFS was only acceptable for
production use on FreeBSD (or Solaris, of course) rather than Linux with ZoL.
I'm working on a project at work involving a large(-ish) amoun
3:26pm, Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 09:04:50PM -0500, Paul Archer wrote:
I realize that I did things in the wrong order. I should have removed the
oldest snapshot first, on to the newest, and then removed the data in the
FS itself.
For the problem in question, this is
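For reference, a sketch of the order being described: destroy snapshots from
oldest to newest, and only then remove the live data. Pool and dataset names
are hypothetical:

    # list snapshots oldest-first and destroy them in that order
    for snap in $(zfs list -H -t snapshot -o name -s creation -r tank/staging); do
        zfs destroy "$snap"
    done
    # only after the snapshots are gone, remove the data in the FS itself
    rm -rf /tank/staging/*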
Yesterday, Erik Trimble wrote:
Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
So I turned deduplication on on my staging FS (the one that gets mounted
on the database servers) yesterday, and since then I've been seeing the
mount hang for short perio
3:08pm, Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
So I turned deduplication on on my staging FS (the one that gets mounted
on the database servers) yesterday, and since then I've been seeing the
mount hang for short periods of time off and on
7:51pm, Richard Jahnel wrote:
This sounds like the known issue about the dedupe map not fitting in RAM.
When blocks are freed, dedupe scans the whole map to ensure each block is not
in use before releasing it. This takes a veeery long time if the map doesn't
fit in RAM.
If you can try adding
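If it helps anyone hitting the same thing, the size of the dedup table (and how
much of it is held in core) can be inspected with zdb; each in-core DDT entry is
commonly quoted at roughly 320 bytes, so the entry counts give a rough RAM
estimate. Pool name assumed:

    zdb -D tank     # one-line summary per DDT: entries, size on disk, size in core
    zdb -DD tank    # adds the full DDT histogram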
I have an approx 700GB (of data) FS that I had dedup turned on for. (See
previous posts.) I turned on dedup after the FS was populated, and was not
sure dedup was working. I had another copy of the data, so I removed the data,
and then tried to destroy the snapshots I had taken. The first two di
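On the "not sure dedup was working" part: dedup only applies to blocks written
after the property is switched on, so data that was already in the filesystem
stays un-deduplicated until it is rewritten. A quick check (names assumed):

    zfs get dedup tank/staging    # confirm the property is actually on for the dataset
    zpool get dedupratio tank     # 1.00x means no blocks have been deduplicated yet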
Yesterday, Paul Archer wrote:
Yesterday, Arne Jansen wrote:
Paul Archer wrote:
Because it's easier to change what I'm doing than what my DBA does, I
decided that I would put rsync back in place, but locally. So I changed
things so that the backups go to a sta
Yesterday, Arne Jansen wrote:
Paul Archer wrote:
Because it's easier to change what I'm doing than what my DBA does, I
decided that I would put rsync back in place, but locally. So I changed
things so that the backups go to a staging FS, and then are rsync'ed
over to another
I've got a bit of a strange problem with snapshot sizes. First, some
background:
For ages our DBA backed up all the company databases to a directory NFS
mounted from a NetApp filer. That directory would then get dumped to tape.
About a year ago, I built an OpenSolaris (technically Nexenta) machi
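When chasing odd snapshot sizes like this, the per-dataset space breakdown is
usually the quickest way to see where the bytes are going; a sketch with
hypothetical dataset names:

    zfs list -t snapshot -o name,used,referenced -r tank/backups
    zfs get usedbysnapshots,usedbydataset tank/backups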
5:12pm, Cyril Plisko wrote:
Question: Is there a facility similar to inotify that I can use to monitor a
directory structure in OpenSolaris/ZFS, such that it will block until a file
is modified (added, deleted, etc), and then pass the state along (STDOUT is
fine)? One other requirement: inotify
/data/images/incoming, and
a /data/images/incoming/100canon directory gets created, then the files under
that directory will automatically be monitored as well.
Thanks,
Paul Archer
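For context, the behaviour being asked for is what Linux's inotify-tools
expose; purely as an illustration of the semantics (not an OpenSolaris answer),
the Linux side looks like:

    # stream one line per create/delete/modify event anywhere under the tree
    inotifywait -m -r -e create,delete,modify,moved_to --format '%w%f %e' /data/images/incoming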
Someone posted this link: https://slx.sun.com/1179275620 for a video on ZFS
deduplication. But the site isn't responding (which is typical of Sun, going by
my last 12 years of dealing with them).
Does anyone know of a mirror site, or if the video is on YouTube?
Paul
You don't like http://www.supermicro.com/products/nfo/chassis_storage.cfm
?
I must admit I don't have a price list of these.
I am using an SC846xxx for a project here at work.
The hardware consists of an ASUS server-level motherboard with 2 quad-core
Xeons, 8GB of RAM, an LSI PCI-e SAS/SATA car
9:51am, Ware Adams wrote:
On Sep 29, 2009, at 9:32 AM, p...@paularcher.org wrote:
I am using an SC846xxx for a project here at work.
The hardware consists of an ASUS server-level motherboard with 2 quad-core
Xeons, 8GB of RAM, an LSI PCI-e SAS/SATA card, and 24 1.5TB HD, all in one
of these ca
11:04pm, Paul Archer wrote:
Cool.
FWIW, there appears to be an issue with the LSI 150-6 card I was using. I
grabbed an old server m/b from work, and put a newer PCI-X LSI card in it,
and I'm getting write speeds of about 60-70MB/sec, which is about 40x the
write speed I was seeing wit
Tomorrow, Robert Milkowski wrote:
Paul Archer wrote:
In light of all the trouble I've been having with this zpool, I bought a
2TB drive, and I'm going to move all my data over to it, then destroy the
pool and start over.
Before I do that, what is the best way on an x86 system to format/l
In light of all the trouble I've been having with this zpool, I bought a
2TB drive, and I'm going to move all my data over to it, then destroy the
pool and start over.
Before I do that, what is the best way on an x86 system to format/label
the disks?
Thanks,
Paul
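FWIW, the usual answer on x86 is to not pre-format at all: hand zpool the whole
disks (cXtYdZ, no slice) and it writes an EFI label itself; format -e is only
needed if you want to relabel by hand. A sketch with made-up device names:

    # let ZFS put an EFI label on each whole disk when the pool is created
    zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    # to inspect or redo a label manually
    prtvtoc /dev/rdsk/c1t0d0
    format -e       # pick the disk, then use the label menu (SMI vs EFI)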
* 2930277101 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                            First         Sector          Last
* Partition  Tag  Flags      Sector         Count         Sector    Mount Directory
       0      17    00          34      2930277101     2930277134
Thanks for the help!
Paul Archer
8:30am, Paul Archer wrote:
And the hits just keep coming...
The resilver finished last night, so I rebooted the box as I had just upgraded
to the latest Dev build. Not only did the upgrade fail (love that instant
rollback!), but now the zpool won't come online:
r...@shebop:~# zpool i
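For anyone who lands here later, the usual first steps when a pool won't come
back after an upgrade/rollback (pool name assumed):

    zpool import            # scan attached devices for importable pools and show their state
    zpool import -f tank    # force the import if the pool looks in use by another host
    zpool status -v tank    # check for device or data errors once it is back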
Yesterday, Paul Archer wrote:
I estimate another 10-15 hours before this disk is finished resilvering and
the zpool is OK again. At that time, I'm going to switch some hardware out
(I've got a newer and higher-end LSI card that I hadn't used before because
it's PCI-X,
d that I hadn't used before
because it's PCI-X, and won't fit on my current motherboard.)
I'll report back what I get with it tomorrow or the next day, depending on
the timing on the resilver.
Paul Archer
My controller, while normally a full RAID controller, has had its BIOS
turned off, so it's acting as a simple SATA controller. Plus, I'm seeing
this same slow performance with dd, not just with ZFS. And I wouldn't think
that write caching would make a difference with using dd (especially
writ
[truncated iostat -xn output for c11d0 (9 %w, 14 %b) and c12t0d0 (idle)]
Paul Archer
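A rough way to separate the controller/disks from ZFS in a case like this is
the kind of dd test referred to above. Device name and mount point are made up;
the read test is safe, the write test just creates a scratch file on the pool:

    # raw sequential read straight off one disk, bypassing ZFS
    dd if=/dev/rdsk/c11d0p0 of=/dev/null bs=1024k count=1024
    # sequential write into the pool through ZFS
    dd if=/dev/zero of=/data/ddtest bs=1024k count=1024
    sync; rm /data/ddtest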
Something to do with the fact that this is
a very old SATA card (LSI 150-6)?
This is driving me crazy. I finally got my zpool working under Solaris so
I'd have some stability, and I've got no performance.
Paul Archer
Friday, Paul Archer wrote:
Since I got my zfs pool working under
Oh, for the record, the drives are 1.5TB SATA, in a 4+1 raidz-1 config.
All the drives are on the same LSI 150-6 PCI controller card, and the M/B
is a generic something or other with a triple-core, and 2GB RAM.
Paul
3:34pm, Paul Archer wrote:
Since I got my zfs pool working under solaris (I
Since I got my zfs pool working under solaris (I talked on this list
last week about moving it from linux & bsd to solaris, and the pain that
was), I'm seeing very good reads, but nada for writes.
Reads:
r...@shebop:/data/dvds# rsync -aP young_frankenstein.iso /tmp
sending incremental file lis
Thanks for the info. Glad to hear it's in the works, too.
Paul
1:21pm, Mark J Musante wrote:
On Thu, 24 Sep 2009, Paul Archer wrote:
I may have missed something in the docs, but if I have a file in one FS,
and want to move it to another FS (assuming both filesystems are on the
sam
Is there a way to split an existing filesystem? To
use the example above, let's say I have an ISO directory in my home
directory, but it's getting big, plus I'd like to share it out on my
network. Is there a way to split my home directory's FS, so that the ISO
directory bec
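As far as I know there's no in-place split; the usual workaround is to create a
new dataset and move the data into it, then share that dataset on its own. A
sketch with hypothetical dataset names and paths:

    zfs create tank/home/paul/isos              # new child dataset for the ISO tree
    rsync -a /tank/home/paul/ISO/ /tank/home/paul/isos/
    rm -rf /tank/home/paul/ISO                  # only after verifying the copy
    zfs set sharenfs=on tank/home/paul/isos     # share just the ISOs over NFS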
Thursday, Paul Archer wrote:
Tomorrow, Fajar A. Nugraha wrote:
There was a post from Ricardo on zfs-fuse list some time ago.
Apparently if you do a "zpool create" on whole disks, Linux and
Solaris behave differently:
- solaris will create EFI partition on that disk, and use the pa
Tomorrow, Fajar A. Nugraha wrote:
There was a post from Ricardo on zfs-fuse list some time ago.
Apparently if you do a "zpool create" on whole disks, Linux and
Solaris behave differently:
- solaris will create EFI partition on that disk, and use the partition as vdev
- Linux will use the whole
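Concretely, the difference being described (device names hypothetical):

    # Solaris: creates an EFI partition on each disk and uses the partition as the vdev
    zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0
    # zfs-fuse on Linux: uses the bare disk itself as the vdev
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd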
7:37pm, Darren J Moffat wrote:
Paul Archer wrote:
r...@ubuntu:~# fdisk -l /dev/sda
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xce13f90b
Device Boot Start End
6:44pm, Darren J Moffat wrote:
Paul Archer wrote:
What kind of partition table is on the disks, is it EFI ? If not that
might be part of the issue.
I don't believe there is any partition table on the disks. I pointed zfs to
the raw disks when I setup the pool.
If you run fdi
5:08pm, Darren J Moffat wrote:
Paul Archer wrote:
10:09pm, Fajar A. Nugraha wrote:
On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer wrote:
I can reboot into Linux and import the pools, but haven't figured out why I
can't import them in Solaris. I don't know if it makes
10:40am, Paul Archer wrote:
I can reboot into Linux and import the pools, but haven't figured out why I
can't import them in Solaris. I don't know if it makes a difference (I
wouldn't think so), but zfs-fuse under Linux is using ZFS version 13, where
Nexenta is using
10:09pm, Fajar A. Nugraha wrote:
On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer wrote:
I can reboot into Linux and import the pools, but haven't figured out why I
can't import them in Solaris. I don't know if it makes a difference (I
wouldn't think so), but zfs-fuse unde
I recently (re)built a fileserver at home, using Ubuntu and zfs-fuse to
create a ZFS filesystem (RAIDz1) on five 1.5TB drives.
I had some serious issues with NFS not working properly (kept getting
stale file handles), so I tried to switch to OpenSolaris/Nexenta, but my
SATA controller wasn't s
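Two things worth checking in a situation like this: whether the pool's on-disk
version is newer than what the importing implementation supports, and whether
Solaris is even looking at the right device nodes (zfs-fuse wrote to the bare
Linux devices). Pool name assumed:

    zpool upgrade -v            # list the pool versions this implementation understands
    zpool import -d /dev/dsk    # search a specific device directory (default is /dev/dsk)
    zpool get version tank      # once imported, show the pool's on-disk version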