I'm surprised no-one else has posted about this - part of the Sun Oracle
Exadata v2 is the Sun Flash Accelerator F20 PCIe card, with 48 or 96 GB of
SLC, a built-in SAS controller and a super-capacitor for cache protection.
http://www.sun.com/storage/disk_systems/sss/f20/specs.xml
There's no pr
Bob Friesenhahn writes:
> On Wed, 23 Sep 2009, Ray Clark wrote:
>
> > My understanding is that if I "zfs set checksum=" to
> > change the algorithm that this will change the checksum algorithm
> > for all FUTURE data blocks written, but does not in any way change
> > the checksum for prev
Roch wrote:
Bob Friesenhahn writes:
> On Wed, 23 Sep 2009, Ray Clark wrote:
>
> > My understanding is that if I "zfs set checksum=" to
> > change the algorithm that this will change the checksum algorithm
> > for all FUTURE data blocks written, but does not in any way change
> > the che
Jennifer Bauer Scarpino wrote:
To: Developers and Students
You are invited to participate in the first OpenSolaris Security Summit
OpenSolaris Security Summit
Tuesday, November 3rd, 2009
Baltimore Marriott Waterfront
700 Aliceanna Street
Baltimore, Maryland 21202
I will be giving a talk an
I've been comparing zfs send and receive to cp, cpio etc. for a customer data
migration
and have found send and receive to be twice as slow as cp or cpio.
I'm migrating zfs data from one array to a temporary array on the same server,
it's 2.3TB in total, and was looking for the fastest way to do
chris bannayan wrote:
I've been comparing zfs send and receive to cp, cpio etc.. for a customer data
migration
and have found send and receive to be twice as slow as cp or cpio.
Did you run sync after the cp/cpio finished to ensure the data really is
on disk? cp and cpio do not do synchronous writes.
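If not, one way to make the comparison fairer is to time the flush as well and
add the two figures together, e.g. (paths are placeholders):
time cp /pool/fs1/* /newpool/fs1
time sync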
bertram fukuda wrote:
Would I just do the following then:
zpool create -f zone1 c1t1d0s0
zfs create zone1/test1
zfs create zone1/test2
Would I then use zfs set quota=xxxG to handle disk usage?
yes
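For example, a minimal sketch with made-up sizes:
zfs set quota=50G zone1/test1
zfs set quota=50G zone1/test2
zfs get quota zone1/test1 zone1/test2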
Hello,
Quick question:
I have changed the recordsize of an existing file system and I would
like to do the conversion while the file system is online.
Will a disk replacement change the recordsize of the existing blocks?
My idea is to issue a "zpool replace
Will this work?
Thanks in ad
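For reference, the change itself was made along these lines (dataset name is a
placeholder), and a new recordsize only governs data written after the change:
zfs set recordsize=128K tank/fs
zfs get recordsize tank/fs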
Javier Conde wrote:
Hello,
Quick question:
I have changed the recordsize of an existing file system and I would
like to do the conversion while the file system is online.
Will a disk replacement change the recordsize of the existing blocks?
My idea is to issue a "zpool replace
Will thi
On 23 Sep, 2009, at 21.54, Ray Clark wrote:
My understanding is that if I "zfs set checksum=" to
change the algorithm that this will change the checksum algorithm
for all FUTURE data blocks written, but does not in any way change
the checksum for previously written data blocks.
I need to
On 24 Sep 2009, at 03:09, Mark J Musante wrote:
On 23 Sep, 2009, at 21.54, Ray Clark wrote:
My understanding is that if I "zfs set checksum=" to
change the algorithm that this will change the checksum algorithm
for all FUTURE data blocks written, but does not in any way change
the check
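As a concrete illustration (dataset and algorithm chosen purely as an example):
zfs set checksum=sha256 tank/fs
Blocks written after this carry sha256 checksums; blocks already on disk keep
whatever checksum they were written with until they are rewritten.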
On Thu, 24 Sep 2009, James Lever wrote:
I was of the (mis)understanding that only metadata and writes smaller than
64k went via the slog device in the event of an O_SYNC write request?
What would cause you to understand that?
Is there a way to tune this on the NFS server or clients such that
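One property that might be worth a look, assuming the server is on a build
recent enough to have it, is logbias; setting it to throughput is documented to
steer synchronous writes away from a separate log device (dataset name below is
a placeholder):
zfs set logbias=throughput tank/nfsshare
zfs get logbias tank/nfsshare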
I may have missed something in the docs, but if I have a file in one FS,
and want to move it to another FS (assuming both filesystems are on the
same ZFS pool), is there a way to do it outside of the standard
mv/cp/rsync commands? For example, I have a pool with my home directory as
a FS, and I
I would like to clone the configuration on a v210 with snv_115.
The current pool looks like this:
-bash-3.2$ /usr/sbin/zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
On Sep 24, 2009, at 2:19 AM, Darren J Moffat wrote:
Jennifer Bauer Scarpino wrote:
To: Developers and Students
You are invited to participate in the first OpenSolaris Security
Summit
OpenSolaris Security Summit
Tuesday, November 3rd, 2009
Baltimore Marriott Waterfront
700 Aliceanna Street
Ba
comment below...
On Sep 23, 2009, at 10:00 PM, James Lever wrote:
On 08/09/2009, at 2:01 AM, Ross Walker wrote:
On Sep 7, 2009, at 1:32 AM, James Lever wrote:
Well, an MD1000 holds 15 drives; a good compromise might be 2 7-drive
RAIDZ2s with a hot spare... That should provide 320 IOPS instead
On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:
I'm surprised no-one else has posted about this - part of the Sun
Oracle Exadata v2 is the Sun Flash Accelerator F20 PCIe card, with
48 or 96 GB of SLC, a built-in SAS controller and a super-capacitor
for cache protection. http://www.su
On Thu, Sep 24, 2009 at 12:10 PM, Richard Elling wrote:
> On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:
>
> I'm surprised no-one else has posted about this - part of the Sun Oracle
>> Exadata v2 is the Sun Flash Accelerator F20 PCIe card, with 48 or 96 GB of
>> SLC, a built-in SAS contro
Richard Elling wrote:
On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:
I'm surprised no-one else has posted about this - part of the Sun
Oracle Exadata v2 is the Sun Flash Accelerator F20 PCIe card, with 48
or 96 GB of SLC, a built-in SAS controller and a super-capacitor for
cache prote
On Thu, 24 Sep 2009, Paul Archer wrote:
I may have missed something in the docs, but if I have a file in one FS,
and want to move it to another FS (assuming both filesystems are on the
same ZFS pool), is there a way to do it outside of the standard
mv/cp/rsync commands?
Not yet. CR 6483179
Hello,
Given the following configuration:
* Server with 12 SPARCVII CPUs and 96 GB of RAM
* ZFS used as file system for Oracle data
* Oracle 10.2.0.4 with 1.7TB of data and indexes
* 1800 concurrent users with PeopleSoft Financial
* 2 PeopleSoft transactions per day
* HDS USP1100 with LUN
Thanks for the info. Glad to hear it's in the works, too.
Paul
1:21pm, Mark J Musante wrote:
On Thu, 24 Sep 2009, Paul Archer wrote:
I may have missed something in the docs, but if I have a file in one FS,
and want to move it to another FS (assuming both filesystems are on the
same ZFS poo
On Sep 24, 2009, at 10:30 AM, Javier Conde wrote:
Hello,
Given the following configuration:
* Server with 12 SPARCVII CPUs and 96 GB of RAM
* ZFS used as file system for Oracle data
* Oracle 10.2.0.4 with 1.7TB of data and indexes
* 1800 concurrent users with PeopleSoft Financial
* 2 Peo
On Sep 24, 2009, at 10:17 AM, Tim Cook wrote:
On Thu, Sep 24, 2009 at 12:10 PM, Richard Elling wrote:
On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:
I'm surprised no-one else has posted about this - part of the Sun
Oracle Exadata v2 is the Sun Flash Accelerator F20 PCIe card, wi
> Thanks for the info. Glad to hear it's in the works, too.
It is not in the works. If you look at the bug IDs in the bug database
you will find no indication of work done on them.
>
> Paul
>
>
> 1:21pm, Mark J Musante wrote:
>
>> On Thu, 24 Sep 2009, Paul Archer wrote:
>>
>>> I may have missed
Hi Karl,
Manually cloning the root pool is difficult. We have a root pool
recovery procedure that you might be able to apply as long as the
systems are identical. I would not attempt this with LiveUpgrade
and manual tweaking.
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting
Hi Richard,
Thanks for your reply.
We are using Solaris 10 u6 and ZFS version 10.
Regards,
Javi
Richard Elling wrote:
On Sep 24, 2009, at 10:30 AM, Javier Conde wrote:
Hello,
Given the following configuration:
* Server with 12 SPARCVII CPUs and 96 GB of RAM
* ZFS used as file system for
Richard Elling wrote:
On Sep 24, 2009, at 10:30 AM, Javier Conde wrote:
Hello,
Given the following configuration:
* Server with 12 SPARCVII CPUs and 96 GB of RAM
* ZFS used as file system for Oracle data
* Oracle 10.2.0.4 with 1.7TB of data and indexes
* 1800 concurrent users with PeopleSof
Hi Cindy,
Could you provide a list of system specific info stored in the root pool?
Thanks
Peter
2009/9/24 Cindy Swearingen:
> Hi Karl,
>
> Manually cloning the root pool is difficult. We have a root pool recovery
> procedure that you might be able to apply as long as the
> systems are identic
Richard, Tim,
yes, one might envision the X4275 as an OpenStorage appliance, but
they are not. Exadata 2 is
- *all* Sun hardware
- *all* Oracle software (*)
and that combination is now an Oracle product: a database appliance.
All nodes run Oracle's Linux; as far as I understand - and that is not
As Cindy said, this isn't trivial right now.
Personally, I'd do it this way:
ASSUMPTIONS:
* both v210 machines are reasonably identical (may differ in RAM or CPU
speed, but nothing much else).
* Call the original machine A and the new machine B
* machine B has no current drives in it.
MET
Richard Elling wrote:
On Sep 24, 2009, at 10:17 AM, Tim Cook wrote:
On Thu, Sep 24, 2009 at 12:10 PM, Richard Elling wrote:
On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:
I'm surprised no-one else has posted about this - part of the Sun
Oracle Exadata v2 is the Sun Flash Acceler
Hi Peter,
I can't provide it because I don't know what it is.
Even if we could provide a list of items, tweaking
the device information if the systems are not identical
would be too difficult.
cs
On 09/24/09 12:04, Peter Pickford wrote:
Hi Cindy,
Could you provide a list of system specific in
Thanks for the help.
Since the v210s in question are at a remote site, it might be a bit of a pain
getting the drives swapped by end users.
So I thought of something else. Could I netboot the new v210 with snv_115, use
zfs send/receive with ssh to grab the data on the old server, install the b
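A rough sketch of that approach, with host, device, snapshot and boot
environment names as placeholders (it assumes a netbooted environment on the
new box, root ssh access to the old one, and some post-receive cleanup of the
boot configuration):
# on the old v210
zfs snapshot -r rpool@migrate
# on the netbooted new v210
zpool create -f rpool c1t0d0s0
ssh old-v210 zfs send -R rpool@migrate | zfs receive -F -d rpool
zpool set bootfs=rpool/ROOT/snv_115 rpool
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0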
Roland Rambau wrote:
Richard, Tim,
yes, one might envision the X4275 as OpenStorage appliances, but
they are not. Exadata 2 is
- *all* Sun hardware
- *all* Oracle software (*)
and that combination is now an Oracle product: a database appliance.
Is there any reason the X4275 couldn't be an Op
Karl,
I'm not sure I'm following everything. If you can't swap the drives,
then which pool would you import?
If you install the new v210 with snv_115, then you would have a bootable
root pool.
You could then receive the snapshots from the old root pool into the
root pool on the new v210.
I wo
Oracle use Linux :-(
But on the positive note have a look at this:- http://www.youtube.com/watch?v=rmrxN3GWHpM
It's Ed Zander talking to Larry and asking some great questions.
29:45 Ed asks what parts of Sun are you going to keep - all of it!
45:00 Larry's rant on Cloud Computing
Hi Cindy,
Wouldn't
touch /reconfigure
mv /etc/path_to_inst* /var/tmp/
regenerate all device information?
AFAIK ZFS doesn't care about the device names; it scans for them.
it would only affect things like vfstab.
I did a restore from an E2900 to a V890 and it seemed to work
Created the pool and zfs
On 09/24/09 15:54, Peter Pickford wrote:
Hi Cindy,
Wouldn't
touch /reconfigure
mv /etc/path_to_inst* /var/tmp/
regenerate all device information?
It might, but it's hard to say whether that would accomplish everything
needed to move a root file system from one system to another.
I just g
On 25/09/2009, at 2:58 AM, Richard Elling wrote:
On Sep 23, 2009, at 10:00 PM, James Lever wrote:
So it turns out that the problem is that all writes coming via NFS
are going through the slog. When that happens, the transfer speed
to the device drops to ~70MB/s (the write speed of his SLC
On 25/09/2009, at 1:24 AM, Bob Friesenhahn wrote:
On Thu, 24 Sep 2009, James Lever wrote:
Is there a way to tune this on the NFS server or clients such that
when I perform a large synchronous write, the data does not go via
the slog device?
Synchronous writes are needed by NFS to suppor
I'm measuring time by putting the time command in front of each command,
i.e.: time cp * / etc.
For copy I just used: time cp pool/fs1/* /newpool/fs1 etc.
For cpio I used: time find /pool/fs1 | cpio -pdmv /newpool/fs1
For zfs I ran a snapshot first, then: time zfs send -R snapshot | zfs receive -F -d
On Fri, 25 Sep 2009, James Lever wrote:
NFS Version 3 introduces the concept of "safe asynchronous writes."
Being "safe" then requires a responsibility level on the client which
is often not present. For example, if the server crashes, and then
the client crashes, how does the client resend
On 25/09/2009, at 11:49 AM, Bob Friesenhahn wrote:
The commentary says that normally the COMMIT operations occur during
close(2) or fsync(2) system call, or when encountering memory
pressure. If the problem is slow copying of many small files, this
COMMIT approach does not help very much
I thought I would try the same test using dd bs=131072 if=source
of=/path/to/nfs to see what the results looked like…
It is very similar to before, about 2x slog usage and same timing and
write totals.
Friday, 25 September 2009 1:49:48 PM EST
extended device st
Try exporting and reimporting the pool. That has done the trick for me in the
past
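i.e. something along the lines of (pool name is a placeholder):
zpool export tank
zpool import tank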
Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
snapshot a few days ago:
# zfs snapshot a...@b
# zfs clone a...@b tank/a
# zfs clone a...@b tank/b
The system started panicing after I tried:
# zfs snapshot tank/b...@backup
So, I destroyed tank/b:
# zfs destroy tank/b
On Fri, Sep 25, 2009 at 05:21:23AM +, Albert Chin wrote:
> [[ snip snip ]]
>
> We really need to import this pool. Is there a way around this? We do
> have snv_114 source on the system if we need to make changes to
> usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the "zfs
> destroy" tr
Cheers, I did try that, but still got the same total on import - 2.73TB
I even thought I might have just made a mistake with the numbers, so I made a
sort of 'quarter scale model' in VMware and OSOL 2009.06, with 3x250G and
1x187G. That gave me a size of 744GB, which is *approx* 1/4 of what I ge