> write operation to the destination.
>
> On Wed, Apr 27, 2016 at 2:26 PM, Tyler Wilson wrote:
> > Hello Jason,
> >
> > Thanks for the quick reply, this was copied from a VM instance snapshot
> > to my backup pool (rbd snap create, rbd cp (to backup p
$ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | grep data | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
49345.4 MB
Thanks for the help.
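For anyone hitting the same disparity: a copy made with 'rbd export-diff'
piped into 'rbd import-diff' should keep the destination sparse instead of
fully allocating it. A rough sketch (the destination image name and the size
placeholder are made up here, not from this thread):

# destination must already exist at the same size as the source
$ rbd create backup/cd4e5d37-sparse --size <same-size-as-source>
$ rbd export-diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e - \
    | rbd import-diff - backup/cd4e5d37-sparse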
On Wed, Apr 27, 2016 at 12:22 PM, Jason Dillaman wrote:
> On Wed, Apr 27, 2016 at 2:07 PM, Tyler Wilson wrote:
> > $ rbd diff backup/cd4e5d37-302
Hello All,
I am currently trying to get an accurate count of bytes used for an rbd
image. I've tried trimming the filesystem, which frees about 1.7 GB, but
there is still a huge disparity between the size reported by the filesystem
and what 'rbd diff' shows:
$ rbd map backup/cd4e5d37-3023-4640-be5a-5577d3f9307e
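(The trim step mentioned above presumably looks something like the following;
the device node and mount point are placeholders, not taken from the original
message:)

$ sudo mount /dev/rbd0 /mnt/backup   # whatever device 'rbd map' returned
$ sudo fstrim -v /mnt/backup
$ sudo umount /mnt/backup
$ sudo rbd unmap /dev/rbd0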
Hello All,
Are there any documented steps to remove a placement group that is stuck
inactive? I had a situation where two nodes went offline; I tried rescuing
with https://ceph.com/community/incomplete-pgs-oh-my/, but the PG remained
inactive after importing and starting it. Now I am just tr
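(A few commands that can at least show where the PG is stuck; the pgid below
is a made-up example, substitute the real one:)

$ ceph health detail | grep -i inactive
$ ceph pg dump_stuck inactive
$ ceph pg 2.5 query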
o 8192 got me the expected object size of 8MB.
On Thu, Dec 18, 2014 at 6:22 PM, Tyler Wilson wrote:
>
> Hey All,
>
> > On a new Cent7 deployment with firefly I'm noticing strange behavior
> > when deleting RBD child disks. It appears that upon deletion, CPU usage
> > on each OSD p
Hey All,
On a new Cent7 deployment with firefly I'm noticing strange behavior when
deleting RBD child disks. It appears that upon deletion, CPU usage on each
OSD process rises to about 75% for 30+ seconds. On my previous deployments
with CentOS 6.x and Ubuntu 12/14 this was never a problem.
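For anyone unfamiliar with the setup, the clone lifecycle involved is roughly
the following (pool and image names are made-up examples, not from this
deployment):

$ rbd snap create images/base@snap
$ rbd snap protect images/base@snap
$ rbd clone images/base@snap compute/child_disk
$ rbd rm compute/child_disk   # the step where OSD CPU climbs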
Each RBD
Brian,
Please see http://ceph.com/docs/master/start/os-recommendations/. I would go
with anything that has a 'C' rating for the version of Ceph you want to
install.
On Wed, Jul 23, 2014 at 11:12 AM, Brian Lovett wrote:
> I'm evaluating ceph for our new private and public cloud enviro
Greg,
Not a real fix for you, but I too run a full-SSD cluster and am able to get
112 MB/s with your command:
[root@plesk-test ~]# dd if=/dev/zero of=testfilasde bs=16k count=65535
oflag=direct
65535+0 records in
65535+0 records out
1073725440 bytes (1.1 GB) copied, 9.59092 s, 112 MB/s
This of cou
Hey All,
Simple question: does 'rbd export-diff' work with child snapshots, e.g.:
root:~# rbd children images/03cb46f7-64ab-4f47-bd41-e01ced45f0b4@snap
compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk
compute/54f3b23c-facf-4a23-9eaa-9d221ddb7208_disk
compute/592065d1-264e-4f7d-8504-011c2ea3bce3_
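(As a concrete example of the invocation in question, using the first child
above; the snapshot name 'backup1' and the output file are placeholders:)

$ rbd snap create compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk@backup1
$ rbd export-diff compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk@backup1 child_backup1.diff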
soon :)
Christian Balzer writes:
>
> On Wed, 14 May 2014 19:28:17 -0500 Mark Nelson wrote:
>
> > On 05/14/2014 06:36 PM, Tyler Wilson wrote:
> > > Hey All,
> >
> > Hi!
> >
> > >
> > > I am setting up a new storage cluster that absolutely
Hey All,
I am setting up a new storage cluster that absolutely must have the best
sequential read/write speed at 128k and the highest possible IOPS at 4k
read/write.
My current specs for each storage node are:
CPU: 2x E5-2670V2
Motherboard: SM X9DRD-EF
OSD Disks: 20-30 Samsung 840 1TB
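For measuring those two targets, something like fio is the usual tool. A
rough sketch (file paths, sizes and queue depths below are placeholders, not
recommendations):

$ fio --name=seq128k --rw=write --bs=128k --direct=1 --ioengine=libaio --iodepth=32 --size=4G --filename=/mnt/test/seq128k
$ fio --name=rand4k --rw=randrw --bs=4k --direct=1 --ioengine=libaio --iodepth=64 --size=4G --filename=/mnt/test/rand4k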