>
> [root] osci-1001.infra.cin1.corp:~/cephdeploy # ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdb2 --journal /dev/sdb1 osci-1001
>
> [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
>
> [ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-d
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
> >
> > The latter link shows pretty poor numbers for M500DC drives.
> >
> >
> > Thanks,
> >
> > Igor
> >
> >
> > On 12/11/2018 4:58 AM, T
All 4 of these SSDs that I've converted to Bluestore are behaving this
way. I have around 300 of these drives in a very large production
cluster and do not see this type of behavior with Filestore.
On the Filestore setup these SSDs are partitioned as 20GB for journal and
800GB for data.
The systems
| avq
58.39 | avio 7.49 ms |
_
Tyler Bishop
Older Crucial/Micron M500/M600
_
*Tyler Bishop*
>
>
> ./gdbpmp -t 1000 -p`pidof ceph-osd` -o foo.gdbpmp
>
> ./gdbpmp -i foo.gdbpmp -t 1
>
>
> Mark
>
> On 12/10/18 6:09 PM, Tyler Bishop wrote:
> > Hi,
> >
> > I have an SSD only cluster that I recently converted from filestore to
> > bluestore and pe
Hi,
I have an SSD-only cluster that I recently converted from filestore to
bluestore and performance has totally tanked. It was fairly decent before,
with only a little more latency than expected. Now since
converting to bluestore the latency is extremely high, SECONDS. I am
trying to d
ait at all.
_____
*Tyler Bishop*
I have a fairly large cluster running Ceph BlueStore with extremely fast
SAS SSDs for the metadata. Doing fio benchmarks I am getting 200k-300k
random write IOPS, but during sustained workloads of Elasticsearch my
clients seem to hit a wall of around 1100 IO/s per RBD device. I've tried
1 RBD and 4
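(As a point of reference for the kind of benchmark described above, a minimal fio invocation against an RBD image might look like the following; the pool, image and client names are placeholders, not taken from the thread.)

    # random 4k writes against an RBD image using fio's rbd engine (illustrative only)
    fio --name=rbd-randwrite --ioengine=rbd --clientname=admin --pool=rbd \
        --rbdname=test-image --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting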
/bluestore-config-ref/
lists the defaults for RAM to be used almost exclusively for the KV cache.
With a system like mine, do you think it would be safe to allow a 3GB cache
and change the KV ratio to 0.60?
Thanks
_
*Tyler Bishop*
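(On the cache-sizing question above, a minimal ceph.conf sketch, assuming Luminous-era BlueStore option names; the values simply restate the 3GB / 0.60 figures from the question and are not a recommendation.)

    [osd]
    # total BlueStore cache per OSD on SSD-backed OSDs (3 GB)
    bluestore_cache_size_ssd = 3221225472
    # fraction of that cache handed to the RocksDB block cache
    bluestore_cache_kv_ratio = 0.60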
After moving back to tcmalloc my random crash issues have been resolved.
I would advise disabling support for jemalloc on bluestore since it's not
stable or safe... seems risky to allow this?
_
*Tyler Bishop*
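(A quick, hedged way to confirm which allocator a running OSD is actually using; the sysconfig path is the RHEL/CentOS convention mentioned later in this thread.)

    # see whether tcmalloc/jemalloc is configured for the Ceph daemons
    grep -iE 'jemalloc|tcmalloc' /etc/sysconfig/ceph
    # confirm what one running OSD has mapped (pidof -s picks a single pid)
    grep -iE 'jemalloc|tcmalloc' /proc/$(pidof -s ceph-osd)/maps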
wrote:
> Have you created the blockdb partitions or LVM manually?
>
> What size?
> On 27/08/18 23:48, Tyler Bishop wrote:
>
> My host has 256GB of ram. 62GB used under most heavy io workload.
> _____
>
> *Tyler Bishop*
> EST 2007
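(On the block.db question above, a sketch of one way to create a BlueStore OSD with a separate block.db device using ceph-volume; the device names are placeholders.)

    # data on a SATA/SAS SSD, RocksDB (block.db) on an NVMe partition or LV
    ceph-volume lvm create --bluestore --data /dev/sdd --block.db /dev/nvme0n1p1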
ell
> tested with Bluestore and led to lots of segfaults. We moved back to
> the default of tcmalloc with Bluestore and these stopped.
>
> Check /etc/sysconfig/ceph under RHEL based distros.
>
> --
> Adam
> On Mon, Aug 27, 2018 at 9:51 PM Tyler Bishop
> wrote:
> >
>
Did you solve this? Similar issue.
_
On Wed, Feb 28, 2018 at 3:46 PM Kyle Hutson wrote:
> I'm following up from awhile ago. I don't think this is the same bug. The
> bug referenced shows "abort: Corruption: block checksum mismatch", and I'm
> not see
My host has 256GB of RAM; 62GB is used under the heaviest IO workload.
_
*Tyler Bishop*
ZE/huge/release/12.2.7/rpm/el7/BUILD/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:132] [default] Level summary: base level 1 max bytes base 268435456 files[2 4 1 0 0 0 0] max score 0.84
2018-08-28 02:32:06.156252 7f64a895a700 4 rocksdb:
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/
I also have dual processor nodes,
> > and was wondering if there is some guide on how to optimize for NUMA.
> >
> >
> >
> >
> > -Original Message-
> > From: Tyler Bishop [mailto:tyler.bis...@beyondhosting.net]
> > Sent: Friday, 24 August 2
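(On the NUMA question above, one hedged approach is to pin each OSD to the NUMA node that owns its HBA/NIC via a systemd drop-in; the ExecStart line should mirror whatever your packaged ceph-osd@.service actually ships, so treat this as a sketch only.)

    # /etc/systemd/system/ceph-osd@.service.d/numa.conf (hypothetical drop-in)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/numactl --cpunodebind=0 --membind=0 /usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph

Reload systemd and restart the OSDs afterwards for the drop-in to take effect.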
reduced the load on these
> nodes too. At busy times, the filestore host loads were 20-30, even
> higher (on a 28 core node), while the bluestore nodes hummed along at a
> load of perhaps 6 or 8. This also confirms that somehow lots of xfs
> mounts don't work in parallel.
>
>
Gorbachev wrote:
>
> On Wed, Aug 22, 2018 at 11:39 PM Tyler Bishop
> wrote:
> >
> > During high load testing I'm only seeing user and sys cpu load around
> > 60%... my load doesn't seem to be anything crazy on the host and iowait
> > stays between 6 a
ian Balzer wrote:
> Hello,
>
> On Wed, 22 Aug 2018 23:00:24 -0400 Tyler Bishop wrote:
>
> > Hi, I've been fighting to get good stability on my cluster for about
> > 3 weeks now. I am running into intermittent issues with OSD flapping
> > marking other OSD down th
Hi, I've been fighting to get good stability on my cluster for about
3 weeks now. I am running into intermittent issues with OSDs flapping,
marking other OSDs down, then going back to a stable state for hours or
days.
The cluster is 4x Cisco UCS S3260 with dual E5-2660, 256GB ram, 40G
Network to 4
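(While debugging flapping like this, one commonly used, temporary measure is to stop OSDs from being marked down while you investigate; purely illustrative, not from the thread.)

    ceph osd set nodown      # OSDs will no longer be marked down automatically
    # ... investigate heartbeats / network ...
    ceph osd unset nodown    # remove the flag once done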
Where did you find the iscsi RPMs etc.? I looked all through the repo and can't
find anything but the documentation.
_____
Tyler Bishop
We had to change these in our cluster for some drives to come up.
_
Tyler Bishop
Enjoy the leap second guys.. lol your cluster gonna be skewed.
_
This is a cool project, keep up the good work!
_
Tyler Bishop
We easily see line-rate sequential IO from most disks.
I would say that 150GB/s with 40G networking and a minimum of 20 hosts is no
problem.
Tyler Bishop
The CRUSH map does, however the status does not.
16/330 in osds are down
When in reality it was 56/330.
I am also having issues with IO deadlock from clients until a full rebuild or it
comes back up. I have the priorities set but I believe it's still trying to
write to the down OSDs.
Tyler Bishop
356 5.43999 osd.356 down 1.0 1.0
357 5.43999 osd.357 down 1.0 1.0
358 5.43999 osd.358 down 0 1.0
369 5.43999 osd.369 down 1.0 1.0
Tyler Bishop
800TB of NVMe? That sounds wonderful!
- Original Message -
From: "Ryan Leimenstoll"
To: "ceph new"
Sent: Saturday, September 24, 2016 5:37:08 PM
Subject: [ceph-users] CephFS metadata pool size
Hi all,
We are in the process of expanding our current Ceph deployment (Jewel, 10.2.2)
to
Your monitors are sending the new cluster map out every time it changes.
This is a known issue IIRC, I remember reading a really interesting article on
it a few months ago.
I think there's a slideshow from CERN that explained it.
- Original Message -
From: "Stillwell, Bryan J"
To: cep
Hi,
My systems have 56 x 6TB disks, dual 12-core processors and 256GB RAM. CentOS 7
x64.
During boot I'm having issues with the system going into emergency mode.
When starting udevd, "a start job is running for dev-disk-by" appears, the timer
of 1 minute 30 seconds runs out and the system fails to boot
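(One common, hedged workaround for boots stalling on data-disk units is to mark the OSD mounts nofail with a shorter device timeout in /etc/fstab; the UUID and mount point below are placeholders.)

    # /etc/fstab entry (illustrative)
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/lib/ceph/osd/ceph-12  xfs  defaults,nofail,x-systemd.device-timeout=30  0 0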
We're having the same issues. I have a 1200TB pool at 90% utilization, however
disk utilization is only 40%.
Tyler Bishop
Use Haproxy.
sudomakeinstall.com/uncategorized/ceph-radosgw-nginx-tengine-apache-and-now-civetweb
- Original Message -
From: c...@jack.fr.eu.org
To: ceph-users@lists.ceph.com
Sent: Tuesday, May 24, 2016 5:01:05 AM
Subject: Re: [ceph-users] civetweb vs Apache for rgw
I'm using mod_rewrit
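(Returning to the HAProxy suggestion above, a minimal haproxy.cfg fragment for balancing two civetweb radosgw instances might look like this; addresses and ports are placeholders and the global/defaults sections are omitted.)

    frontend rgw_front
        mode http
        bind *:80
        default_backend rgw_back

    backend rgw_back
        mode http
        balance roundrobin
        option httpchk GET /
        server rgw1 10.0.0.11:7480 check
        server rgw2 10.0.0.12:7480 check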
I'm using 2x replica on that pool for storing RBD volumes. Our workload is
pretty heavy; I'd imagine objects on an EC pool would be light in comparison.
Tyler Bishop
. The
only thing shared is the quad power supplies.
Tyler Bishop
but anyway?
>
>2016-02-16 16:12 GMT+08:00 Nick Fisk :
>>
>>
>>> -Original Message-
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
>Behalf Of
>>> Tyler Bishop
>>> Sent: 16 February
You should look at a 60-bay 4U chassis like a Cisco UCS C3260.
We run 4 systems at 56 x 6TB with dual E5-2660 v2 and 256GB RAM. Performance is
excellent.
I would recommend a cache tier for sure if your data sees a busy read workload.
Tyler Bishop
Great work as always, Sage!
Tyler Bishop
You're probably running into issues with sysvinit / upstart / whatever.
Try partitioning the DM device and then mapping it directly in your ceph.conf under
the osd section.
It should work; Ceph is just a process using the filesystem.
Tyler Bishop
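(A sketch of the old-style static OSD definition being suggested above; the option names follow the classic sysvinit-era layout and the host, paths and devices are placeholders, so verify against your release's documentation.)

    [osd.12]
    host = ceph0-node1
    osd data = /var/lib/ceph/osd/ceph-12
    osd journal = /dev/mapper/mpatha-part2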
http://ceph.mirror.beyondhosting.net/
I need to know what server will be keeping the master copy for rsync to pull
changes from.
Tyler Bishop
You need to get your OSD back online.
From: "Jeffrey McDonald"
To: ceph-users@lists.ceph.com
Sent: Saturday, February 6, 2016 8:18:06 AM
Subject: [ceph-users] CEPH health issues
Hi,
I'm seeing lots of issues with my CEPH installation. The health of the system
is degraded and many of th
Covered except that the dreamhost mirror is constantly down or broken.
I can add ceph.mirror.beyondhosting.net for it.
Tyler Bishop
I have ceph pulling down from the EU mirror. What *origin* should I set up rsync to
automatically pull from?
download.ceph.com is consistently broken.
- Original Message -
From: "Tyler Bishop"
To: "Wido den Hollander"
Cc: "ceph-users"
Sent: Friday, February
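(On the rsync question above, a typical pull looks roughly like the following; the source host and rsync module name are assumptions, so check the official mirroring documentation.)

    # pull the full Ceph mirror tree into a local web root (illustrative)
    rsync -avrt --delete eu.ceph.com::ceph /var/www/html/ceph/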
We would be happy to mirror the project.
http://mirror.beyondhosting.net
- Original Message -
From: "Wido den Hollander"
To: "ceph-users"
Sent: Saturday, January 30, 2016 9:14:59 AM
Subject: [ceph-users] Ceph mirrors wanted!
Hi,
My PR was merged with a script to mirror Ceph properly:
What approach did SanDisk take with this for Jewel?
Tyler Bishop
This is an interesting topic that I've been waiting for.
Right now we run the journal as a partition on the data disk. I've built drives
without journals and the write performance seems okay, but random IO performance
is poor in comparison to what it should be.
Ty
No, they need it to work.
Tyler Bishop
tyte... ceph pool go rogue?
- Original Message -
From: "Gregory Farnum"
To: "John Hogenmiller"
Cc: ceph-users@lists.ceph.com
Sent: Wednesday, January 27, 2016 2:08:36 PM
Subject: Re: [ceph-users] downloads.ceph.com no longer valid?
Infrastructure guys say it's down and they are working
dependently verified, possibly by multiple sources,
before putting too much weight into it.
Mark
On 01/18/2016 01:02 PM, Tyler Bishop wrote:
> One of the other guys on the list here benchmarked them. They spanked every
> other ssd on the *recommended* tree..
>
> - Original Message
One of the other guys on the list here benchmarked them. They spanked every
other ssd on the *recommended* tree..
- Original Message -
From: "Gregory Farnum"
To: "Tyler Bishop"
Cc: "David" , "Ceph Users"
Sent: Monday, January 18, 2016 2:
Well, that's interesting.
I've mounted block devices to the kernel and exported them over iSCSI, but the
performance was horrible. I wonder if this is any different?
From: "Dominik Zalewski"
To: ceph-users@lists.ceph.com
Sent: Monday, January 18, 2016 6:35:20 AM
Subject: [ceph-users] CentO
You should test out cephfs exported as an NFS target.
- Original Message -
From: "david"
To: ceph-users@lists.ceph.com
Sent: Monday, January 18, 2016 4:36:17 AM
Subject: [ceph-users] Ceph and NFS
Hello All.
Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a
require
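(A minimal sketch of exporting a kernel-mounted CephFS over knfsd, as suggested above; the monitor address, keyring path and export network are placeholders.)

    # mount CephFS with the kernel client, then export it
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    echo '/mnt/cephfs 10.0.0.0/24(rw,no_root_squash,fsid=101)' >> /etc/exports
    exportfs -ra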
Check these out too:
http://www.seagate.com/internal-hard-drives/solid-state-hybrid/1200-ssd/
- Original Message -
From: "Christian Balzer"
To: "ceph-users"
Sent: Sunday, January 17, 2016 10:45:56 PM
Subject: Re: [ceph-users] Again - state of Ceph NVMe and SSDs
Hello,
On Sat, 16 Jan
Adding to this thought, even if you are using a single replica for the cache
pool, will ceph scrub the cached block against the base tier? What if you have
corruption in your cache?
From: "Tyler Bishop"
To: ceph-users@lists.ceph.com
Cc: "Sebastien han"
Sent: Sunday, J
Based on Sebastien's design I had some thoughts:
http://www.sebastien-han.fr/images/ceph-cache-pool-compute-design.png
Hypervisors are, for obvious reasons, more susceptible to crashes and reboots for
security updates. Since Ceph is utilizing a standard pool for the cache tier it
creates a requir
The changes you are looking for are coming from SanDisk in the upcoming Ceph "Jewel"
release.
Based on benchmarks and testing, SanDisk has contributed heavily to the
tuning work and is promising 90%+ of a drive's native IOPS in the cluster.
The biggest changes will come from the memory
http://sudomakeinstall.com/uncategorized/ceph-make-configuration-changes-in-realtime-without-restart
Tyler Bishop
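(The linked post above covers making configuration changes at runtime; the general pattern is ceph tell ... injectargs, for example the following, where the specific option shown is just an illustration.)

    # change a setting on every OSD without restarting them
    ceph tell osd.* injectargs '--osd_recovery_max_active 1'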
Add this under osd.
osd op threads = 8
Restart the osd services and try that.
From: "Florian Rommel"
To: "Wade Holler"
Cc: ceph-users@lists.ceph.com
Sent: Saturday, December 26, 2015 4:55:06 AM
Subject: Re: [ceph-users] more performance issues :(
Hi, iostat shows all OSDs working
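(For completeness, the "osd op threads" suggestion above would persist in /etc/ceph/ceph.conf as shown below, followed by an OSD restart; the systemd target name assumes a systemd-based install.)

    [osd]
    osd op threads = 8
    # then restart the OSDs, e.g.: systemctl restart ceph-osd.target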
http://www.seagate.com/files/www-content/product-content/ssd-fam/1200-ssd/en-us/docs/1200-2-sas-ssd-ds1858-2-1509us.pdf
Which of these have you tested? I didn't even know Seagate had good flash.
Tyler Bishop
I didn't read the whole thing, but if you're trying to do HA NFS, you need to run
OCFS2 on your RBD and disable read/write caching on the RBD client.
From: "Steve Anthony"
To: ceph-users@lists.ceph.com
Sent: Friday, December 25, 2015 12:39:01 AM
Subject: Re: [ceph-users] nfs over rbd problem
Due to the nature of distributed storage and a filesystem built to distribute
itself across sequential devices, you're always going to have poor performance.
Are you unable to use XFS inside the VM?
Write endurance is kinda bullshit.
We have Crucial 960GB drives storing data and we've only managed to take 2% off
the drives' life in the period of a year, with hundreds of TB written weekly.
Stuff is way more durable than anyone gives it credit for.
- Original Message -
From: "Lionel Bouto
194:6789/0}
> election epoch 480, quorum 0,1,2 integ-hm5,integ-hm6,integ-hm7
> osdmap e49780: 2 osds: 2 up, 2 in
> pgmap v2256565: 190 pgs, 2 pools, 1364 GB data, 410 kobjects
>       2559 GB used, 21106 GB / 24921 GB avail
48.95
- - [13/Oct/2015:16:36:42 -0400] "POST /testing/testfile HTTP/1.1" -1 0 - -
Tyler Bishop
You need to disable RBD caching.
Tyler Bishop
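(To disable RBD client caching as suggested above, the usual knob lives in the [client] section of ceph.conf on the client side; a minimal sketch, and note that librbd consumers such as QEMU also carry their own cache settings.)

    [client]
    rbd cache = false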
gt; DELETE
)
[x-aws-request-url] =>
https://s3.example.com/admin/user?key&uid=C1&access-key=ANNMJKDEZ2RN60I03GI9
[x-aws-redirects] => 0
[x-aws-stringtosign] => DELETE
application/x-www-form-urlencoded
Fri, 10 Jul 2015 17:42:48 GMT
/admin/user?key
[x-aws-requestheade
Turn off write cache on the controller. You're probably seeing the flush to disk.
Tyler Bishop
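(Controller CLIs differ; as an illustration only, on MegaRAID-based controllers the write cache can be forced to write-through roughly like this, where /c0/v0 stand in for your controller and virtual drive.)

    # storcli syntax (newer tool)
    storcli /c0/v0 set wrcache=wt
    # MegaCli syntax (older tool)
    MegaCli -LDSetProp WT -LAll -aAll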
You want write cache to disk, no write cache for SSD.
I assume all of your data disks are single-drive RAID 0?
Tyler Bishop
I have this Ceph node that will correctly recover into my Ceph pool, and
performance looks to be normal for the RBD clients. However, a few minutes
after finishing recovery the RBD clients begin to fall over and cannot write
data to the pool.
I've been trying to figure this out for weeks! N
When trying to zap and prepare a disk it fails to find the partitions.
[ceph@ceph0-mon0 ~]$ ceph-deploy -v disk zap ceph0-node1:/dev/mapper/35000c50031a1c08b
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.21)