After dealing with Ubuntu for a few days I decided to circle back to CentOS 7.
It appears that the latest ceph-deploy takes care of the initial issues I had.
Now I'm hitting a new issue that has to do with an improperly defined URL.
When I do "ceph-deploy install node1 node2 node3" it fails beca
Found the file. You need to edit /usr/lib/python2.7/site-
packages/ceph_deploy/hosts/centos/install.py: on line 31, change the return to
return 'rhel' + distro.normalized_release.major
Probably a bug that needs to be fixed in the deploy packages.
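For anyone hitting the same thing, here is a rough sketch of what the patched
helper could look like. Only the return expression comes from the fix above;
the function name and argument are my guesses, not the actual ceph-deploy source:

# Hypothetical sketch of the edited helper in ceph_deploy/hosts/centos/install.py.
# Only the return expression is taken from the fix described above; the
# function name and argument are assumptions.
def repo_part(distro):
    # CentOS 7 should map to 'rhel7' in the download URL rather than a
    # hard-coded release string
    return 'rhel' + distro.normalized_release.major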
Hello List
To confirm what Christian has said: we have been playing with a 3 node
cluster with 4 SSDs (3610) per node. Putting the journals on the OSD SSDs we
were getting 770 MB/s sustained with large sequential writes, and 35
MB/s and about 9200 IOPS with small random writes. Putting an NVMe as
journa
Same here
[Connecting to download.ceph.com (173.236.253.173)]
On Sat, Jun 25, 2016 at 4:50 AM, Jeronimo Romero wrote:
> Dear ceph overlords. It seems that the ceph download server is down.
+1 for 18TB and all SSD - if you need any decent IOPS with a cluster
this size then all SSDs are the way to go.
On Mon, Jun 27, 2016 at 11:47 AM, David wrote:
> Yes you should definitely create different pools for different HDD types.
> Another decision you need to make is whether you want ded
Hi Mario
Perhaps it's covered under Proxmox support. Do you have support on your
Proxmox install from the guys at Proxmox?
Otherwise you can always buy from Red Hat:
https://www.redhat.com/en/technologies/storage/ceph
On Thu, Jun 30, 2016 at 7:37 AM, Mario Giammarco wrote:
> Last two questions:
Ceph really wouldn't make sense on a single node. Are you sure that's
what you want to do?
On Sat, Jul 16, 2016 at 9:34 PM, Marc Roos wrote:
>
> I am looking a bit at ceph on a single node. Does anyone have experience
> with cloudfuse?
>
> Do I need to use the rados-gw? Does it even work with cep
Such great detail in this post, David. This will come in very handy
for people in the future.
On Thu, Jul 21, 2016 at 8:24 PM, David Turner
wrote:
> The Mon store is important and since your cluster isn't healthy, they need
> to hold onto it to make sure that when things come up that the mon can
>
Hi Felix
If you have an R730XD then you should have 2 x 2.5" slots on the back.
You can put SSDs in RAID 1 for your OS there.
On Fri, Aug 12, 2016 at 12:41 PM, Félix Barbeira wrote:
> Hi,
>
> I'm planning to make a ceph cluster but I have a serious doubt. At this
> moment we have ~10 servers D
You're going to see pretty slow performance on a cluster this size
with spinning disks.
Ceph scales very, very well, but at this size of cluster it can be
challenging to get decent throughput and IOPS.
For something small like this, either use all-SSD OSDs or consider
having more spinning OSDs.
Hi Nick
Interested in this comment - "-Dual sockets are probably bad and will
impact performance."
Have you got real world experience of this being the case?
Thanks - B
On Sun, Aug 21, 2016 at 8:31 AM, Nick Fisk wrote:
>> -Original Message-
>> From: Alex Gorbachev [mailto:a...@iss-inte
If you point at the eu.ceph.com
ceph.apt-get.eu has address 185.27.175.43
ceph.apt-get.eu has IPv6 address 2a00:f10:121:400:48c:baff:fe00:477
On Sat, Aug 20, 2016 at 11:59 AM, Vlad Blando wrote:
> Hi Guys,
>
> I will be installing Ceph behind a very restrictive firewall and one of
> the requir
Seems the version number has jumped another point but the files aren't in
the repo yet? 0.94.9?
Get:1 http://download.ceph.com/debian-hammer/ jessie/main ceph-common amd64
0.94.9-1~bpo80+1 [6029 kB]
Err http://download.ceph.com/debian-hammer/ jessie/main librbd1 amd64
0.94.9-1~bpo80+1
404 Not F
Amazing improvements to performance in the preview now. I wonder
whether there will be a filestore --> bluestore upgrade path...
On Wed, Aug 31, 2016 at 6:32 AM, Alexandre DERUMIER wrote:
> Hi,
>
> Here are the slides of the ceph bluestore presentation
>
> http://events.linuxfoundation.org/sites/events/f
Looks like they are having major challenges getting that Ceph cluster
running again. Still down.
On Tuesday, October 11, 2016, Ken Dreyer wrote:
> I think this may be related:
>
http://www.dreamhoststatus.com/2016/10/11/dreamcompute-us-east-1-cluster-service-disruption/
>
> On Tue, Oct 11, 2016
What is the issue exactly?
On Fri, Oct 28, 2016 at 2:47 AM, wrote:
> I think this issue may not be related to your poor hardware.
>
> Our cluster has 3 Ceph monitors and 4 OSDs.
>
> Each server has
> 2 CPUs ( Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz ), 32 GB memory
>
> OSD nodes have 2 SSD
This is like your mother telling you not to cross the road when you were 4
years of age but not telling you it was because you could be flattened
by a car :)
Can you expand on your answer? Even if you are in a DC with A+B power,
redundant UPS, dual feeds from the electric company, onsite generators and
dual PSUs, these things happen.
>>
>> http://www.theregister.co.uk/2016/11/15/memset_power_cut_service_interruption/
>>
>> We had customers who had kit in this DC.
>>
>> To use your analogy, it's like crossing the road at traffic lights but not
>> checking cars ha
Hi Lionel,
Mega ouch - I've recently seen the act of measuring power consumption
in a data centre (they clamp a probe onto the cable for an amp reading,
seemingly) take out a cabinet which had *redundant* power feeds - so
anything is possible, I guess.
Regards
Brian
On Sat, Nov 19, 2016 at
Given that you are all SSD, I would do exactly what Wido said -
gracefully remove the OSD and gracefully bring up the OSD on the new
SSD.
Let Ceph do what it's designed to do. The rsync idea looks great on
paper - not sure what issues you would run into in practice.
On Fri, Dec 16, 2016 at 12:38
Hi Satish
You should be able to choose different modes of operation for each
port / disk. Most Dell servers will let you do RAID and JBOD in
parallel.
If you can't do that, and can only turn RAID on or off, then you
can use software RAID for your OS.
On Fri, Jul 20, 2018 at 9:01 PM, Satish Patel
Hello
Wasn't this originally an issue with the mon store? Now you are getting a
checksum error from an OSD? I think some hardware in this node is just
hosed.
On Wed, Feb 21, 2018 at 5:46 PM, Behnam Loghmani
wrote:
> Hi there,
>
> I changed SATA port and cable of SSD disk and also update ceph t
Hello List - anyone using these drives and have any good / bad things
to say about them?
Thanks!
Thanks Paul, Wido and Konstantin! If we give them a go I'll share some
test results.
On Sat, Jun 16, 2018 at 12:09 PM, Konstantin Shalygin wrote:
> Hello List - anyone using these drives and have any good / bad things
> to say about them?
>
>
> A few months ago I was asking about PM1725
> http://
ng
to give great results.
Brian
On Wed, Jun 20, 2018 at 8:28 AM, Wladimir Mutel wrote:
> Dear all,
>
> I set up a minimal 1-node Ceph cluster to evaluate its performance. We
> tried to save as much as possible on the hardware, so now the box has Asus
> P10S-M WS motherboard, Xe
Hi Stefan
$ sudo yum provides liboath
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.strencom.net
* epel: mirror.sax.uk.as61049.net
* extras: mirror.strencom.net
* updates: mirror.strencom.net
liboath-2.4.1-9.el7.x86_64 : Library for OATH handling
Repo
Hi John
Have you looked at the Ceph documentation?
RBD: http://docs.ceph.com/docs/luminous/rbd/rbd-snapshot/
The Ceph project documentation is really good for most areas. Have a
look at what you can find, then come back with more specific questions!
Thanks
Brian
On Wed, Jun 27, 2018 at 2:24 PM
Hi Michael,
Install sudo on the Proxmox server and add an entry for the nagios user like:
nagios ALL=(ALL) NOPASSWD:/usr/bin/ceph
in a file in /etc/sudoers.d
Brian
On Wed, Feb 1, 2017 at 8:55 AM, Michael Hartz wrote:
> I am running ceph as part of a Proxmox Virtualization cluster, which is doing
>
And one thing I left out - your command line for the ceph checks in Nagios
should be prefixed by sudo:
'sudo ceph health'
server# su nagios
$ ceph health
Error initializing cluster client: Error('error calling
conf_read_file: errno EACCES',)
$ sudo ceph health
HEALTH_OK
On Wed, Feb 1, 20
This is great - had no idea you could have this level of control with
Ceph authentication.
On Wed, Feb 1, 2017 at 12:29 PM, John Spray wrote:
> On Wed, Feb 1, 2017 at 8:55 AM, Michael Hartz
> wrote:
>> I am running ceph as part of a Proxmox Virtualization cluster, which is
>> doing great.
>>
Hi Max,
Have you considered Proxmox at all? It integrates nicely with Ceph storage. I
moved from XenServer a long time ago and have no regrets.
Thanks
Brian
On Sat, Feb 25, 2017 at 12:47 PM, Massimiliano Cuttini
wrote:
> Hi Iban,
>
> you are running xen (just the software) not xenserver (ad hoc lin
and issue hasn't come up since.
Brian
On Mon, Apr 3, 2017 at 8:03 AM, Vlad Blando wrote:
> Most of the time random and most of the time 1 at a time, but I also see
> 2-3 that are down at the same time.
>
> The network seems fine, the bond seems fine, I just don't know where
Hi Dan,
Various Proxmox daemons don't look happy on startup either.
Are you using a single Samsung SSD for your OSD journals on this host?
Is that SSD OK?
Brian
On Tue, Mar 29, 2016 at 5:22 AM, Dan Moses wrote:
> Any suggestions to fix this issue? We are using Ceph with proxmox
Hi Dan
You can increase pg_num but not decrease it. I would go with 512 for this -
that will allow you to increase in the future.
From ceph.com: "Having 512 or 4096 Placement Groups is roughly
equivalent in a cluster with less than 50 OSDs."
I don't even think you will be able to set pg_num to 4096 - ceph
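For what it's worth, the usual starting point is the guideline of
(number of OSDs * 100) / replica count, rounded up to the next power of two.
A quick sketch of that calculation, with made-up OSD counts rather than your
actual cluster:

# Rough sketch of the common PG-count guideline: (OSDs * 100) / pool size,
# rounded up to the next power of two. The inputs below are hypothetical.
def suggested_pg_num(num_osds, pool_size=3, pgs_per_osd=100):
    target = (num_osds * pgs_per_osd) / float(pool_size)
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

print(suggested_pg_num(12, pool_size=3))   # -> 512
print(suggested_pg_num(48, pool_size=3))   # -> 2048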
If you have a qcow2 image on *local* type storage and move it to a
Ceph pool, Proxmox will automatically convert the image to raw.
Performance is entirely down to your particular setup - moving the image
to a Ceph pool certainly won't guarantee a performance increase - in
fact the opposite could happen.
Yo
Sorry to drag this one up again.
Just got the 'unsubscribed due to excessive bounces' thing.
'Your membership in the mailing list ceph-users has been disabled due
to excessive bounces The last bounce received from you was dated
21-Dec-2018. You will not get any more messages from this list until
y
Hi Marc
Filezilla has decent S3 support https://filezilla-project.org/
YMMV, of course!
On Thu, Apr 18, 2019 at 2:18 PM Marc Roos wrote:
>
>
> I have been looking a bit at the s3 clients available to be used, and I
> think they are quite shitty, especially this Cyberduck that processes
> files w
ware/
>
> not saying it definitely is, or isn't malware-ridden, but it sure was
shady at that time.
> I would suggest not pointing people to it.
>
> Den tors 18 apr. 2019 kl 16:41 skrev Brian : :
>>
>> Hi Marc
>>
>> Filezilla has decent S3 support https://file
I wouldn't say that's a pretty common failure. The flaw here perhaps is the
design of the cluster and the fact that it was relying on a single power source.
Power sources fail. Dual power supplies connected to A and B power sources in
the data centre is pretty standard.
On Tuesday, July 2, 2019, Bryan Henderso
Don't use nginx. The current version buffers all the uploads to the
local disk, which causes all sorts of problems with radosgw (timeouts,
clock skew errors, etc). Use tengine instead (or Apache). I sent the
mailing list some info on tengine a couple of weeks ago.
On 5/29/2014 6:11 AM, Michael
the ssl work so that nginx acts as an ssl proxy in front of the radosgw?
Cheers
Andrei
Did the python-ceph package go away or something? Upgrading from 0.80.1-0.el6
to 0.80.1-2.el6 does not work.
# yum install ceph python-ceph
Package python-ceph-0.80.1-0.el6.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package ceph.x86_64
Also the 0.80.1-2.el6 ceph-radosgw RPM no longer includes an init script.
Where is the proper place to report issues with the RPMs?
On 6/2/2014 9:53 AM, Brian Rak wrote:
Did the python-ceph package go away or something? Upgrading from
0.80.1-0.el6 to 0.80.1-2.el6 does not work.
# yum
They're from that link. They were definitely present in the repository
a couple hours ago. Maybe this got reverted?
On 6/2/2014 1:08 PM, Alfredo Deza wrote:
Brian
Where is that ceph repo coming from? I don't see any 0.80.1-2 in
http://ceph.com/rpm-firefly/el6/x86_64/
On Mon, J
Alfredo Deza wrote:
Brian
Where is that ceph repo coming from? I don't see any 0.80.1-2 in
http://ceph.com/rpm-firefly/el6/x86_64/
On Mon, Jun 2, 2014 at 10:01 AM, Brian Rak wrote:
Also the 0.80.1-2.el6 ceph-radosgw RPM no longer includes an init script.
Where is the proper place to report
?repo=epel-$releasever&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever
exclude=*ceph*
I've gotten several emails about this, so it's definitely something
other people are running into.
On 6/2/2014 1:15 PM, B
one)
On 6/3/2014 2:58 PM, Pedro Sousa wrote:
Hi Brian,
I've done that but the issue persists:
Dependencies Resolved
==
Package Arch Version
I'm trying to find an issue with RadosGW and special characters in
filenames. Specifically, it seems that filenames with a + in them are
not being handled correctly, and that I need to explicitly escape them.
For example:
---request begin---
HEAD /ubuntu/pool/main/a/adduser/adduser_3.113+nmu3
,
but it sounds familiar so I think you probably want to search the list
archives and the bug tracker (http://tracker.ceph.com/projects/rgw).
What version precisely are you on?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Jun 25, 2014 at 2:58 PM, Brian Rak wrote:
I
filenames or percent-encode
the URL explicitly.
Rgds,
G>
On Wed, Jun 25, 2014 at 8:41 PM, Brian Rak wrote:
ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
I'll try to take a look through the bug tracker, but I didn
e
web server configuration that might be rewriting the requests. In this
case it seems that you're using nginx which is outside of our usual
test environment, so it might be related.
Yehuda
On Jun 25, 2014 5:39 PM, "Brian Rak" wrote:
Unfortunately, both the client and actual files a
wrote:
The gateway itself supports these kinds of characters. Usually we see
this issue when there's something in front of the web server (like a
load balancer) that modifies the requests. Another possibility is the
web server configuration that might be rewriting the requests. In this
case i
Going back to my first post, I linked to this
http://stackoverflow.com/questions/1005676/urls-and-plus-signs
Per the definition of application/x-www-form-urlencoded:
http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.1
"Control names and values are escaped. Space characters are replace
want to
watch the release notes. Once you work around a bug, someone will fix
the bug and break your hack.
On Thu, Jun 26, 2014 at 8:54 AM, Brian Rak wrote:
Going back to my first post, I linked to this
http://stackoverflow.com/question
" or "\" (I don't know
which), authentication fails using python-swiftclient.
Is it an issue?
On 06/25/2014 11:58 PM, Brian Rak wrote:
I'm trying to find an issue with RadosGW and special characters
in filenames. Specifically, it seems that filenames wi
This had come up on the iPXE lists awhile ago, and I had the following
suggestion (at least for linux):
Setup radosgw, store your kernel and initrd there. Create an iPXE
script to boot off this stored kernel/initrd, and have the kernel know
how to mount an RBD volume directly.
This is a litt
Just for reference, I've opened http://tracker.ceph.com/issues/8702
On 6/26/2014 10:18 PM, Brian Rak wrote:
My current workaround plan is to just upload both versions of the
file... I think this is probably the simplest solution with the least
possibility of breaking later on.
On 6/26/2
That sounds like you have some kind of odd situation going on. We only
use radosgw with nginx/tengine so I can't comment on the apache part of it.
My understanding is this:
You start ceph-radosgw, and this creates a FastCGI socket somewhere (verify
this is created with lsof; there are some permis
I'm pulling my hair out with Ceph. I am testing things with a 5-server
cluster. I have 3 monitors, and two storage machines each with 4 OSDs. I
have started from scratch 4 times now, and can't seem to figure out how to
get a clean status. Ceph health reports:
HEALTH_WARN 34 pgs degraded; 192
Brian Lovett writes:
I restarted all of the OSDs and noticed that ceph shows 2 OSDs up even if
the servers are completely powered down: osdmap e95: 8 osds: 2 up, 8 in
Why would that be?
Gregory Farnum writes:
>
> What's the output of "ceph osd map"?
>
> Your CRUSH map probably isn't trying to segregate properly, with 2
> hosts and 4 OSDs each.
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
Is this what you are looking for?
ceph osd map rbd ceph
osdmap e1
Gregory Farnum writes:
> ...and one more time, because apparently my brain's out to lunch today:
>
> ceph osd tree
>
> *sigh*
>
haha, we all have those days.
[root@monitor01 ceph]# ceph osd tree
# id    weight  type name      up/down reweight
-1      14.48   root default
-2      7.24
Gregory Farnum writes:
> So those disks are actually different sizes, in proportion to their
> weights? It could be having an impact on this, although it *shouldn't*
> be an issue. And your tree looks like it's correct, which leaves me
> thinking that something is off about your crush rules. :/
>
Gregory Farnum writes:
>
> On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett
> wrote:
> > "profile": "bobtail",
>
> Okay. That's unusual. What's the oldest client you need to support,
> and what Ceph version are you using?
This is a
Christian Balzer writes:
> So either make sure these pools really have a replication of 2 by deleting
> and re-creating them or add a third storage node.
I just executed "ceph osd pool set {POOL} size 2" for both pools. Anything
else I need to do? I still don't see any changes to the status
Gregory Farnum writes:
>
> On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett
> wrote:
> > "profile": "bobtail",
>
> Okay. That's unusual. What's the oldest client you need to support,
> and what Ceph version are you using? You proba
Christian Balzer writes:
> Read EVERYTHING you can find about crushmap rules.
>
> The quickstart (I think) talks about 3 storage nodes, not OSDs.
>
> Ceph is quite good when it comes to defining failure domains, the default
> is to segregate at the storage node level.
> What good is a replicati
Alright, I was finally able to get this resolved without adding another node.
As pointed out, even though I had a config variable that defined the default
replicated size at 2, ceph for some reason created the default pools (data
and metadata) with a value of 3. After digging through documentat
+    } else {
+      dest[pos++] = ' ';
+      ++src;
+    }
  } else {
    src++;
    char c1 = hex_to_num(*src++);
Though, I'm not sure why this was implemented. I would guess that this
function needs to deal with URL parameters as well as file paths, but I
don't understand t
I'm evaluating Ceph for our new private and public cloud environment. I have a
"working" Ceph cluster running on CentOS 6.5, but have had a heck of a time
figuring out how to get RBD support to connect to CloudStack. Today I found
out that the default kernel is too old, and while I could compile
I'm installing the latest Firefly on a fresh CentOS 7 machine using the RHEL
7 yum repo. I'm getting a few dependency issues when using ceph-deploy
install. Mostly it looks like it doesn't like Python 2.7.
[monitor01][DEBUG ] --> Processing Dependency: libboost_system-mt.so.5()
(64bit) for pack
Simon Ironside writes:
>
> Hi Brian,
>
> I have a fresh install working on RHEL 7 running the same version of
> python as you. I did have trouble installing from the ceph.com yum repos
> though and worked around it by creating and installing from my own local
> yum
Why do you have an MDS active? I'd suggest getting rid of that at least
until you have everything else working.
I see you've set nodown on the OSDs, did you have problems with the OSDs
flapping? Do the OSDs have broken connectivity between themselves? Do
you have some kind of firewall interf
What happens if you remove nodown? I'd be interested to see what OSDs
it thinks are down. My next thought would be tcpdump on the private
interface. See if the OSDs are actually managing to connect to each other.
For comparison, when I bring up a cluster of 3 OSDs it goes to HEALTH_OK
nearly
ceph version 0.80.4 (7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f)
I recently managed to cause some problems for one of our clusters, we
had 1/3 of the OSDs fail and lose all the data.
I removed all the failed OSDs from the crush map, and did 'ceph osd
rm'. Once it finished recovering, I was lef
Ahh figured it out. I hadn't removed the dead OSDs from the crush map,
which was apparently confusing ceph.
I just did 'ceph osd crush rm XXX' for all of them, restarted all the
online OSDs, and the pg got created!
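For the archives, the full clean-up for each dead OSD is roughly the sequence
below; the OSD ids are placeholders, and 'ceph auth del' is the extra step
that is easy to forget:

# Rough sketch of fully removing dead OSDs; the ids below are hypothetical.
import subprocess

dead_osds = [3, 7, 11]
for osd_id in dead_osds:
    subprocess.check_call(["ceph", "osd", "crush", "rm", "osd.%d" % osd_id])
    subprocess.check_call(["ceph", "auth", "del", "osd.%d" % osd_id])
    subprocess.check_call(["ceph", "osd", "rm", str(osd_id)])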
On 8/8/2014 4:51 PM, Brian Rak wrote:
ceph
We do it with rbd volumes. We're using rbd export/import and netcat to
transfer it across clusters. This was the most efficient solution, that
did not require one cluster to have access to the other clusters (though
it does require some way of starting the process on the different machines).
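For anyone curious, the shape of it is roughly the following; the pool/image
name, host and port are made up, and the receiving cluster is assumed to be
running the matching 'nc -l | rbd import -' pipeline:

# Rough sketch of streaming an image between clusters with rbd export + netcat.
# Names, host and port are hypothetical; the receiver would run something like:
#   nc -l 7777 | rbd import - rbd/vm-disk-1
import subprocess

export = subprocess.Popen(["rbd", "export", "rbd/vm-disk-1", "-"],
                          stdout=subprocess.PIPE)
send = subprocess.Popen(["nc", "dest-host.example.com", "7777"],
                        stdin=export.stdout)
export.stdout.close()   # so nc sees EOF once rbd export finishes
send.wait()
export.wait()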
What are your ulimit settings? You could be hitting the max process count.
On 9/12/2014 9:06 AM, Christian Eichelmann wrote:
Hi Ceph-Users,
I have absolutely no idea what is going on on my systems...
Hardware:
45 x 4TB Harddisks
2 x 6 Core CPUs
256GB Memory
When initializing all disks and jo
That's not how ulimit works. Check the `ulimit -a` output.
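Being root just means you are allowed to raise the limits; every process still
has them. A quick way to see the relevant values from Python, nothing
Ceph-specific, just the standard resource module:

# Print the per-process limits that usually matter when starting many OSDs.
import resource

for name in ("RLIMIT_NPROC", "RLIMIT_NOFILE"):
    soft, hard = resource.getrlimit(getattr(resource, name))
    print("%s soft=%s hard=%s" % (name, soft, hard))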
On 9/12/2014 10:15 AM, Christian Eichelmann wrote:
Hi,
I am running all commands as root, so there are no limits for the processes.
Regards,
Christian
We're running ceph version 0.87
(c51c8f9d80fa4e0168aa52685b8de40e42758578), and seeing this:
HEALTH_WARN 1 pgs degraded; 1 pgs stuck degraded; 1 pgs stuck unclean; 1
pgs stuck undersized; 1 pgs undersized
pg 4.2af is stuck unclean for 77192.522960, current state
active+undersized+degraded, las
On 2/18/2015 3:01 PM, Florian Haas wrote:
On Wed, Feb 18, 2015 at 7:53 PM, Brian Rak wrote:
We're running ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578),
and seeing this:
HEALTH_WARN 1 pgs degraded; 1 pgs stuck degraded; 1 pgs stuck unclean; 1 pgs
stuck undersized;
On 2/18/2015 3:24 PM, Florian Haas wrote:
On Wed, Feb 18, 2015 at 9:09 PM, Brian Rak wrote:
What does your crushmap look like (ceph osd getcrushmap -o
/tmp/crushmap; crushtool -d /tmp/crushmap)? Does your placement logic
prevent Ceph from selecting an OSD for the third replica?
Cheers
On 2/26/2015 9:46 AM, Sage Weil wrote:
This is the first (and possibly final) point release for Giant. Our focus
on stability fixes will be directed towards Hammer and Firefly.
Is this something that was decided beforehand? Can we tell if a major
version is going to be maintained or not, bef
seeing
the gestures I'm making as I'm explaining it :)
bab
--
Brian Button
bbut...@agilestl.com | @brianbuttonxp | 636.399.3146
http://www.agileprogrammer.com
Do any of the Ceph repositories run rsync? We generally mirror the
repository locally so we don't encounter any unexpected upgrades.
eu.ceph.com used to run this, but it seems to be down now.
# rsync rsync://eu.ceph.com
rsync: failed to connect to eu.ceph.com: Connection refused (111)
rsync er
It looks like ceph.com is having some major issues with their git
repository right now. https://ceph.com/git/ gives a 500 error.
On 3/27/2015 8:11 AM, Vasilis Souleles wrote:
Hello,
I'm trying to create a 4-node Ceph Storage Cluster using ceph-dep
We just enabled a small cache pool on one of our clusters (v 0.94.1) and
have run into some issues:
1) Cache population appears to happen via the public network (not the
cluster network). We're seeing basically no traffic on the cluster
network, and multiple gigabits inbound to our cache OSDs
le/advisable to migrate from a giant install
back to a firefly install? I'm guessing not.
Thanks much,
Brian
[1] <http://ceph.com/docs/master/releases/>
[2] <http://ceph.com/debian/dists/>
"write"
>
> }
>
> ],
>
> "op_mask": "read, write, delete",
>
> "default_placement": "",
>
> "placement_tags": [],
>
> "bucket_quota": {
>
> "enabled": false,
>
> "m
need to change bucket
> quota one by one.
>
>
> Best wishes,
> Mika
--
Brian An
ster interface),
this causes all OSDs on that node to be marked down, so there's evidence to
support all heartbeat traffic moving over the interface. I just want to ensure
that what I'm seeing is normal and that I haven't otherwise bo
s backported
for the upstream ceph package repositories for that platform.
Additionally, I'll note that I'm personally likely to continue to use
sysvinit so long as I still can, even when I am able to make the switch
to Jessie.
Thanks,
Brian
[1] <http://www.spinics.net/lists/ceph-
care
to do in large environments. In the meantime I'll just keep refreshing
http://ceph.com/debian/dists/ every couple of days :)
Cheers,
Brian
Alexandre DERUMIER 2015-07-31 09:21:
As I still haven't heard or seen about any upstream distros for Debian
Jessie (see also [1]),
Gitbu
So I guess that's fairly clear.
Any other options I should be considering?
Regards,
Brian.
to link to another one?
Cheers,
Brian.
Hi Greg,
> I haven't had any luck with the seq bench. It just errors every time.
>
Can you confirm you are using the --no-cleanup flag with rados write? This
will ensure there is actually data to read for subsequent seq tests.
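i.e. write first with --no-cleanup, then run the seq test against the same
pool. A small sketch of the sequence; the pool name and durations are just
examples:

# Sketch of the two-step rados bench flow; pool name and durations are examples.
import subprocess

pool = "testpool"
# Write objects and keep them around so there is data for the read test.
subprocess.check_call(["rados", "bench", "-p", pool, "60", "write", "--no-cleanup"])
# Sequential-read benchmark against the objects written above.
subprocess.check_call(["rados", "bench", "-p", pool, "60", "seq"])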
~Brian
As long as default mon and osd paths are used, and you have the proper mon
caps set, you should be okay.
Here is a mention of it in the ceph docs:
http://ceph.com/docs/master/install/upgrading-ceph/#transitioning-to-ceph-deploy
Brian Andrus
Storage Consultant, Inktank
On Fri, Nov 1, 2013 at 4
available RAM, you
could experiment with increasing the multiplier in the equation you are
using and see how it affects your final number.
The pg_num and pgp_num parameters can safely be changed before or after
your new nodes are integrated.
~Brian
On Sat, Nov 30, 2013 at 11:35 PM, Indra Pramana