Thanks, Gregory.
On Sun, Nov 1, 2015 at 12:06 AM, Gregory Farnum wrote:
> On Friday, October 30, 2015, mad Engineer
> wrote:
>
>> I am learning Ceph block storage and read that each object's size is 4 MB. I
>> am still not clear about the concepts of object storage; what will happen if
>> the actual
Hello,
For a production application (for example, OpenStack's block storage), is
it better to set up data to be stored with two replicas or with three?
Do two replicas give better performance at lower cost?
Thanks.
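For what it's worth, the replica count is a per-pool setting; a minimal sketch
of inspecting and changing it through the librados Python bindings (the pool
name 'volumes' is only a placeholder):

import json
import rados

# Sketch only: read a pool's replica count, then raise it to three.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

cmd = {'prefix': 'osd pool get', 'pool': 'volumes', 'var': 'size', 'format': 'json'}
ret, out, err = cluster.mon_command(json.dumps(cmd), b'')
print('current size:', out)

cmd = {'prefix': 'osd pool set', 'pool': 'volumes', 'var': 'size', 'val': '3'}
ret, out, err = cluster.mon_command(json.dumps(cmd), b'')
print('set size rc:', ret, err)

cluster.shutdown()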
Can those hints be disabled somehow? I was battling XFS preallocation the other
day, and the mount option didn't make any difference - maybe because those
hints have precedence (which could mean they aren't working as they should),
maybe not.
In particular, when you fallocate a file, some numbe
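A quick way to see what a given filesystem actually allocates is to fallocate
a file and compare its apparent size with the blocks backing it; a small
illustration (the file name is arbitrary, Python 3):

import os

# Preallocate 4 MB and compare the apparent size with what the
# filesystem actually allocated (st_blocks is in 512-byte units).
fd = os.open('prealloc-test', os.O_CREAT | os.O_WRONLY, 0o644)
os.posix_fallocate(fd, 0, 4 * 1024 * 1024)
os.close(fd)

st = os.stat('prealloc-test')
print('apparent size:', st.st_size, 'allocated:', st.st_blocks * 512)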
On 02-11-15 11:56, Jan Schermer wrote:
> Can those hints be disabled somehow? I was battling XFS preallocation
> the other day, and the mount option didn't make any difference - maybe
> because those hints have precedence (which could mean they aren't
> working as they should), maybe not.
>
Thi
> On 02 Nov 2015, at 11:59, Wido den Hollander wrote:
>
>
>
> On 02-11-15 11:56, Jan Schermer wrote:
>> Can those hints be disabled somehow? I was battling XFS preallocation
>> the other day, and the mount option didn't make any difference - maybe
>> because those hints have precedence (which
Hi All,
We're currently on version 0.94.5 with three monitors and 75 OSDs.
I've peeked at the decompiled CRUSH map, and I see that all IDs are
commented with '# Here be dragons!', or more literally: '# do not
change unnecessarily'.
Now, what would happen if an incautious user happened t
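For reference, the decompiled map in question comes from the usual
dump/decompile/recompile cycle, roughly like this (a sketch that just drives
the standard tools; file names are arbitrary):

import subprocess

# Dump and decompile the current CRUSH map, and (after hand-editing
# crush.txt) recompile and inject it again.
subprocess.check_call(['ceph', 'osd', 'getcrushmap', '-o', 'crush.bin'])
subprocess.check_call(['crushtool', '-d', 'crush.bin', '-o', 'crush.txt'])
# ... edit crush.txt, leaving the generated ids alone ...
subprocess.check_call(['crushtool', '-c', 'crush.txt', '-o', 'crush.new'])
subprocess.check_call(['ceph', 'osd', 'setcrushmap', '-i', 'crush.new'])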
On 02-11-15 12:30, Loris Cuoghi wrote:
> Hi All,
>
> We're currently on version 0.94.5 with three monitors and 75 OSDs.
>
> I've peeked at the decompiled CRUSH map, and I see that all ids are
> commented with '# Here be dragons!', or more literally : '# do not
> change unnecessarily'.
>
> Now,
On 02/11/2015 12:47, Wido den Hollander wrote:
On 02-11-15 12:30, Loris Cuoghi wrote:
Hi All,
We're currently on version 0.94.5 with three monitors and 75 OSDs.
I've peeked at the decompiled CRUSH map, and I see that all ids are
commented with '# Here be dragons!', or more literally : '#
Hi Team,
We have a Ceph setup with 2 OSDs and replica 2, and it is mounted with OCFS2
clients and working. When we added a new OSD, all of the clients' rbd-mapped
devices disconnected, and rbd ls or rbd map commands hung. We
waited many hours for the new OSD to rebalance, but
Hello all,
I'm attempting to use the python API to get the quota of a pool, but I can't
see it in the documentation (http://docs.ceph.com/docs/v0.94/rados/api/python/).
Does anyone know how to get the quota (Python or C) without making a call to
"ceph osd pool get-quota"?
I am using ceph
On Mon, Nov 2, 2015 at 9:39 PM, Alex Leake wrote:
> Hello all,
>
>
> I'm attempting to use the python API to get the quota of a pool, but I can't
> see it in the documentation
> (http://docs.ceph.com/docs/v0.94/rados/api/python/).
The call you're looking for is Rados.mon_command, which seems to b
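A rough sketch of that approach (the pool name 'rbd' is only a placeholder):

import json
import rados

# Fetch a pool's quota through Rados.mon_command instead of shelling
# out to 'ceph osd pool get-quota'.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
cmd = json.dumps({'prefix': 'osd pool get-quota', 'pool': 'rbd',
                  'format': 'json'})
ret, outbuf, outs = cluster.mon_command(cmd, b'')
if ret == 0:
    quota = json.loads(outbuf)
    print(quota.get('quota_max_bytes'), quota.get('quota_max_objects'))
else:
    print('mon_command failed:', ret, outs)
cluster.shutdown()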
I'd recommend running your program through valgrind first to see if something
pops out immediately.
--
Jason Dillaman
- Original Message -
> From: "min fang"
> To: ceph-users@lists.ceph.com
> Sent: Saturday, October 31, 2015 10:43:22 PM
> Subject: Re: [ceph-users] segmentation fau
When I did an 'unset noout' on the cluster, all three mons got a
segmentation fault, then continued as if nothing had happened. Regular
segmentation faults started on the mons after upgrading to 0.94.5. Ubuntu
Trusty LTS. Has anyone seen anything similar?
-Arnulf
Backtraces:
mon1:
#0 0x7f0b2969120b in raise (si
John,
Thank you very much! Works exactly as expected.
Kind Regards,
Alex.
From: John Spray
Sent: 02 November 2015 13:19
To: Alex Leake
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] retrieving quota of ceph pool using librados or
python API
On
Hi,
I am a new user of Ceph, running version 0.94.5. Can someone help me with
this rebalancing question?
I have 1 mon and 4 OSDs, and 100 PGs in my pool with 2+1 erasure coding. I have
listed the steps I took below. Am I doing these correctly? Why are up and
acting showing differently in ste
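For context, a 2+1 erasure-coded pool of 100 PGs can be created roughly like
this (a sketch via mon_command; the profile and pool names are placeholders):

import json
import rados

# Create a k=2, m=1 erasure-code profile and a 100-PG pool that uses it.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

for cmd in [
    {'prefix': 'osd erasure-code-profile set', 'name': 'ec21',
     'profile': ['k=2', 'm=1']},
    {'prefix': 'osd pool create', 'pool': 'ecpool', 'pg_num': 100,
     'pgp_num': 100, 'pool_type': 'erasure', 'erasure_code_profile': 'ec21'},
]:
    ret, out, err = cluster.mon_command(json.dumps(cmd), b'')
    print(cmd['prefix'], '->', ret, err)

cluster.shutdown()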
Regardless of what the crush tool does, I wouldn't muck around with the IDs
of the OSDs. The rest of Ceph will probably not handle it well if the CRUSH
IDs don't match the OSD numbers.
-Greg
On Monday, November 2, 2015, Loris Cuoghi wrote:
> On 02/11/2015 12:47, Wido den Hollander wrote:
>
>>
Thanks Greg :)
For the OSDs, I understand; on the other hand, for intermediate
abstractions like hosts, racks, and rooms, do you agree that it should
currently be possible to change the IDs (always under the "one change at
a time, I promise mom" rule)?
Clearly, a good amount of shuffling shoul
Hi!
I am trying to set up a Rados Gateway, prepared for multiple regions
and zones, according to the documentation at
http://docs.ceph.com/docs/hammer/radosgw/federated-config/.
Ceph version is 0.94.3 (Hammer).
I am stuck at the "Create zone users" step
(http://docs.ceph.com/docs/hammer/rado
On Mon, Nov 2, 2015 at 7:42 AM, Loris Cuoghi wrote:
> Thanks Greg :)
>
> For the OSDs, I understand, on the other hand for intermediate abstractions
> like hosts, racks and rooms, do you agree that it should currently be
> possible to change the IDs (always under the "one change at a time, I
> pro
On 2/25/15 2:31 PM, Sage Weil wrote:
> Hey,
>
> We are considering switching to civetweb (the embedded/standalone rgw web
> server) as the primary supported RGW frontend instead of the current
> apache + mod-fastcgi or mod-proxy-fcgi approach. "Supported" here means
> both the primary platform
Hi,
We want to have users that can authenticate to our RGW but have no quota
and cannot create buckets. The former works by setting the
user_quota max_size_kb to 0. The latter does not seem to work:
setting max_buckets to 0 means there is no limit on the number of buckets that
can be created
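For reference, the two settings being described look roughly like this (a
sketch shelling out to radosgw-admin; the uid 'demo' is a placeholder, and the
quota enable step is my assumption, not something stated above):

import subprocess

# Zero the user's size quota, enable it, and set max_buckets to 0
# (which, per the thread, does not actually block bucket creation).
subprocess.check_call(['radosgw-admin', 'quota', 'set', '--uid=demo',
                       '--quota-scope=user', '--max-size-kb=0'])
subprocess.check_call(['radosgw-admin', 'quota', 'enable', '--uid=demo',
                       '--quota-scope=user'])  # assumed step
subprocess.check_call(['radosgw-admin', 'user', 'modify', '--uid=demo',
                       '--max-buckets=0'])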
Dear all, can anybody help?
2015-10-30 10:37 GMT+02:00 Voloshanenko Igor :
> It's pain, but not... :(
> We already used your updated lib in dev env... :(
>
> 2015-10-30 10:06 GMT+02:00 Wido den Hollander :
>
>>
>>
>> On 29-10-15 16:38, Voloshanenko Igor wrote:
>> > Hi Wido and all community.
>> >
Most likely not going to be related to 13045 since you aren't actively
exporting an image diff. The most likely problem is that the RADOS IO context
is being closed prior to closing the RBD image.
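With the Python bindings, the intended ordering looks like this (a rough
illustration; pool and image names are placeholders):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
image = rbd.Image(ioctx, 'myimage')
try:
    print(image.size())
finally:
    image.close()       # close the image first...
    ioctx.close()       # ...then the I/O context it was opened from
    cluster.shutdown()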
--
Jason Dillaman
- Original Message -
> From: "Voloshanenko Igor"
> To: "Ceph Use
Hi,
Thank you, that makes sense for testing, but I'm afraid not in my case.
Even if I test on a volume that has already been tested many times, the IOPS
will not go up again. Yes, I mean this VM is broken; the IOPS of the VM will
never go up.
Thanks!
hzwuli...@gmail.com
From: Chen, Xiaoxi
Date
Thank you, Jason!
Any advice for troubleshooting?
I'm looking in the code, and right now I don't see any bad things :(
2015-11-03 1:32 GMT+02:00 Jason Dillaman :
> Most likely not going to be related to 13045 since you aren't actively
> exporting an image diff. The most likely problem is that th
I can't say I know enough about CloudStack's integration with RBD to
offer much assistance. Perhaps, if CloudStack is receiving an exception for
something, it is not properly handling object lifetimes in this case.
--
Jason Dillaman
- Original Message -
> From: "Voloshanenk
Hi,
Can anybody please help me with this issue?
Regards
Prabu
On Mon, 02 Nov 2015 17:54:27 +0530 gjprabu
wrote
Hi Team,
We have a Ceph setup with 2 OSDs and replica 2, and it is mounted with OCFS2
clients and working. When we added a ne
Hi,
The directory there should just be a simulated hierarchical structure, with '/'
in the object names. Do you mind checking the remaining objects in the Ceph
pool .rgw.buckets?
$ rados ls -p .rgw.buckets | grep default.157931.5_hive
If there are still objects coming out, you might try to delete them from t
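The same check is possible from librados; a hedged sketch (the prefix is taken
from the command above, and the removal is left commented out):

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('.rgw.buckets')
for obj in ioctx.list_objects():
    if obj.key.startswith('default.157931.5_hive'):
        print(obj.key)
        # ioctx.remove_object(obj.key)  # only once you are sure they are orphans
ioctx.close()
cluster.shutdown()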
I would double-check the network configuration on the new node, including
hosts files and DNS names. Do all the host names resolve to
the correct IP addresses from all hosts?
"... 192.168.112.231:6800/49908 >> 192.168.113.42:0/599324131 ..."
Looks like the communication between subnets is a
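A quick name-resolution sanity check along those lines (the host names are
placeholders for the actual cluster nodes):

import socket

for host in ('ceph-node1', 'ceph-node2', 'new-osd-node'):
    try:
        print(host, '->', socket.gethostbyname(host))
    except socket.gaierror as err:
        print(host, 'does not resolve:', err)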
Hi Taylor,
I have checked the DNS names and all hosts resolve to the correct IPs. The MTU
size is 1500 and the switch-level configuration is done. There is no firewall
or SELinux running currently.
Also, we would like to know about the queries below, which are already in the thread.
Regards
Prabu
On 2015-11-02 10:19 pm, gjprabu wrote:
Hi Taylor,
I have checked the DNS names and all hosts resolve to the correct IPs. The MTU
size is 1500 and the switch-level configuration is done. There is no firewall
or SELinux running currently.
Also, we would like to know about the queries below, which are already in the thread.
R
Hi,
So there are no hive* objects in RGW now; could you please check whether any
still exist from the S3 perspective? That is, check the object listing of bucket
'olla' via the S3 API (boto or s3cmd could do the job).
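A minimal boto sketch of that check (endpoint, credentials and the 'hive'
prefix are placeholders):

import boto
import boto.s3.connection

# List the keys of bucket 'olla' through the RGW S3 API.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.get_bucket('olla')
for key in bucket.list(prefix='hive'):
    print(key.name)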
I've run into a similar issue with Hadoop over SwiftFS before, where some OSDs
were