Hi Sage,
I really would like to test the tiering. Is there any detailed
documentation about it and how it works?
Greets,
Stefan
On 18.03.2014 05:45, Sage Weil wrote:
> Hi everyone,
>
> It's taken longer than expected, but the tests for v0.78 are calming down
> and it looks like we'll be able
I'm ready to test the tiering.
2014-03-18 11:07 GMT+04:00 Stefan Priebe - Profihost AG <
s.pri...@profihost.ag>:
> Hi Sage,
>
> I really would like to test the tiering. Is there any detailed
> documentation about it and how it works?
>
> Greets,
> Stefan
>
> On 18.03.2014 05:45, Sage Wei
Hi Stefan,
http://ceph.com/docs/master/dev/cache-pool/
- Original Message -
From: "Stefan Priebe - Profihost AG"
To: "Sage Weil" , ceph-de...@vger.kernel.org
Cc: ceph-us...@ceph.com
Sent: Tuesday, 18 March 2014 08:07:19
Subject: Re: [ceph-users] firefly timing
Hi Sage,
I really woul
On Tue, Mar 18, 2014 at 1:22 AM, Ashraful Arefeen
wrote:
> Hi,
> I want to use ceph for testing purposes. While setting the system up I have
> faced some problems with keyrings. Whenever I ran this command from my admin
> node (ceph-deploy gatherkeys node01) I got these warnings.
>
> [ceph_deploy.gat
If you have the logs from the time when "something happened between
the MDS and the client" then please send them along. The bug where
SessionMap write failures were ignored is just a theory based on the
information available -- errors from before the point where replay
started to fail could give
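In case it helps to capture that period next time, turning up MDS logging looks
roughly like this; the levels and the mds id are just examples:

# in ceph.conf on the MDS host, then restart ceph-mds
[mds]
  debug mds = 20
  debug ms = 1

# or at runtime, without a restart
ceph tell mds.0 injectargs '--debug-mds 20 --debug-ms 1'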
On 03/18/2014 02:07 AM, Stefan Priebe - Profihost AG wrote:
Hi Sage,
I really would like to test the tiering. Is there any detailed
documentation about it and how it works?
Just for a simple test, you can start out with something approximately like:
# Create the pools (might want to do someth
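A rough sketch of how that example usually continues, assuming a base pool
called 'base' and a cache pool called 'cache' (names and PG counts are only
illustrative):

# create a base pool and a cache pool
ceph osd pool create base 128
ceph osd pool create cache 128
# attach the cache pool to the base pool
ceph osd tier add base cache
# writeback mode: writes land in the cache and are flushed to base later
ceph osd tier cache-mode cache writeback
# direct client traffic for the base pool to the cache pool
ceph osd tier set-overlay base cache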
Hi All,
I'm trying to troubleshoot a strange issue with my Ceph cluster.
We're running Ceph version 0.72.2.
All nodes are Dell R515s with a 6-core AMD CPU, 32 GB RAM, 12 x 3 TB NearlineSAS
drives, and 2 x 100 GB Intel DC S3700 SSDs for journals.
All pools have a replica count of 2 or better, i.e. metadata repl
Hi everyone,
I am trying to integrate OpenStack Keystone with radosgw using the doc:
http://ceph.com/docs/master/radosgw/config/#integrating-with-openstack-keystone
I have made all the necessary changes and was successfully able to use
the swift client to connect and use the Ceph Object Gateway v
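For what it's worth, the relevant ceph.conf section after following that doc
ends up looking roughly like this; all values below are placeholders for my
setup:

[client.radosgw.gateway]
  rgw keystone url = http://keystone-host:35357
  rgw keystone admin token = {admin token}
  rgw keystone accepted roles = Member, admin
  rgw keystone token cache size = 500
  rgw keystone revocation interval = 600
  rgw s3 auth use keystone = true
  nss db path = /var/ceph/nss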
The cluster is not functioning without mon servers, and I've not the
technical ability to fix it.
So, in the absence of a fix in its current state, how can I wipe all
mon stuff and start again? It's a test system, with no data on the
cluster. I'd just like something working again, what's the be
On Tue, Mar 18, 2014 at 8:55 AM, Jonathan Gowar wrote:
> The cluster is not functioning without mon servers, and I've not the
> technical ability to fix it.
>
> So, in the absence of a fix in its current state, how can I wipe all
> mon stuff and start again? It's a test system, with no data on t
On Tue, 2014-03-18 at 09:14 -0400, Alfredo Deza wrote:
> With ceph-deploy you would do the following (keep in mind this gets
> rid of all data as well):
>
> ceph-deploy purge {nodes}
> ceph-deploy purgedata {nodes}
Awesome! Nice new clean cluster, with all the right bits :)
Thanks for the assi
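For anyone hitting this later, the full wipe-and-rebuild with ceph-deploy was
roughly the following; hostnames are placeholders, and this destroys all data:

ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys
# then rebuild from scratch
ceph-deploy new node1 node2 node3
ceph-deploy install node1 node2 node3
ceph-deploy mon create-initial
ceph-deploy gatherkeys node1
# plus ceph-deploy osd prepare/activate for each OSD disk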
On Tue, 18 Mar 2014, Stefan Priebe - Profihost AG wrote:
> Hi Sage,
>
> I really would like to test the tiering. Is there any detailed
> documentation about it and how it works?
Great! Here is a quick synopsis on how to set it up:
http://ceph.com/docs/master/dev/cache-pool/
sage
>
> On 18.03.2014 at 17:06, Sage Weil wrote:
>
>> On Tue, 18 Mar 2014, Stefan Priebe - Profihost AG wrote:
>> Hi Sage,
>>
>> I really would like to test the tiering. Is there any detailed
>> documentation about it and how it works?
>
> Great! Here is a quick synopsis on how to set it up:
>
>
Is this statement in the documentation still valid: "Stale data is
expired from the cache pools based on some as-yet undetermined
policy." As that sounds a bit scary.
- Milosz
On Tue, Mar 18, 2014 at 12:06 PM, Sage Weil wrote:
> On Tue, 18 Mar 2014, Stefan Priebe - Profihost AG wrote:
>> Hi Sage
On Tue, 18 Mar 2014, Milosz Tanski wrote:
> Is this statement in the documentation still valid: "Stale data is
> expired from the cache pools based on some as-yet undetermined
> policy." As that sounds a bit scary.
I'll update the docs :). The policy is pretty simple but not described
anywhere yet
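The knobs involved are roughly the following per-pool settings; the pool name
and values are only examples:

# track recent reads/writes with a bloom-filter HitSet
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache hit_set_count 1
ceph osd pool set cache hit_set_period 3600
# absolute size limits that trigger flushing/eviction
ceph osd pool set cache target_max_bytes 1000000000000
ceph osd pool set cache target_max_objects 1000000
# start flushing dirty objects at 40% full, evicting clean ones at 80%
ceph osd pool set cache cache_target_dirty_ratio 0.4
ceph osd pool set cache cache_target_full_ratio 0.8
# don't flush/evict objects younger than these ages (seconds)
ceph osd pool set cache cache_min_flush_age 600
ceph osd pool set cache cache_min_evict_age 1800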
Hi Luke,
(copying list back in)
You should stop all MDS services before attempting to use
--reset-journal (but make sure mons and OSDs are running). The status
of the mds map shouldn't make a difference.
John
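In practice that works out to something like the following; the mds id and the
rank are placeholders, and the exact invocation may differ by version:

# on every MDS host
service ceph stop mds
# then, with mons and OSDs still running, reset the journal for rank 0
ceph-mds -i a --reset-journal 0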
On Tue, Mar 18, 2014 at 5:23 PM, Luke Jing Yuan wrote:
> Hi John,
>
> I noticed that
I am a novice ceph user creating a simple 4 OSD default cluster (initially)
and experimenting with RADOS BENCH to understand basic HDD (OSD)
performance. Each iteration of rados bench -p data I want the cluster OSDs
in their initial state, i.e. 0 objects. I assumed the easiest way was to remove
and re
Hi Matt,
This is expected behaviour: pool IDs are not reused.
Cheers,
John
On Tue, Mar 18, 2014 at 6:53 PM, wrote:
>
> I am a novice ceph user creating a simple 4 OSD default cluster (initially)
> and experimenting with RADOS BENCH to understand basic HDD (OSD)
> performance. Each iteration o
What you are seeing is expected behavior. Pool numbers do not get reused; they
increment up. Pool names can be reused once they are deleted. One note,
though: if you delete and recreate the data pool and want to use cephfs,
you'll need to run 'ceph mds newfs
--yes-i-really-mean-it' before
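Roughly, that sequence looks like the following; the pool name and PG count are
only examples, and the IDs for newfs come from 'ceph osd lspools':

ceph osd pool delete data data --yes-i-really-really-mean-it
ceph osd pool create data 128
# point cephfs at the new pools by ID (metadata pool ID first)
ceph mds newfs <metadata-pool-id> <data-pool-id> --yes-i-really-mean-it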
Belay that, I misread your mail and thought you were talking about the
counter used to assign IDs to new pools rather than the pool count
reported from the PG map.
John
On Tue, Mar 18, 2014 at 7:12 PM, John Spray wrote:
> Hi Matt,
>
> This is expected behaviour: pool IDs are not reused.
>
> Chee
On Tue, 18 Mar 2014, John Spray wrote:
> Hi Matt,
>
> This is expected behaviour: pool IDs are not reused.
The IDs go up, but I think the 'count' shown there should not.. i.e.
num_pools != max_pool_id. So probably a subtle bug, I expect in the
print_summary or similar method in PGMonitor.cc?
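For reference, the two numbers can be compared on a live cluster with something
like:

# pool count as reported in the pgmap summary
ceph -s | grep pgmap
# actual pools and their IDs
ceph osd lspools
ceph osd dump | grep '^pool'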
On Tue, Mar 18, 2014 at 12:20 PM, Sage Weil wrote:
> On Tue, 18 Mar 2014, John Spray wrote:
>> Hi Matt,
>>
>> This is expected behaviour: pool IDs are not reused.
>
> The IDs go up, but I think the 'count' shown there should not.. i.e.
> num_pools != max_pool_id. So probably a subtle bug, I expec
On Tue, 18 Mar 2014, Sage Weil wrote:
> On Tue, 18 Mar 2014, Milosz Tanski wrote:
> > Is this statement in the documentation still valid: "Stale data is
> > expired from the cache pools based on some as-yet undetermined
> > policy." As that sounds a bit scary.
>
> I'll update the docs :). The pol
Hello Everyone
I am looking forward to testing the new features of 0.78. It would be nice if
erasure coding and tiering implementation notes were available in the Ceph
documentation. The Ceph documentation is in good shape and it's always nice to follow.
A humble request to add erasure coding and tiering in docu
I think it's good now, explicit (and detailed).
On Tue, Mar 18, 2014 at 4:12 PM, Sage Weil wrote:
> On Tue, 18 Mar 2014, Sage Weil wrote:
>> On Tue, 18 Mar 2014, Milosz Tanski wrote:
>> > Is this statement in the documentation still valid: "Stale data is
>> > expired from the cache pools based on
For the record, I have one bucket in my slave zone that caught up to the
master zone. I stopped adding new data to my first bucket, and
replication stopped. I started tickling the bucket by uploading and
deleting a 0 byte file every 5 minutes. Now the slave has all of the
files in that bucke
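The tickle itself is just a tiny cron job, roughly something like this every
5 minutes; the bucket name and client are placeholders, and any S3 or Swift
client should do:

touch /tmp/tickle
s3cmd put /tmp/tickle s3://mybucket/tickle
s3cmd del s3://mybucket/tickle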
I recall hearing that RGW GC waits 2 hours before garbage collecting
deleted chunks.
Take a look at https://ceph.com/docs/master/radosgw/config-ref/, the rgw
gc * settings. rgw gc obj min wait is 2 hours.
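For reference, those knobs go in the rgw section of ceph.conf; the values below
are the defaults as far as I recall, so treat them as approximate:

[client.radosgw.gateway]
  rgw gc obj min wait = 7200        # seconds before a deleted object is eligible for GC
  rgw gc processor period = 3600    # how often a GC cycle starts
  rgw gc processor max time = 3600  # max length of a single GC cycle
  rgw gc max objs = 32              # number of GC shards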
*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centralde
Hi John and all,
On the matter of the stuck --reset-journal, we had enabled debugging and saw the
following:
2014-03-19 10:02:14.205646 7fd545180780 0 ceph version 0.72.2
(a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mds, pid 24197
2014-03-19 10:02:14.207653 7fd545180780 1 -- 10.4.1