Hi,
I have a theoretical question about networking in Ceph.
If I have two networks (a public and a cluster network) and one link in the public
network is broken (the cluster network is fine), what will I see in my cluster?
How does Ceph behave in this situation?
And how does Ceph behave if a link to the cluster network is broken?
Hi all,
After adding the NSS and the Keystone admin URL parameters in ceph.conf and
creating the OpenSSL certificates, all is working well.
If I had followed the doc and proceeded by copy/paste, I wouldn't have
encountered any problems.
As all is working well without this set of parameters us
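For reference, a minimal sketch of the kind of radosgw/Keystone settings being described in ceph.conf; the section name and values below are placeholders, and the exact option names vary between releases:
```
[client.radosgw.gateway]
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = <admin-token>
    nss db path = /var/lib/ceph/nss
```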
Hi,
still designing and deciding, we asked ourselves: how does the data
travel to and from an OSD?
E.g. I have my file server with an RBD mounted and a client workstation
writes/reads to/from a share on that RBD.
Is the data going directly to an OSD (node), or is it e.g. "travelling"
through the monit
On 05/07/2015 10:28 AM, Götz Reinicke - IT Koordinator wrote:
> Hi,
>
> still designing and deciding, we asked ourselves: how does the data
> travel to and from an OSD?
>
> E.g. I have my file server with an RBD mounted and a client workstation
> writes/reads to/from a share on that RBD.
>
> Is the
Hi,
I can't remember on which drive I installed which OSD journal :-||
Is there any command to show this?
Thanks,
regards
Hi all,
The description at
http://ceph.com/docs/master/dev/rbd-diff/
looks a bit wrong:
- u8: ‘s’
- u64: (ending) image size
I suppose that instead of u64 something like le64 should be used, shouldn't it?
From this description it is not clear which byte order I should use.
On 07/05/15 20:21, ghislain.cheval...@orange.com wrote:
Hi all,
After adding the NSS and the Keystone admin URL parameters in ceph.conf and
creating the OpenSSL certificates, all is working well.
If I had followed the doc and proceeded by copy/paste, I wouldn't have
encountered any problems.
Hi,
Inside your mounted OSD there is a symlink - journal - pointing to the file
or disk/partition used for it.
Cheers,
Martin
On Thu, May 7, 2015 at 11:06 AM, Patrik Plank wrote:
> Hi,
>
>
> I can't remember on which drive I installed which OSD journal :-||
> Is there any command to show this?
>
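A minimal sketch of how to check this for every OSD at once, assuming the default /var/lib/ceph/osd/ceph-<id> mount points:
```
# print the journal target for each OSD mounted on this host
for osd in /var/lib/ceph/osd/ceph-*; do
    echo "$osd -> $(readlink -f "$osd/journal")"
done
```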
Hi,
Patrik Plank wrote:
> I can't remember on which drive I installed which OSD journal :-||
> Is there any command to show this?
It's probably not the answer you were hoping for, but why not use a simple:
ls -l /var/lib/ceph/osd/ceph-$id/journal
?
--
François Lafont
On 05/06/15 19:51, Lionel Bouton wrote:
> During normal operation Btrfs OSD volumes continue to behave in the same
> way XFS ones do on the same system (sometimes faster/sometimes slower).
> What is really slow, though, is the OSD process startup. I've yet to make
> serious tests (unmounting the file
On 06/05/2015 16:58, Scottix wrote:
As a point to
* someone accidentally removed a thing, and now they need a thing back
MooseFS has an interesting feature that I thought would be
good for CephFS and maybe others.
Basically a timed trash bin:
"Deleted files are retained for a configur
Hi,
In the Cache Tier parameters, there is nothing to tell the cache to flush dirty
objects to cold storage when the cache is under-utilized (as long as you're
under the "cache_target_dirty_ratio", it looks like dirty objects can be
kept in the cache for years).
That is to say that the flush
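For reference, a sketch of the flush-related pool settings that do exist (the pool name "hot-cache" is a placeholder); none of them triggers flushing just because the cache is idle:
```
ceph osd pool set hot-cache cache_target_dirty_ratio 0.4   # flush dirty objects above 40% dirty
ceph osd pool set hot-cache cache_target_full_ratio 0.8    # evict clean objects above 80% full
ceph osd pool set hot-cache cache_min_flush_age 600        # don't flush objects younger than 10 min
```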
Hi,
On 05/07/2015 12:04 PM, Lionel Bouton wrote:
On 05/06/15 19:51, Lionel Bouton wrote:
*snipsnap*
We've seen progress on this front. Unfortunately for us, we had two power
outages and they seem to have damaged the disk controller of the system
we are testing Btrfs on: we just had a system crash
Indeed, it is not necessary to have any OSD entries in the ceph.conf file,
but what happens in the event of a disk failure that results in a change of
the mount device?
From what I can see, OSDs are mounted from entries in /etc/mtab
(I am on CentOS 6.6)
like this:
/dev/sdj1 /var/lib/ceph/osd/c
On 05/07/2015 12:10 PM, John Spray wrote:
> On 06/05/2015 16:58, Scottix wrote:
>> As a point to
>> * someone accidentally removed a thing, and now they need a thing back
>>
>> MooseFS has an interesting feature that I thought would be
>> good for CephFS and maybe others.
>>
>> Basically
Hi,
On 05/07/15 12:30, Burkhard Linke wrote:
> [...]
> Part of the OSD boot up process is also the handling of existing
> snapshots and journal replay. I've also had several btrfs based OSDs
> that took up to 20-30 minutes to start, especially after a crash.
> During journal replay the OSD daemon
Hi,
>> If I have two networks (a public and a cluster network) and one link in the public
>> network is broken (the cluster network is fine), what will I see in my cluster?
See
http://ceph.com/docs/master/rados/configuration/network-config-ref/
Only the OSDs use the private (cluster) network among themselves.
So if the public ne
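A minimal ceph.conf sketch of the two networks that page describes (the subnets are placeholders):
```
[global]
    public network  = 192.168.0.0/24   # monitor and client traffic
    cluster network = 10.0.0.0/24      # OSD-to-OSD replication and recovery traffic
```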
This page explains what happens quite well:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/#flapping-osds
"We recommend using both a public (front-end) network and a cluster (back-end)
network so that you can better meet the capacity requirements of object
replication. An
I have not used ceph-deploy, but it should use ceph-disk for the OSD
preparation. ceph-disk creates GPT partitions with specific partition type
UUIDs for data and journals. When udev or init starts the OSD, it mounts it
to a temp location, reads the whoami file and the journal, then remounts it
in the c
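So the device name in /etc/mtab can change without breaking anything. To see the current mapping of partitions to OSD data and journals, you could for example run:
```
sudo ceph-disk list   # lists disks/partitions and the OSD data or journal they hold
```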
Hi,
Just wanted to mention this again, in case it went unnoticed.
The problem is that I need to get the same ID for a pool as it had before, or a
way to tell Ceph where to find the original images for the VMs. I have them
available.
T
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> GODIN Vincent (SILCA)
> Sent: 07 May 2015 11:13
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] About Ceph Cache Tier parameters
>
> Hi,
>
> In Cache Tier parameters, there is nothing
You are correct -- it is little endian like the other values. I'll open a
ticket to correct the document.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: "Ultral"
To: ceph-us...@ceph.com
Sent: Thursday, May 7, 2015 5:23:12 AM
Su
Hi all,
Something strange occurred.
I have Ceph version 0.87 and a 2048 GB format 1 image. I decided to make
incremental backups between clusters.
I've made the initial copy:
time bbcp -x 7M -P 3600 -w 32M -s 6 -Z 5030:5035 -N io "rbd
export-diff --cluster cluster1 --pool RBD-01 --image
CEPH_006__0
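For context, a sketch of the usual incremental pattern with a plain ssh pipe instead of bbcp; the image and snapshot names and the destination host below are placeholders:
```
# take a new snapshot of the image on the source cluster
rbd --cluster cluster1 -p RBD-01 snap create myimage@backup-new
# send only the changes since the previous backup snapshot to the other cluster
rbd --cluster cluster1 -p RBD-01 export-diff --from-snap backup-prev myimage@backup-new - \
  | ssh backuphost rbd -p RBD-01 import-diff - myimage
```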
Hi,
after I installed calamari, Ceph shows me the following error when I
change/reinstall/add osd.0:
Traceback (most recent call last):
  File "/usr/bin/calamari-crush-location", line 86, in
    sys.exit(main())
  File "/usr/bin/calamari-crush-location", line 83, in main
    print get_o
Hi all,
I've found what I think is a packaging error in Hammer. I've tried
registering for the tracker.ceph.com site but my confirmation email
has got lost somewhere!
/usr/bin/ceph is installed by the ceph-common package.
```
dpkg -S /usr/bin/ceph
ceph-common: /usr/bin/ceph
```
It relies on cep
Hi,
https://github.com/ceph/ceph/pull/4517 is the fix for
http://tracker.ceph.com/issues/11388
Cheers
On 07/05/2015 20:28, Andy Allan wrote:
> Hi all,
>
> I've found what I think is a packaging error in Hammer. I've tried
> registering for the tracker.ceph.com site but my confirmation email
>
Hi Loic,
Sorry for the noise! I'd looked when I first ran into it and didn't
find any reports or PRs, I should have checked again today.
Thanks,
Andy
On 7 May 2015 at 19:41, Loic Dachary wrote:
> Hi,
>
> https://github.com/ceph/ceph/pull/4517 is the fix for
> http://tracker.ceph.com/issues/113
Hi,
when issuing the rbd unmap command while there is no network connection to the
mons and OSDs, the command hangs. Isn't there an option to force the unmap even
in this situation?
Att.
Vandeir.
On Thu, May 7, 2015 at 10:20 PM, Vandeir Eduardo
wrote:
> Hi,
>
> when issuing the rbd unmap command while there is no network connection to the
> mons and OSDs, the command hangs. Isn't there an option to force the unmap even
> in this situation?
No, but you can Ctrl-C the unmap command and that should do i
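In other words, something like this (the device name is just an example):
```
rbd unmap /dev/rbd0   # if it blocks because the mons/OSDs are unreachable, interrupt it with Ctrl-C
```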
Hi,
I am setting up a local instance of a Ceph cluster with the latest source from
GitHub. The build succeeded and the installation was successful, but I could
not start the monitor.
The "ceph start" command returns immediately and does not output anything:
$ sudo /etc/init.d/ceph start mon.monitor1
$
It does sound contradictory: why would read operations in CephFS result
in writes to disk? But they do. I upgraded to Hammer last week and I am
still seeing this.
The setup is as follows:
an EC pool on HDDs for data
a replicated pool on SSDs for the data cache
a replicated pool on SSDs for metadata
Now
On 05/07/2015 12:53 PM, Andy Allan wrote:
> Hi Loic,
>
> Sorry for the noise! I'd looked when I first ran into it and didn't
> find any reports or PRs, I should have checked again today.
>
> Thanks,
> Andy
That's totally fine. If you want, you can review that PR and give a
thumbs up or down comm
I have another thread going on about truncation of objects, and I believe
this is a separate but equally bad issue in civetweb/radosgw. My cluster
is completely healthy.
I have one (possibly more) object stored in the Ceph RADOS Gateway that
will return a different size every time I try to download
- Original Message -
> From: "Sean"
> To: ceph-users@lists.ceph.com
> Sent: Thursday, May 7, 2015 3:35:14 PM
> Subject: [ceph-users] RGW - Can't download complete object
>
> I have another thread going on about truncation of objects and I believe
> this is a separate but equally bad iss
Srikanth,
Try if this helps:
sudo initctl list | grep ceph   (should list all Ceph daemons)
sudo start ceph-mon-all         (to start all Ceph monitors)
Thanks
-Krishna
> On May 7, 2015, at 1:35 PM, Srikanth Madugundi
> wrote:
>
> Hi,
>
> I am setting up a local instance of ceph cluster wi
You may also be able to use `ceph-disk list`.
On Thu, May 7, 2015 at 3:56 AM, Francois Lafont wrote:
> Hi,
>
> Patrik Plank wrote:
>
> > I can't remember on which drive I installed which OSD journal :-||
> > Is there any command to show this?
>
> It's probably not the answer you hope, but why don't
On Thu, May 7, 2015 at 5:20 AM, Wido den Hollander wrote:
>
> Aren't snapshots something that should protect you against removal? IF
> snapshots work properly in CephFS you could create a snapshot every hour.
>
>
Unless the file is created and removed between snapshots, then the Recycle
Bin featu
I tried "echo 3 > /proc/sys/vm/drop_caches" and dentry_pinned_count dropped.
Thanks for your help.
On Thu, Apr 30, 2015 at 11:34 PM Yan, Zheng wrote:
> On Thu, Apr 30, 2015 at 4:37 PM, Dexter Xiong wrote:
> > Hi,
> > I got these message when I remount:
> > 2015-04-30 15:47:58.199837 7f9ad3
This is pretty weird to me. Normally those PGs should be reported as
active, or stale, or something else in addition to remapped. Sam
suggests that they're probably stuck activating for some reason (which
is a state in new enough code, but not all versions), but I can't tell
or imagine why from the
Sam? This looks to be the HashIndex::SUBDIR_ATTR, but I don't know
exactly what it's for nor why it would be getting constantly created
and removed on a pure read workload...
On Thu, May 7, 2015 at 2:55 PM, Erik Logtenberg wrote:
> It does sound contradictory: why would read operations in cephfs
Hi,
I built and installed Ceph from source (the wip-newstore branch) and could not
start the OSD with "newstore" as the osd objectstore.
$ sudo /usr/bin/ceph-osd -i 0 --pid-file /var/run/ceph/osd.0.pid -c
/etc/ceph/ceph.conf --cluster ceph -f
2015-05-08 05:49:16.130073 7f286be01880 -1 unable to create object
I think you need to add the following:
enable experimental unrecoverable data corrupting features = newstore rocksdb
Thanks & Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Srikanth Madugundi
Sent: Thursday, May 07, 2015 10:56 PM
To: ceph-us...@ceph.c
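A hedged sketch of how that line might sit in ceph.conf next to the objectstore setting being tested; placing it under [global] is an assumption:
```
[global]
    enable experimental unrecoverable data corrupting features = newstore rocksdb
    osd objectstore = newstore
```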
Thanks for the details, Somnath.
So it definitely sounds like 128 PGs per pool is way too many? I lowered
ours to 16 on a new deployment and the warning is gone. I'm not sure if this
number is sufficient, though...
On Wed, May 6, 2015 at 4:10 PM, Somnath Roy wrote:
> Just checking, are you aware of
Nope, 16 seems far too few for good performance.
How many OSDs do you have? And how many pools are you planning to create?
Thanks & Regards
Somnath
From: Chris Armstrong [mailto:carmstr...@engineyard.com]
Sent: Thursday, May 07, 2015 11:34 PM
To: Somnath Roy
Cc: Stuart Longland; ceph-users@lists.ceph.
It brings some comfort to know you found it weird too.
In the end, we noted that the tunables were in ‘legacy’ mode - a holdover from
prior experimentation, and a possible source of how we ended up with the
remapped PGs in the first place. Setting that back to ‘firefly’ cleared up the
remainin
Sorry, I didn't read through it all. It seems you have 6 OSDs, so I would say 128
PGs per pool is not bad!
But if you keep on adding pools, you need to lower this number; generally ~64
PGs per pool should achieve good parallelism with a low number of OSDs. If you
grow your cluster, create pools
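For reference, a sketch of the usual rule of thumb from the placement group docs, using this thread's numbers and assuming the default replica size of 3:
```
# total PGs ~= (OSDs * 100) / replica size, rounded up to a power of two,
# then divided across the pools you plan to create
echo $(( 6 * 100 / 3 ))   # 6 OSDs, size 3 -> 200, round up to 256 PGs in total
```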