Hi everyone.
I had a problem with write speed: the first few dd writes to my root disk were slow
(my root-disk RBD is cloned from another RBD image). They ran much more slowly
at first, but ran faster afterwards.
(I use writeback cache on the RBD client side and a physical RAID controller.)
#dd if=/dev/zero of=bigfile01 bs=1M count=500
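A minimal check, assuming the slowdown comes from copy-on-write against the parent image: the first write to each object has to pull data from the parent, so flattening the clone (which copies the parent data into the child, at the cost of extra space) should make first writes behave like later ones. The pool and image names below are placeholders:
#rbd info rbd/rootdisk-image        (look for a "parent:" line)
#rbd flatten rbd/rootdisk-image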
Hi everyone,
I have been using a cache tier on a data pool.
After a long time, many RBD images are no longer displayed by "rbd -p
data ls",
although those images still show up through the "rbd info" and "rados ls" commands.
rbd -p data info volume-008ae4f7-3464-40c0-80b0-51140d8b95a8
rbd image 'volu
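A sketch of how one might check whether the image index itself is intact, assuming format 2 images, whose name-to-id mapping lives in the pool's rbd_directory object:
#rados -p data listomapkeys rbd_directory
#rados -p data listomapvals rbd_directory
If the missing image names do not appear among those omap keys, "rbd ls" would have nothing to list for them even though the image headers and data objects still exist.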
I understand. Thank you, Gregory Farnum, for your explanation.
--
Tuantaba
Ha Noi-VietNam
On 07/04/2015 00:54, Gregory Farnum wrote:
On Mon, Apr 6, 2015 at 2:21 AM, Ta Ba Tuan wrote:
Hi all,
I had set up a cache pool for my pool,
but I had some problems with the cache pool running, so I
Hi all,
I had set up a cache pool for my pool,
but I had some problems with the cache pool running, so I removed the cache
pool from my Ceph cluster.
The data pool currently does not use a cache pool, but the "lfor" setting still
appears.
"lfor" seems to be a setting, not a flag.
pool 3 'data_po
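For reference, a way to re-check that pool line after removing the tier, assuming pool id 3 as in the snippet above:
#ceph osd dump | grep "^pool 3 "
The lfor field appears to record a "last force op resend" epoch kept in the pool's history, so it does not necessarily disappear just because the cache tier was removed.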
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ta Ba Tuan
Sent: Friday, November 07, 2014 2:49 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] How to detect degraded objects
Hi everyone,
111/57706299 objects degraded (0.001%)
14918 active+c
Hi everyone,
111/57706299 objects degraded (0.001%)
14918 active+clean
1 active+clean+scrubbing+deep
52 active+recovery_wait+degraded
2 active+recovering+degraded
Ceph's state: 111/57706299 objects degraded.
Some missing object
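A sketch of commands one might use to locate the degraded placement groups and objects; the pg id is only an example:
#ceph health detail | grep degraded
#ceph pg dump | grep degraded
#ceph pg 6.9d8 query        (the "recovery_state" section shows what the pg is waiting for)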
David Zafman wrote:
Can you upload the entire log file?
David
On Nov 4, 2014, at 1:03 AM, Ta Ba Tuan <tua...@vccloud.vn> wrote:
Hi Sam,
I am resending the logs with the debug options enabled: http://123.30.41.138/ceph-osd.21.log
(Sorry about my spam :D)
I see many missing objects :|
2014-11-04 15:2
Can you reproduce with
debug osd = 20
debug filestore = 20
debug ms = 1
In the [osd] section of that osd's ceph.conf?
-Sam
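For reference, a sketch of how those options would sit in ceph.conf on the affected host (restart the OSD afterwards so they take effect):
[osd]
debug osd = 20
debug filestore = 20
debug ms = 1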
On Sun, Nov 2, 2014 at 9:10 PM, Ta Ba Tuan wrote:
Hi Sage, Samuel & All,
I upgraded to Giant, but those errors still appear :|
I'm trying to delete the related objects/volum
Hi Samuel and Sage,
I will upgrade to Giant soon. Thank you so much.
--
Tuan
HaNoi-VietNam
On 11/01/2014 01:10 AM, Samuel Just wrote:
You should start by upgrading to Giant; many, many bug fixes went in
between 0.86 and Giant.
-Sam
On Fri, Oct 31, 2014 at 8:54 AM, Ta Ba Tuan wrote:
Hi Sage
to Giant to resolve this bug?
Thank you,
--
Tuan
HaNoi-VietNam
On 10/30/2014 10:02 PM, Sage Weil wrote:
On Thu, 30 Oct 2014, Ta Ba Tuan wrote:
Hi Everyone,
I upgraded Ceph to Giant by installing the *.tar.gz package, but some
errors related to object trimming or snap trimming appeared:
I think there are so
Hi Everyone,
I upgraded Ceph to Giant by installing the *.tar.gz package, but some
errors related to object trimming or snap trimming appeared:
I think there are some missing objects that are not being recovered.
ceph version 0.86-106-g6f8524e (6f8524ef7673ab4448de2e0ff76638deaf03cae8)
1: /usr/bin/ceph-osd(
"num_objects_unfound": 0,
"num_objects_dirty": 1092,
"num_whiteouts": 0,
"num_read": 4820626,
"num_read_kb": 59073045,
"num_write": 12748709,
lete:true),
before_progress: ObjectRecoveryProgress(first, data_recovered_to:0, data_complete:false, omap_recovered_to:, omap_complete:false))])
I think there are some corrupted objects. What must I do? Please help!
Thanks!
--
Tuan
HaNoi-VietNam
On 10/25/2014 03:01 PM, Ta Ba Tuan wrote:
I am sending some
252-102839/53 luod=0'0 crt=102808'38419
active] enter Started/ReplicaActive/RepNotRecovering
Thanks!
On 10/25/2014 11:26 AM, Ta Ba Tuan wrote:
Hi Craig, thanks for replying.
When I started that OSD, the Ceph log from "ceph -w" warned that pgs 7.9d8,
23.596, 23.9c6, 23.63 can't rec
bad snapshots created on
older versions of Ceph.
Were any of the snapshots you're removing created on older versions
of Ceph? If they were all created on Firefly, then you should open a
new tracker issue, and try to get some help on IRC or the developers
mailing list.
On Thu, Oct
Dear everyone,
I can't start osd.21 (log file attached).
Some pgs can't be repaired. I'm using replica 3 for my data pool.
It seems some objects in those pgs are corrupted.
I tried to delete the data related to those objects, but osd.21 still
does not start.
I then removed osd.21, but other OSDs (eg: osd.8
Hi everyone, I use replica 3; there are many unfound objects and Ceph is very slow.
pg 6.9d8 is active+recovery_wait+degraded+remapped, acting [22,93], 4
unfound
pg 6.766 is active+recovery_wait+degraded+remapped, acting [21,36], 1
unfound
pg 6.73f is active+recovery_wait+degraded+remapped, acting [19,84],
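A sketch of commands one might use to inspect and, as a last resort, give up on unfound objects, assuming a Firefly/Giant-era cluster; mark_unfound_lost discards data, so it is only for when the remaining copies are truly gone:
#ceph pg 6.9d8 query                       (see which OSDs the pg is still probing)
#ceph pg 6.9d8 list_missing                (list the unfound objects)
#ceph pg 6.9d8 mark_unfound_lost revert    (roll back to a previous version; newer releases also accept 'delete')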
I solved this by exporting the key with "ceph auth export..." :D
For the question above, I was using a key in the old format version.
On 06/09/2014 05:44 PM, Ta Ba Tuan wrote:
Hi all,
I am adding a new ceph-data host, but
#ceph -s -k /etc/ceph/ceph.client.admin.keyring
2014-06-09 17:39:51.686082 7fade
Hi all,
I am adding a new ceph-data host, but
#ceph -s -k /etc/ceph/ceph.client.admin.keyring
2014-06-09 17:39:51.686082 7fade4f14700 0 librados: client.admin
authentication error (1) Operation not permitted
Error connecting to cluster: PermissionError
my ceph.conf:
[global]
auth cluster requ
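A sketch of the workaround mentioned above: re-export the admin key on an existing node and copy it to the new host. The paths are the defaults and "new-host" is a placeholder:
#ceph auth export client.admin -o /etc/ceph/ceph.client.admin.keyring
#scp /etc/ceph/ceph.client.admin.keyring new-host:/etc/ceph/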
Thanks Lewis,
I removed the OSD as follows and re-added it. That solved it.
ceph osd out 26
/etc/init.d/ceph stop osd.26
ceph osd crush remove osd.26
ceph auth del osd.26
ceph osd down 26
ceph osd rm 26
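The re-add steps are not quoted above; a rough sketch of the manual procedure of that era, assuming the replacement disk is already formatted and mounted at the default path (the id, weight and host name are examples only):
ceph osd create                  (returns a free id, here assumed to be 26 again)
ceph-osd -i 26 --mkfs --mkkey
ceph auth add osd.26 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-26/keyring
ceph osd crush add osd.26 1.0 host=data-01
/etc/init.d/ceph start osd.26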
On 05/31/2014 04:16 AM, Craig Lewis wrote:
On 5/30/14 03:08 , Ta Ba Tuan wrote:
Dear all
Dear all,
I'm using Firefly. One disk failed; I replaced the failed disk and started
that OSD, but the OSD is still down.
Please help me.
Thank you
2014-05-30 17:01:56.090314 7f9387516780 -1 journal FileJournal::_open:
disabling aio for non-block journal. Use journal_force_aio to force use of aio
Dear Yang,
I plan to set nodeep-scrub nightly via crontab.
As for the "HEALTH_WARN nodeep-scrub flag(s) set" warning: I only watch
messages from the monitoring tool (e.g. Nagios), so I rewrote the Nagios
check script so that the message "HEALTH_WARN nodeep-scrub flag(s) set"
returns code 0.
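A minimal /etc/cron.d-style sketch of the schedule described above; the hours are only an example, the flag suppresses deep scrubs cluster-wide while it is set, and the two lines can be swapped if the intent is the opposite (deep scrubs allowed only at night):
0 1 * * * root ceph osd set nodeep-scrub
0 6 * * * root ceph osd unset nodeep-scrub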
16 drwxr-xr-x 445 root root 12288 Apr 18 19:17 ..
Thanks!
On 04/18/2014 06:11 PM, Ирек Фасихов wrote:
Is there any data in:
ls -lsa /var/lib/ceph/osd/ceph-82/current/14.7c8_*/
ls -lsa /var/lib/ceph/osd/ceph-26/current/14.7c8_*/
2014-04-18 14:36 GMT+04:00 Ta Ba Tuan <mailto:tua...@vccloud
Hi Ирек Фасихов
I am sending it to you :D
Thank you!
{ "state": "incomplete",
"epoch": 42880,
"up": [
82,
26],
"acting": [
82,
26],
"info": { "pgid": "14.7c8",
"last_update": "0'0",
"last_complete": "0'0",
"log_tail": "0'0",
"last_user_v
? (22,23,82)
2014-04-18 12:35 GMT+04:00 Ta Ba Tuan <mailto:tua...@vccloud.vn>>:
Thank you, Ирек Фасихов, for your reply.
I restarted the OSDs that contain the incomplete pgs, but it still fails :(
On 04/18/2014 03:16 PM, Ирек Фасихов wrote:
Ceph detects that a placement group is missing a
the first sign that your hard drive is failing.
ceph pg repair 14.a5a
ceph pg repair 14.aa8
2014-04-18 12:09 GMT+04:00 Ta Ba Tuan mailto:tua...@vccloud.vn>>:
Dear everyone,
I lost 2 OSDs, and my '.rg
Dear everyone,
I lost 2 OSDs, and my '.rgw.buckets' pool uses 2 replicas,
so there are some incomplete pgs:
cluster
health HEALTH_WARN 88 pgs backfill; 1 pgs backfilling; 89 pgs
degraded; 5 pgs incomplete;
14.aa8 39930 0 0 1457965487
ount": "1",
"rbd_default_stripe_unit": "*8388608*", #(16MB)
Thanks Wido!
--
Tuan
On 04/06/2014 04:34 AM, Wido den Hollander wrote:
On 04/05/2014 07:15 AM, Ta Ba Tuan wrote:
Hi everyone
My Ceph cluster is running; I'm planning to tune my Ce
Hi everyone
My Ceph cluster is running, and I'm planning to tune its performance.
I want to increase the object size from 4 MB to 16 MB (maybe 32 MB, ...).
Using the formula "stripe_unit" * "stripe_count" = "object_size",
I'm thinking of changing the following option:
"rbd_default_stripe_unit" fro
down 0
71 1 osd.71 down 0
Thank you!
--
TA BA TUAN
Hi James,
The problem is: why does Ceph not recommend using the device UUID in ceph.conf,
when the error above can occur?
--
TuanTaBa
On 11/26/2013 04:04 PM, James Harper wrote:
Hi all
I have 3 OSDs, named sdb, sdc, sdd.
Suppose the OSD on device /dev/sdc dies => my server then has only sdb, sdc
at the
Hi all
I have 3 OSDs, on devices named sdb, sdc, sdd.
Suppose the OSD on device /dev/sdc dies => my server then has only sdb
and sdc at the moment,
because the old /dev/sdd has been renamed to /dev/sdc.
I have the following configuration:
[osd.0]
host = data-01
devs = /dev/sdb1
[osd.1]
host = data-01
dev
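One way around the renaming problem, sketched as an assumption rather than an official recommendation: point devs at a persistent path under /dev/disk/ instead of the kernel-assigned sdX name. The UUID below is a placeholder:
[osd.1]
host = data-01
devs = /dev/disk/by-uuid/0f3a1c2d-EXAMPLE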
Please help me!
On 07/20/2013 02:11 AM, Ta Ba Tuan wrote:
Hi everyone,
I have 3 nodes (running MON and MDS)
and 6 data nodes (84 OSDs).
Each data node has the configuration:
- CPU: 24 processors, Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
- RAM: 32GB
- Disk: 14 x 4TB
(14 disks x 4TB
Hi everyone,
I have 3 nodes (running MON and MDS)
and 6 data nodes (84 OSDs).
Each data node has the configuration:
- CPU: 24 processors, Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
- RAM: 32GB
- Disk: 14 x 4TB
(14 disks x 4TB x 6 data nodes = 84 OSDs)
To optimize the Ceph cluster, I adjusted
00'0 0.00
How do I delete the pgs above, Greg?
Thank you so much, Greg.
--tuantaba
On 07/19/2013 05:01 AM, Gregory Farnum wrote:
On Thu, Jul 18, 2013 at 3:53 AM, Ta Ba Tuan wrote:
Hi all,
I have 4 stale+inactive pgs; how do I delete those pgs?
pgmap v59722: 21944 pgs: 4 stal
Hi Samuel,
Output from: ceph pg dump | grep 'stale\|creating'
0.f4f 0 0 0 0 0 0 0 stale+creating 2013-07-17 16:35:06.882419 0'0 0'0 [] [68,12] 0'0 0.00 0'0 0.00
2.f4d 0 0 0 0 0 0 0 stale+creating 2013-07-17 16:35:22.826552 0'0 0'0 [] [68,12] 0'0 0.00 0'0 0.00
0.2c
Hi all,
I have 4 stale+inactive pgs; how do I delete those pgs?
pgmap v59722: 21944 pgs: 4 stale, 12827 active+clean, 9113
active+degraded; 45689 MB data, 1006 GB used, 293 TB / 294 TB avail
I searched Google for a long time but still can't resolve it.
Please, help me!
Thank you so much.
--tuan
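A sketch of a command that lists just the stuck stale pgs, which may be easier to work from than grepping the full pg dump:
#ceph pg dump_stuck stale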
:
mon = "allow r"
osd = "allow class-read object_prefix rbd_children, allow pool templates r class-read, allow pool vms rwx"
A client mounting the file system with minimal permissions would need
caps like:
mds = "allow"
osd = "allow rw pool data"
The zombie pgs might have occurred when I removed some data pools,
but with the pgs in the stale state, I can't delete them?
I found this guide, but I don't understand it:
http://ceph.com/docs/next/dev/osd_internals/pg_removal/
Thanks!
--tuantaba
On 07/18/2013 09:22 AM, Ta Ba Tuan wrote:
I'm using
Ceph is still warning: "pgmap v57451: 22944 pgs:
4 creating, 22940 active+clean"
I don't know how to remove those pgs.
Please guide me through this error!
Thank you!
--tuantaba
TA BA TUAN
On 07/18/2013 01:16 AM, Samuel Just wrote:
What version are you running? How did you move
Hi everyone,
I converted every OSD from 2TB to 4TB disks, and when the migration completed,
the realtime Ceph log ("ceph -w")
displayed the error: "I don't have pgid 0.2c8"
After that, I ran: "ceph pg force_create_pg 0.2c8"
Ceph warning: pgmap v55175: 22944 pgs: 1 creating, 22940 active+clean, 3
stale+active
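For reference, a way to check where that pg maps before (or after) forcing creation; a sketch using the pgid from the log above:
#ceph pg map 0.2c8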
Thank Sage,
tuantaba
On 07/16/2013 09:24 PM, Sage Weil wrote:
On Tue, 16 Jul 2013, Ta Ba Tuan wrote:
Thanks Sage,
I worry about the capacity reported when mounting CephFS:
when the disks are full, will the capacity show 50% or 100% used?
100%.
sage
On 07/16/2013 11:01 AM, Sage Weil wrote:
On
McBride:
On 16/07/13 09:35, Ta Ba Tuan wrote:
Hi everyone,
The OSD capacity summary is 144TB, but when I mount CephFS on Ubuntu 14.04
it only displays 576GB. (Currently I'm using replica 3 for
the data pools.)
(using: mount -t ceph Monitor_IP:/ /ceph -o
name=admin,secret=xx)
Hi Markus,
You can limit access to a specific pool through key authentication.
For example, I have a pool 'instances' and set permissions like this:
#ceph auth get-or-create client.instances mon 'allow r' osd 'allow rwx pool=instances'
--tuantaba
TA BA TUAN
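A sketch of how the resulting key might be handed to the client and used, assuming the default keyring search path; the -o path and the rbd usage are examples only:
#ceph auth get-or-create client.instances mon 'allow r' osd 'allow rwx pool=instances' -o /etc/ceph/ceph.client.instances.keyring
#rbd --id instances -p instances ls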
Hi Wido,
The client OS I'm using is Ubuntu 14.04, 64-bit.
I have notified the dev list about this bug.
On 07/16/2013 03:44 PM, Wido den Hollander wrote:
Hi,
On 07/16/2013 10:35 AM, Ta Ba Tuan wrote:
Hi everyone,
The OSD capacity summary is 144TB, but when I mount CephFS on Ubuntu 14.04
th
Hi everyone,
The OSD capacity summary is 144TB, but when I mount CephFS on Ubuntu 14.04
it only displays 576GB. (Currently I'm using replica 3 for
the data pools.)
(using: mount -t ceph Monitor_IP:/ /ceph -o name=admin,secret=xx)
Isn't that reported capacity far too small? Please explain.
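A quick sanity check on the numbers above, assuming replica 3 across 144 TB of raw capacity:
expected usable ≈ 144 TB / 3 = 48 TB
so the 576 GB reported by the mount is roughly 80x smaller than expected, which points to a reporting problem rather than a real capacity limit.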
Thanks Sage,
I worry about the capacity reported when mounting CephFS:
when the disks are full, will the capacity show 50% or 100% used?
On 07/16/2013 11:01 AM, Sage Weil wrote:
On Tue, 16 Jul 2013, Ta Ba Tuan wrote:
Hi everyone.
I have 83 OSDs, each of 2TB (the capacity summary is
Hi everyone.
I have 83 OSDs, each of 2TB (the capacity summary is 166TB).
I'm using replica 3 for the pools ('data', 'metadata').
But when mounting the Ceph filesystem from another machine (using: mount -t ceph
Monitor_IP:/ /ceph -o name=admin,secret=xx),
the capacity summary shown is
subscribe ceph-users
Hi Joao,
Thanks for replying. I hope I can contribute my knowledge to Ceph;
to me, Ceph is very nice!!
Thank you!
--TuanTB
On 05/29/2013 10:17 PM, Joao Eduardo Luis wrote:
On 05/29/2013 05:26 AM, Ta Ba Tuan wrote:
Hi Majordomo,
I am TuanTB (full name: Tuan Ta Ba, and I come from
Hi Majordomo,
I am TuanTB (full name: Tuan Ta Ba), and I come from Vietnam.
I'm working on cloud computing.
Of course, we are using Ceph, and I'm a new Ceph member,
so I hope to join the "ceph-devel" and "ceph-users" mailing lists.
Thank you so much.
Regards!
--TuanTB