> min_size 1
> max_size 10
> step take default
> step chooseleaf firstn 0 type host
> step emit
> }
> rule metadata {
> ruleset 1
> type replicated
> min_size 1
> max_size 10
> step take default
> step chooseleaf fi
How can I force-unmap a mapped device?
By force I mean unmapping it while it is in use, like hot-unplugging an HDD cable.
It would be useful to be able to force-unmap an image from another node.
I need to be sure that an image is mounted on only one node, but sometimes I
have hung processes which still work with the image on the old node and can't
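A hedged sketch of what this looks like with newer rbd/kernel releases than the ones in this thread (the device path and client address are placeholders):

# force-unmap locally even while the device is still in use
rbd unmap -o force /dev/rbd0
# fence the old node on the cluster side so its hung processes can no longer do I/O
ceph osd blacklist add 10.11.0.73:0/123456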
I use ceph for an HA cluster.
Sometimes ceph rbd pauses its work (I/O operations stop). Sometimes it
happens when one of the OSDs responds slowly to requests. Sometimes it is my own mistake
(xfs_freeze -f on one of the OSD drives).
I have 2 storage servers with one OSD on each. These pauses can last a few min
ep-scrub.
>
> http://tracker.ceph.com/issues/6278
>
> Cheers,
> Mike Dawson
>
>
> On 9/16/2013 12:30 PM, Timofey wrote:
>> I use ceph for an HA cluster.
>> Sometimes ceph rbd pauses its work (I/O operations stop). Sometimes
>> it happens when one of the OSDs is slow
I renamed a few images while the cluster was in a degraded state. Now I can't map one
of them; it fails with the error:
rbd: add failed: (6) No such device or address
I tried renaming the failed image back to its old name, but that didn't solve the problem.
P.S. The cluster is in a degraded state again now - it is remapping data between OSDs after
marking one of the OSDs
I use format 1.
Yes, I see the images, but I can't map them.
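A hedged sketch of checks that usually narrow down an "add failed: (6)" error (the image name is a placeholder):

rbd info mypool/renamed-image    # does the image metadata still resolve after the rename?
rbd showmapped                   # is the image perhaps still mapped under its old name?
dmesg | tail                     # the kernel client normally logs the underlying reason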
> Hello Timofey,
>
> Do you still see your images with "rbd ls"?
> Which format (1 or 2) do you use?
>
>
> Laurent Barbe
>
>
> On 18/09/2013 08:54, Timofey wrote:
>> I renamed a few images wh
Does rbd read/write a full block of data (4 MB) for every read/write, or can rbd
read/write part of a block?
For example: I have a 500 MB file (a database) and need random reads/writes in
blocks of about 1-10 KB.
Will rbd read 4 MB from the HDD for every 1 KB read?
And for a write?
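The object size and striping are fixed when the image is created, so a small-block workload can be tuned at that point. A hedged sketch (names and sizes are illustrative; non-default striping needs format 2 and is not supported by every kernel client):

# 1 MB objects instead of the default 4 MB (--order 22)
rbd create mypool/db-image --size 10240 --order 20
# stripe small writes across 8 objects in 64 KB units
rbd create mypool/db-image2 --size 10240 --image-format 2 --stripe-unit 65536 --stripe-count 8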
l stripe has to be
> read or written.
>
> Cheers
>
> --
> Cédric Lemarchand
>
> > On 5 June 2014 at 22:56, Timofey Koolin wrote:
> >
> > Does rbd read/write a full block of data (4 MB) for every read/write, or can rbd
> > read/write part of a block?
> >
>
if not a bit faster (comparing latencies).
>
> This is still too early to tell, but very encouraging.
>
> Best regards,
>
> Lionel
--
Have a nice day,
Timofey.
ectory
> 2015-05-05 16:04:25.413110 7f57c518b780 0 osd.17 22428 load_pgs opened
> 160 pgs
>
> The filesystem might not have reached its balance between fragmentation
> and defragmentation rate at this time (so this may change) but mirrors
> our initial experience with Btrfs where this was the first sympto
tool compiler.
Maybe someone can explain how this rule can crash the system? Or maybe
there is a crazy mistake somewhere?
--
Have a nice day,
Timofey.
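A hedged sketch of how a suspect rule can be inspected and test-mapped offline with crushtool before the kernel client ever sees it (file names, rule number and replica count are illustrative):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt     # decompile to text for review
crushtool -c crushmap.txt -o crushmap.new     # recompile after editing
crushtool -i crushmap.new --test --show-statistics --rule 0 --num-rep 2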
2015-05-10 8:23 GMT+03:00 Georgios Dimitrakakis :
> Hi Timofey,
>
> assuming that you have more than one OSD host and that the replication
> factor is equal to (or less than) the number of hosts, why don't you just
> change the crushmap to host replication?
>
> You just need
Anyway, thanks for helping.
2015-05-10 12:44 GMT+03:00 Georgios Dimitrakakis :
> Timofey,
>
> maybe your best chance is to connect directly to the server and see what is
> going on.
> Then you can try to debug why the problem occurred. If you don't want to wait
> until tomorrow
Hey! I caught it again. It's a kernel bug. The kernel crashed when I tried to
map an rbd device with a map like the one above!
Hooray!
2015-05-11 12:11 GMT+03:00 Timofey Titovets :
> FYI and history
> Rule:
> # rules
> rule replicated_ruleset {
> ruleset 0
> type replicated
> min_size 1
>
ght direction would be much appreciated.
>
> Anyway, thanks in advance!
>
> - Peace
>
--
Have a nice day,
Timofey.
Is there a way to know the real size of an rbd image and of rbd snapshots?
rbd ls -l shows the declared size of the image, but I want to know the real (allocated) size.
--
Blog: www.rekby.ru
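A hedged sketch of the usual way to get the allocated size by summing the extents that rbd diff reports (pool, image and snapshot names are placeholders; newer releases also have "rbd du"):

# allocated size of an image
rbd diff mypool/myimage | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
# space consumed by a snapshot relative to the previous one
rbd diff --from-snap snap1 mypool/myimage@snap2 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'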
I have read about support for image format 2 in the 3.9 kernel.
Do the 3.9/3.10 kernels support rbd images in format 2 now (I need to connect to
images cloned from a snapshot)?
--
Blog: www.rekby.ru
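A hedged sketch of the clone workflow in question (pool, image and snapshot names are placeholders; mapping the clone requires layering support in the kernel client):

rbd create rbd/parent --image-format 2 --size 10240
rbd snap create rbd/parent@base
rbd snap protect rbd/parent@base
rbd clone rbd/parent@base rbd/child
rbd map rbd/child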
Is there any documentation for manually installing/modifying a ceph cluster WITHOUT
ceph-deploy?
--
Blog: www.rekby.ru
ll
> the changes in more detail. The latter should help users wanting to
> customize puppet/chef scripts etc
>
> NB mkcephfs is deprecated now. It was just a shell script and interacted
> with the same commands that ceph-deploy now does.
>
>
> On Sat, Sep 7, 2013 at 8:47 AM
in ceph 0.67
cat /opt/ceph/current/bin/ceph | head -n 1
#!/usr/bin/python
If /usr/bin/python is python 3, I get an error:
ceph auth get-or-create client.admin
File "/opt/ceph/current/bin/ceph", line 192
print '\n', s, '\n', '=' * len(s)
This is python 2 syntax.
I propose changing #!/usr/bin/pyt
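A hedged sketch of the proposed change (the path is taken from the message above; python2 must of course be installed):

# point the script at python 2 explicitly instead of the ambiguous /usr/bin/python
sed -i '1s|^#!/usr/bin/python$|#!/usr/bin/env python2|' /opt/ceph/current/bin/ceph
head -n 1 /opt/ceph/current/bin/ceph    # now: #!/usr/bin/env python2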
I use ceph 0.67.2.
When I start
ceph-osd -i 0
or
ceph-osd -i 1
it starts one process, but that process opens several TCP ports - is that normal?
netstat -nlp | grep ceph
tcp    0    0 10.11.0.73:6789    0.0.0.0:*    LISTEN
1577/ceph-mon - mon
tcp    0    0 10.11.0.73:6800
Does losing the journal mean that I lose all data from that OSD?
And must I have HA (raid-1 or similar) journal storage if I use data
without replication?
--
Blog: www.rekby.ru
ample
-backups).
2013/9/18 Laurent Barbe
> Sorry, I don't really know where the problem is.
> I hope someone from the mailing list will be able to respond. I am
> interested to understand.
>
> Laurent
>
> On 18/09/2013 18:08, Timofey Koolin wrote:
>
>>
On 09 Nov 2013, at 1:46, Gregory Farnum wrote:
> On Fri, Nov 8, 2013 at 8:49 AM, Listas wrote:
>> Hi !
>>
>> I have clusters (IMAP service) with 2 members configured with Ubuntu + Drbd
>> + Ext4. I intend to migrate to Ceph and begin to allow distributed
>> access to the data.
>>
What will happen if RBD loses all copies of a data block and I read that block?
Context:
I want to use RBD as the main storage with replication factor 1, and drbd for
replication onto non-rbd storage on the client side.
For example:
Computer1:
1. connect rbd as /dev/rbd15
2. use the rbd device as a disk for drbd
Computer2:
monitors
1 down monitor
2 old monitors.
> On 04/28/2014 02:35 PM, Timofey Koolin wrote:
>> What will happen if RBD loses all copies of a data block and I read that block?
>>
>
> The read to the object will block until a replica comes online to serve it.
>
> Remember this w
04/28/2014 05:59 PM, Timofey Koolin wrote:
>> Is there a setting to change the behavior to return a read error instead of blocking
>> the read?
>>
>> I think that is more reasonable behavior, because it is similar to a bad block on
>> an HDD: it can't be read.
>>
>> Or may b
tor
instance.
Similar incompatible changes have happened a few times. You can find them by searching for the word "protocol"
on the release notes page.
I want to reserve storage for the situation where a rolling update with incompatible
components fails.
>
>
>> On 28 Apr 2014 at 17:59, Timofey Koolin wrote:
Does this mean that I can stop updates before any potentially downtime-causing operation,
and then update to the next release without downtime?
> On Tue, Apr 29, 2014 at 01:13:25PM +0200, Wido den Hollander wrote:
>> When you go from one major release to another there is no
>> problem. Dumpling -> Emperor -> Firef
be I missing something?
If someone has experience with similar solutions, stories and links are
welcome -.-
--
Have a nice day,
Timofey.
Big thanks Nick, anyway.
Now I am catching hangs of ESXi and the proxy =_=''
/* Proxy VM: Ubuntu 15.10 / Kernel 4.3 / LIO / Ceph 0.94 / ESXi 6.0 Software iSCSI */
I've moved to an NFS-RBD proxy and am now trying to make it HA.
2015-11-07 18:59 GMT+03:00 Nick Fisk :
> Hi Timofey,
>
> You are most
Great thanks, Alex, you give me hope. I'll try SCST later in the
configuration you suggest.
2015-11-09 16:25 GMT+03:00 Alex Gorbachev :
> Hi Timofey,
>
> With Nick's, Jan's, RedHat's and others' help we have a stable and, in my
> best judgement, well perf
Alex, do you use ESXi?
If yes, do you use the iSCSI Software adapter?
If yes, do you use active/passive, fixed, or RoundRobin MPIO?
Do you tune anything on the initiator side?
If possible, can you give more details? Please.
2015-11-09 17:41 GMT+03:00 Timofey Titovets :
> Great thanks, Alex, you give me a h
672
>
> osd_crush_chooseleaf_type = 1
>
> mon_osd_full_ratio = .75
>
> mon_osd_nearfull_ratio = .65
>
> osd_backfill_full_ratio = .65
>
> mon_clock_drift_allowed = .15
>
> mon_clock_drift_warn_backoff = 30
>
> mon_osd_down_out_interval = 300
Hi list,
AFAIK, fiemap is disabled by default because it caused rbd corruption.
Has someone already tested it with recent kernels?
Thanks
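A hedged sketch of how the current value can be checked on a running OSD and toggled for an experiment (the OSD id is a placeholder):

ceph daemon osd.0 config get filestore_fiemap
# to experiment, set "filestore fiemap = true" in the [osd] section of ceph.conf
# and restart the OSD - on a test cluster only, given the corruption history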
-
Have a nice day,
Timofey.
On 30 Nov 2015 21:19, "Ilya Dryomov" wrote:
>
> On Mon, Nov 30, 2015 at 7:17 PM, Timofey Titovets
wrote:
> > Hi list,
> > Short:
> > I just want to ask why I can't do:
> > echo 129 > /sys/class/block/rbdX/queue/nr_requests
> >
> >
Big thanks, Ilya,
for the explanation.
2015-11-30 22:15 GMT+03:00 Ilya Dryomov :
> On Mon, Nov 30, 2015 at 7:47 PM, Timofey Titovets
> wrote:
>>
>> On 30 Nov 2015 21:19, "Ilya Dryomov" wrote:
>>>
>>> On Mon, Nov 30, 2015 at 7:17 PM, Timofey Titovets
bjs
3. Dedup objs (duperemove needed)
P.S. It's designed for btrfs, but that doesn't mean I can't rename it
and add hooks for ext4/xfs based stores.
Feel free to kick me if you need this stuff for another FS.
--
Have a nice day,
Timofey.
On 3 Dec 2015 8:56 p.m., "Florent B" wrote:
>
> By the way, when the system boots, the "ceph" service starts everything
> fine. So the "ceph-osd@" service is disabled => how do I restart an OSD?!
>
AFAIK, ceph now has 2 services:
1. Mount the device
2. Start the OSD
Also, a service can be disabled, but this does not m
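A hedged sketch with the per-OSD systemd units of that era (the OSD id is a placeholder):

systemctl status ceph-osd@2    # the per-OSD unit, separate from the catch-all "ceph" service
systemctl restart ceph-osd@2
systemctl enable ceph-osd@2    # so it also starts on its own at boot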
On 3 Dec 2015 9:35 p.m., "Robert LeBlanc" wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Reweighting the OSD to 0.0 or setting the osd out (but not terminating
> the process) should allow it to backfill the PGs to a new OSD. I would
> try the reweight first (and in a test environ
rting Ceph osd.2 on test3...
> Dec 03 17:48:52 test3 ceph[931]: Running as unit run-1580.service.
>
>
> I don't see any udev rule related to Ceph on my servers...
>
>
> On 12/03/2015 07:56 PM, Adrien Gillard wrote:
>> I think OSDs are automatically mounted at
flock is at /usr/bin/flock
>
> My problem is that the "ceph" service is doing everything, and all the other
> systemd services do not run...
>
> it seems there is a problem switching from the old init.d services to the new
> systemd..
>
> On 12/03/2015 08:31 PM, Timofey Titovets
I have CentOS 6.3 with kernel 3.8.4-1.el6.elrepo.x86_64 from elrepo.org.
Cephfs is mounted with the kernel module.
[root@localhost t1]# wget
http://joomlacode.org/gf/download/frsrelease/17965/78413/Joomla_3.0.3-Stable-Full_Package.tar.gz
[root@localhost t1]# time tar -zxf Joomla_3.0.3-Stable-Full_Package.t
Can anybody repeat this test in their own production cluster?
2013/4/4 Timofey Koolin
> I have centos 6.3 with kernel 3.8.4-1.el6.elrepo.x86_64 from elrepo.org.
> Cephfs mount with kernel module.
>
> [root@localhost t1]# wget
> http://joomlacode.org/gf/download/frsrelease/17965/784
I have a test cluster with 3 nodes:
1 - osd.0 mon.a mds.a
2 - osd.1
3 - empty
I create osd.2:
node1# ceph osd create
node3# mkdir /var/lib/ceph/osd/ceph-2
node3# mkfs.xfs /dev/sdb
node3# mount /dev/sdb /var/lib/ceph/osd/ceph-2
node3# ceph-osd -i 2 --mkfs --mkkey
copy keyring from node 3 to node 1 in ro
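A hedged sketch of the usual steps that follow (the weight, host name, caps and keyring path are illustrative, not from the original message):

node1# ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i ceph-2.keyring   # keyring copied from node 3
node1# ceph osd crush add osd.2 1.0 root=default host=node3
node3# ceph-osd -i 2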
Does the snapshot time depend on the image size?
Does a snapshot capture a consistent state of the image at the moment the snapshot starts?
For example, if I have a file system and don't stop I/O before starting the snapshot -
is that worse than turning off the power during I/O?
--
Blog: www.rekby.ru
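Without quiescing, an rbd snapshot is crash-consistent, so it is comparable to a power cut during I/O; freezing the filesystem first gives a clean state. A hedged sketch (the mount point and image names are placeholders):

fsfreeze -f /mnt/rbd-fs
rbd snap create mypool/myimage@before-change
fsfreeze -u /mnt/rbd-fs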
ceph -v
ceph version 0.61.3 (92b1e398576d55df8e5888dd1a9545ed3fd99532)
mount.ceph l6:/ /ceph -o name=admin,secret=...
mount error 12 = Cannot allocate memory
I have a cluster with 1 mon, 2 osd, and an ipv6 network.
rbd work fine.
--
Blog: www.rekby.ru
Is there a way to exclusively map an rbd?
For example - I map it on host A, then I try to map it on host B. I want the
map on host B to fail while the image is mapped on host A.
I read about the lock command; I want to atomically lock and mount the rbd on one host
and automatically unlock it when host A fails.
--
Blog: www.rekby.ru
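The advisory lock commands can express this, but they do not by themselves block rbd map; the mapping script has to take the lock first and only map on success. A hedged sketch (image, lock id and locker are placeholders):

rbd lock add mypool/myimage ha-lock                   # fails if another exclusive lock is already held
rbd lock list mypool/myimage
rbd lock remove mypool/myimage ha-lock client.4321    # fence a failed host; locker id comes from 'lock list'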
im4ag/Entropy_Calculation
So, if that looks interesting, please let me know,
Thanks.
--
Have a nice day,
Timofey.
messages in dmesg from the client/server.
A reboot fixed it every time. But it's very bad.
And this is without any HA.
Just mount cephfs and export it, and run some fio load.
--
Have a nice day,
Timofey.
JFYI: Today we got a totally stable setup of Ceph + ESXi "without hacks", and
it passes stress tests.
1. Don't try to pass RBD directly to LIO, that setup is unstable
2. Instead, use Qemu + KVM (I use proxmox to create the VM)
3. Attach the RBD to the VM as a VIRTIO-SCSI disk (must be exported by target_co
s modified, does the OSD read 4 MB of data and replicate it,
instead of only the changes?
If yes, can striping help with that?
Thanks for any answer.
--
Have a nice day,
Timofey.
' configuration?
> For example, something similar to 'iSCSI multipath'?
>
>
> I'm reading switch manuals and ceph documentations, but with no luck.
>
>
> Thanks.
Just use balance-alb; this will do the trick without stacked switches.
--
Have a nice day,
Timofey
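A hedged sketch of such a bond with iproute2 (interface names are placeholders; a distribution's own network configuration files would normally carry this instead):

ip link add bond0 type bond mode balance-alb miimon 100
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up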