Hi, I've written some heuristic code for the btrfs compression subsystem;
the userspace version is simple and fast enough (~4 GiB/s on my notebook).
The heuristic can detect well vs. poorly compressible data, to decrease CPU load
by avoiding compression of poorly compressible data.
https://github.com/Nefelim4ag/Entropy_Calcul
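For anyone curious, the idea is roughly the following. This is only an
illustrative sketch, not the code from the repository above; the 64 KiB sample
size and the 7.5 bits/byte threshold are arbitrary example values.

#!/bin/sh
# Estimate the Shannon entropy of the first 64 KiB of a file; data close to
# 8 bits/byte is almost certainly poorly compressible and can be skipped.
f="$1"
head -c 65536 "$f" | od -An -v -tu1 | tr -s ' ' '\n' | grep -v '^$' |
awk '{ count[$1]++; n++ }
     END {
       for (b in count) { p = count[b] / n; H -= p * log(p) / log(2) }
       printf "%.2f bits/byte -> %s\n", H, (H > 7.5 ? "skip" : "compress")
     }'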
2017-05-29 11:37 GMT+03:00 Marco Gaiarin :
>
> I've set up a little Ceph cluster (3 hosts, 12 OSDs), all connected to a
> single switch, using 2x1 Gbit/s LACP links.
>
> Supposing I have two identical switches, is there some way to set up a
> ''redundant'' configuration?
> For example, something similar
Hi, I found that the RBD striping documentation is not detailed enough.
Can someone explain how RBD stripes its data over multiple objects, and
why is it better to use striping instead of a small RBD object size?
Also, if RBD uses an object size of 4 MB by default, does that mean that every
time an object is modified the OSD
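To make the question concrete, here is roughly how the striping parameters are
given at image creation time. This is just a sketch: the pool name and sizes
are placeholders, and older rbd versions use --order instead of --object-size.

# 10 GiB format-2 image, striped in 64 KiB units across 8 objects at a time
rbd create mypool/myimage --size 10240 --image-format 2 \
    --object-size 4M --stripe-unit 65536 --stripe-count 8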
JFYI: today we got a totally stable Ceph + ESXi setup "without hacks", and it
passes stress tests.
1. Don't try to pass an RBD directly to LIO; that setup is unstable.
2. Instead, use QEMU + KVM (I use Proxmox to create the VM); a rough example follows after this list.
3. Attach the RBD to the VM as a VIRTIO-SCSI disk (must be exported by target_co
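A rough sketch of the QEMU side, outside of Proxmox; the pool/image name,
memory size and IDs are placeholders:

# attach an RBD image to a guest through a virtio-scsi controller
qemu-system-x86_64 -enable-kvm -m 4096 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=rbd:rbd/vm-100-disk-1:id=admin,if=none,format=raw,cache=writeback,id=drive0 \
  -device scsi-hd,drive=drive0,bus=scsi0.0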
Hi, has anyone tried this stack? Maybe someone can provide some
feedback about it?
Thanks.
P.S.
AFAIK Ceph RBD + LIO currently lacks iSCSI HA support, so I'm thinking about NFS.
UPD1:
I did some tests and got strange behaviour:
every few minutes, I/O from the NFS client to the NFS proxy just stops, with no
message
flock is at /usr/bin/flock
>
> My problem is that the "ceph" service is doing everything, and all the other
> systemd services do not run...
>
> it seems there is a problem switching from the old init.d services to the new
> systemd ones...
>
> On 12/03/2015 08:31 PM, Timofey Titovets
Lol, it's open source, guys:
https://github.com/ceph/ceph/tree/master/systemd
ceph-disk@
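If it helps, you can check which of those units actually got installed and
enabled on your box with something like this (the path may be
/usr/lib/systemd/system on some distros):

systemctl list-unit-files 'ceph*'
ls /lib/systemd/system/ceph*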
2015-12-03 21:59 GMT+03:00 Florent B :
> "ceph" service does mount :
>
> systemctl status ceph -l
> ● ceph.service - LSB: Start Ceph distributed file system daemons at boot
> time
>Loaded: loaded (/etc/init.d
On 3 Dec 2015 9:35 p.m., "Robert LeBlanc" wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Reweighting the OSD to 0.0 or setting the osd out (but not terminating
> the process) should allow it to backfill the PGs to a new OSD. I would
> try the reweight first (and in a test environ
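For reference, a minimal sketch of the two approaches Robert describes
(osd.12 is just an example id):

# option 1: drop the OSD's CRUSH weight to 0 so data drains off it
ceph osd crush reweight osd.12 0.0
# (ceph osd reweight 12 0 is the other, temporary form of reweighting)
# option 2: mark it out while leaving the daemon running
ceph osd out 12
# watch the backfill progress
ceph -w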
On 3 Dec 2015 8:56 p.m., "Florent B" wrote:
>
> By the way, when the system boots, the "ceph" service starts everything
> fine. But the "ceph-osd@" service is disabled => how do I restart an OSD?!
>
AFAIK, Ceph now has two services:
1. One that mounts the device
2. One that starts the OSD
Also, service can be disabled, but this not m
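In practice, restarting a single OSD on a systemd box looks roughly like this
(osd id 2 is just an example):

systemctl status ceph-osd@2
systemctl restart ceph-osd@2
# the mount side is normally handled by the matching ceph-disk@ unit / udev activation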
Hi list,
I created a small tool for maintenance/optimization of btrfs-based OSD stores:
https://github.com/Nefelim4ag/ceph-btrfs-butler
Maybe it can be useful for somebody.
For now the script can find rarely accessed objects on disk and, based on
this information, can (a rough sketch is below):
1. Defrag objects
2. Compress objects
3. De
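A rough sketch of the kind of thing it does (not the actual script; the OSD
path and the 30-day threshold are illustrative):

# defragment + recompress (zlib) objects not read in the last 30 days
find /var/lib/ceph/osd/ceph-0/current -type f -atime +30 -print0 |
  xargs -0 -r btrfs filesystem defragment -czlib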
Big thanks, Ilya,
for the explanation.
2015-11-30 22:15 GMT+03:00 Ilya Dryomov :
> On Mon, Nov 30, 2015 at 7:47 PM, Timofey Titovets
> wrote:
>>
>> On 30 Nov 2015 21:19, "Ilya Dryomov" wrote:
>>>
>>> On Mon, Nov 30, 2015 at 7:17 PM, Timofey Titovets
On 30 Nov 2015 21:19, "Ilya Dryomov" wrote:
>
> On Mon, Nov 30, 2015 at 7:17 PM, Timofey Titovets
> wrote:
> > Hi list,
> > Short:
> > i just want ask, why i can't do:
> > echo 129 > /sys/class/block/rbdX/queue/nr_requests
> >
> >
Hi list,
Short:
I just want to ask why I can't do:
echo 129 > /sys/class/block/rbdX/queue/nr_requests
i.e. why can't I set a value greater than 128?
Why such a restriction?
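To make the Short part concrete (rbd0 is just an example device here, and the
exact error text may differ on other kernels):

cat /sys/class/block/rbd0/queue/nr_requests      # prints 128 here
echo 256 > /sys/class/block/rbd0/queue/nr_requests
# -> the write is rejected (on my kernel: "write error: Invalid argument")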
Long:
Usage example:
I have slow HDD-based Ceph storage and I want to export it via an iSCSI proxy
machine to an ESXi cluster.
If I have
Hi list,
AFAIK, fiemap is disabled by default because it caused RBD corruption.
Has anyone already tested it with recent kernels?
Thanks
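For reference, this is the knob I mean; a sketch of how to check and enable it
(osd.0 is just an example, and the setting needs an OSD restart to take effect):

# check the current value on a running OSD via the admin socket
ceph daemon osd.0 config get filestore_fiemap
# to test it, set it in ceph.conf and restart the OSD
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
filestore fiemap = true
EOF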
On a small cluster I got great sequential performance by using btrfs
on the OSDs, a journal file (max sync interval ~180 s) and the option
filestore journal parallel = true
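A minimal sketch of the corresponding ceph.conf settings (added to the [osd]
section; restart the OSDs afterwards):

cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
filestore journal parallel = true
filestore max sync interval = 180
EOF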
2015-11-11 10:12 GMT+03:00 Ben Town :
> Hi Guys,
>
>
>
> I’m in the process of configuring a ceph cluster and am getting some less
Alex, do you use ESXi?
If yes, do you use the iSCSI Software Adapter?
If yes, do you use active/passive, fixed, or Round Robin MPIO?
Do you tune anything on the initiator side?
If possible, can you give more details? Please.
2015-11-09 17:41 GMT+03:00 Timofey Titovets :
> Great thanks, Alex, you give me a h
d be just the superior
> switch response on a higher end switch.
>
> Using blk_mq scheduler, it's been reported to improve performance on random
> IO.
>
> Good luck!
>
> --
> Alex Gorbachev
> Storcium
>
> On Sun, Nov 8, 2015 at 5:07 PM, Timofey Titovets
> wr
5-10s.
>
> Sorry I couldn't be of more help.
>
> Nick
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Timofey Titovets
>> Sent: 07 November 2015 11:44
>> To: ceph-users@lists.ceph.com
>
Hi list,
I'm searching for advice from somebody who uses a legacy client like ESXi with Ceph.
I'm trying to build high-performance, fault-tolerant storage with Ceph 0.94.
In production I have 50+ TB of VMs (~800 VMs) and
8 NFS servers, each with:
2xIntel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
12xSeagate ST2000NM0023
1xLS
Which backend do you use in tgt for RBD?
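For context, this is the kind of setup I mean; a rough tgtadm sketch, assuming
your tgt build was compiled with the rbd backing store, with placeholder names:

tgtadm --lld iscsi --mode target --op new --tid 1 -T iqn.2015-06.com.example:rbd
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
       --bstype rbd --backing-store rbd/test-image
tgtadm --lld iscsi --mode target --op bind --tid 1 -I ALL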
2015-06-23 5:44 GMT+03:00 maoqi1982 :
> Hi list:
> my cluster includes 4 servers, 12 OSDs (4 OSDs/server), 1 mon (1 server), and 1 Gbps
> links; the Ceph version is 0.72, the cluster status is OK, and the client is VMware
> vCenter.
> We use rbd as the tgt backend and expose a 2 TB LUN via iscs
CephFS is just a filesystem, like ext4, btrfs, etc.
But you can export it via an NFS or Samba share.
P.S. I did test the kernel NFS implementation and NFS-Ganesha; both had
stability problems in my tests (strange deadlocks).
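For completeness, the kernel-NFS variant looked roughly like this (the monitor
address, mount point and export network are placeholders):

# mount CephFS with the kernel client, then export it over kernel NFS
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
echo '/mnt/cephfs 192.168.0.0/24(rw,no_root_squash,fsid=101)' >> /etc/exports
exportfs -ra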
2015-06-22 11:16 GMT+03:00 Joakim Hansson :
> Hi list!
> I'm doing an internship at a
Hey! I caught it again. It's a kernel bug: the kernel crashes if I try to
map an RBD device with a map like the one above!
Hooray!
2015-05-11 12:11 GMT+03:00 Timofey Titovets :
> FYI and history
> Rule:
> # rules
> rule replicated_ruleset {
> ruleset 0
> type replicated
> min_size 1
>
FYI and history
Rule:
# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type room
        step choose firstn 0 type rack
        step choose firstn 0 type host
        step chooseleaf firstn 0 type osd
        step emit
}
And after reset
Georgios, oh, sorry for my poor English _-_, maybe I expressed
what I want poorly =]
I know how to write a simple CRUSH rule and how to use it; I want several
things:
1. To understand why, after injecting the bad map, my test node went offline.
This is unexpected.
2. Maybe somebody can explain what and wh
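On point 1: what I probably should have done is dry-run the map before
injecting it, roughly like this (the rule id and replica count are examples):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# ... edit crush.txt ...
crushtool -c crush.txt -o crush-new.bin
crushtool -i crush-new.bin --test --show-statistics --rule 0 --num-rep 2
ceph osd setcrushmap -i crush-new.bin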
Hi list,
I have been experimenting with CRUSH maps, and I tried to get RAID1-like
behaviour (if the cluster has only one working OSD node, duplicate data across its
local disks, to avoid data loss in case of a local disk failure and to let
clients keep working, because this should not be a degraded state)
(
in the best case, I want dyna
2015-05-06 20:51 GMT+03:00 Lionel Bouton :
> On 05/05/15 02:24, Lionel Bouton wrote:
>> On 05/04/15 01:34, Sage Weil wrote:
>>> On Mon, 4 May 2015, Lionel Bouton wrote:
Hi,
we began testing one Btrfs OSD volume last week and for this first test
we disabled autodefrag and began t
Hi list,
Excuse me, what I'm saying is a bit off topic.
@Lionel, since you use btrfs, have you already tried btrfs compression for the OSDs?
If yes, can you share your experience?
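What I have in mind is just the mount-time option; the device and mount point
below are placeholders:

# mount the OSD data partition with transparent lzo compression
mount -o noatime,compress=lzo /dev/sdb1 /var/lib/ceph/osd/ceph-0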
2015-05-05 3:24 GMT+03:00 Lionel Bouton :
> On 05/04/15 01:34, Sage Weil wrote:
>> On Mon, 4 May 2015, Lionel Bouton wrote: