Hi! I'm playing with a test setup of Ceph Jewel with Bluestore and CephFS
over an erasure-coded pool with a replicated pool as a cache tier. After
writing a number of small files to CephFS I begin seeing the following
error messages during the migration of data from the cache to the EC pool:
2016-09-
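For context, a minimal sketch of the kind of setup described above (an EC data pool fronted by a replicated writeback cache tier); pool names and PG counts are assumptions, not the poster's actual configuration:

# create the EC data pool and a replicated pool to act as its cache tier
ceph osd pool create ec_data 128 128 erasure
ceph osd pool create ec_cache 128 128
# attach the replicated pool as a writeback cache in front of the EC pool
ceph osd tier add ec_data ec_cache
ceph osd tier cache-mode ec_cache writeback
ceph osd tier set-overlay ec_data ec_cache
# the tiering agent needs a hit set to decide what to flush and evict
ceph osd pool set ec_cache hit_set_type bloom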
Hi!
I wanted to report a bug in ceph, but I found out that visiting
http://tracker.ceph.com/projects/ceph/issues/new gives me only "403 You
are not authorized to access this page."
What does this mean? Why is posting issues forbidden?
--
With best regards,
Vitaliy Filippov
Thanks for the reply! Ok I understand :-)
But the page still shows 403 as of now...
On August 5, 2018 at 6:42:33 GMT+03:00, Gregory Farnum wrote:
>On Sun, Aug 5, 2018 at 1:25 AM Виталий Филиппов
>wrote:
>
>> Hi!
>>
>> I wanted to report a bug in ceph, but I foun
Hi,
I've recently tried to set up a user for CephFS running on a pair of
replicated+erasure pools, but after I ran
ceph fs authorize ecfs client.samba / rw
the "client.samba" user could only see listings, but couldn't read or
write any files. I've tried to look in the logs and to raise the debu
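In case the cause is simply missing OSD caps, a hedged sketch of how one might inspect and widen them by hand; the pool names below are assumptions, not taken from the report:

# show the caps that "ceph fs authorize" actually generated
ceph auth get client.samba
# manually grant rw on the pools backing the filesystem (pool names assumed)
ceph auth caps client.samba mon 'allow r' mds 'allow rw' osd 'allow rw pool=ecfs_data, allow rw pool=ecfs_cache'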
By the way, does it happen with all installations or only under some
conditions?
CephFS will be offline and show up as "damaged" in ceph -s
The fix is to downgrade to 13.2.1 and issue a "ceph fs repaired "
command.
Paul
--
With best regards,
Vitaliy Filippov
I mean, does every upgraded installation hit this bug, or do some upgrade
without any problem?
The problem occurs after upgrade, fresh 13.2.2 installs are not affected.
--
With best regards,
Vitaliy Filippov
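A hedged aside on the damaged-rank workflow mentioned above: the argument of the repair command is elided in the quote, and the exact syntax below is an assumption (check the 13.2.2 release notes before relying on it):

# confirm which rank is damaged
ceph health detail
ceph fs status
# after downgrading the MDS to 13.2.1, mark the rank repaired (syntax assumed)
ceph mds repaired <fs_name>:0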
Hi
After I recreated one OSD and increased the pg count of my erasure-coded (2+1) pool
(which was way too low, only 100 for 9 OSDs), the cluster started to consume
additional disk space.
First I thought that was caused by the moved PGs using additional space during
unfinished backfills. I pinned most of
NAME          ID   USED      %USED   MAX AVAIL   OBJECTS
(truncated)                          1.0 TiB     7611672
rpool_hdd     15   9.2 MiB   0       515 GiB     92
fs_meta       44   20 KiB    0       515 GiB     23
fs_data       45   0 B       0       1.0 TiB     0
How can I fix this?
--
With best regards,
Vitaliy
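A hedged note on narrowing down this kind of space growth; these are generic inspection commands, not a specific fix for the cluster above:

# per-pool stored vs. raw usage
ceph df detail
# per-OSD utilisation and how evenly PGs are spread
ceph osd df tree
# any PGs still backfilling or remapped?
ceph pg stat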
This may be the explanation:
https://serverfault.com/questions/857271/better-performance-when-hdd-write-cache-is-disabled-hgst-ultrastar-7k6000-and
Other manufacturers may have started to do the same, I suppose.
--
With best regards,
Vitaliy Filippov
Ok... That's better than the previous thread with the file download, where the original
poster suffered from a normal filesystem that only journals metadata... Thanks for the link,
it would be interesting to repeat similar tests. Although I suspect it
shouldn't be that bad... at least not all desktop SSDs are that broke
Is RDMA officially supported? I'm asking because I recently tried to use DPDK
and it seems to be broken... i.e. the code is there, but it doesn't compile until I
fix the CMake scripts, and after fixing the build the OSDs just segfault and die
after processing something like 40-50 incoming packets.
May
rados bench is garbage: it creates and benches a very small number of objects.
If you want to benchmark RBD, better test it with fio and ioengine=rbd
On February 7, 2019 at 15:16:11 GMT+03:00, Ryan wrote:
>I just ran your test on a cluster with 5 hosts 2x Intel 6130, 12x 860
>Evo
>2TB SSD per host (6 per SAS3008), 2x
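To make the suggestion concrete, a hedged fio example using the rbd ioengine for a sequential-throughput run; the pool and image names are assumptions and the image must already exist:

# sequential write bandwidth through librbd (pool/image names assumed)
fio -ioengine=rbd -direct=1 -name=test -bs=4M -iodepth=16 -rw=write -runtime=60 -pool=rbd -rbdname=testimg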
"Advanced power loss protection" is in fact a performance feature, not a safety
one.
On February 28, 2019 at 13:03:51 GMT+03:00, Uwe Sauter wrote:
>Hi all,
>
>thanks for your insights.
>
>Eneko,
>
>> We tried to use a Samsung 840 Pro SSD as OSD some time ago and it was
>a no-go; it wasn't that perfo
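A hedged illustration of why power-loss protection shows up as performance: a capacitor-backed drive can safely acknowledge synced writes from its cache, which a single-threaded fsync test exposes. The device path is a placeholder and the test destroys data on the device:

# single-threaded 4k random writes with an fsync after every write,
# roughly the pattern a Ceph journal/WAL produces
fio -ioengine=libaio -direct=1 -fsync=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60 -filename=/dev/sdX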
Is that a question for me or for Victor? :-)
I did test my drives; Intel NVMes are capable of something like 95100
single-thread iops.
On March 10, 2019 at 1:31:15 GMT+03:00, Martin Verges wrote:
>Hello,
>
>did you test the performance of your individual drives?
>
>Here is a small snippet:
>-
Hi Felix,
Better use fio. Like:
fio -ioengine=rbd -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128 -rw=randwrite -pool=rpool_hdd -runtime=60 -rbdname=testimg
(for peak parallel random iops)
Or the same with -iodepth=1 for the latency test. Here you usually get
Or the same with -ioengine=
Bluestore's deferred write queue doesn't act like Filestore's journal because
(a) it's very small (64 requests) and (b) it doesn't have a background flush thread.
Bluestore basically refuses to do writes faster than the HDD can do them
_on_average_. With Filestore you can have 1000-2000 write iops unt
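For reference, a hedged sketch of the settings behind the behaviour described above; the option names exist in Bluestore, but treat the values shown as assumed defaults for the release being discussed:

# ceph.conf, [osd] section: deferred-write behaviour on HDD OSDs
# writes smaller than this go through the deferred (journaled) path
bluestore_prefer_deferred_size_hdd = 32768
# how many deferred writes are batched before being flushed to the data device
bluestore_deferred_batch_ops_hdd = 64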
Cache=writeback is perfectly safe: it's flushed when the guest calls fsync, so
journaling filesystems and databases don't lose data that has been committed to the
journal.
On July 25, 2019 at 2:28:26 GMT+03:00, Stuart Longland wrote:
>On 25/7/19 9:01 am, vita...@yourcmc.ru wrote:
>>> 60 millibits per seco
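A hedged example of where that setting lives for a QEMU guest backed by RBD; the pool and image names are placeholders, and libvirt exposes the same thing as cache='writeback' on the disk driver element:

# legacy qemu -drive syntax with an RBD image and writeback caching
qemu-system-x86_64 ... -drive file=rbd:rpool_hdd/vm-disk,format=raw,if=virtio,cache=writeback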
Hi again,
I reread your initial email - do you also run a nanoceph on some SBCs each
having one 2.5" 5400rpm HDD plugged into it? What SBCs do you use? :-)
--
With best regards,
Vitaliy Filippov
AFAIK no. What's the point of running a single-host CephFS cluster?
On August 4, 2019 at 13:27:00 GMT+03:00, Eitan Mosenkis wrote:
>I'm running a single-host Ceph cluster for CephFS and I'd like to keep
>backups in Amazon S3 for disaster recovery. Is there a simple way to
>extract a CephFS snapshot a
30 GB already includes the WAL, see
http://yourcmc.ru/wiki/Ceph_performance#About_block.db_sizing
On August 15, 2019 at 1:15:58 GMT+03:00, Anthony D'Atri wrote:
>Good points in both posts, but I think there’s still some unclarity.
>
>Absolutely let’s talk about DB and WAL together. By “bluestore goes
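A hedged example of what that translates to when creating an OSD; device names are placeholders. When only --block.db is given, the WAL lives inside the DB partition, which is why the 30 GB figure already includes it:

# data on the HDD, a ~30 GB DB (which also holds the WAL) on an NVMe partition
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1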
https://yourcmc.ru/wiki/Ceph_performance
https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc
On December 19, 2019 at 0:41:02 GMT+03:00, Sinan Polat wrote:
>Hi,
>
>I am aware that
>https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-
consumer grade SSDs to the point
>where we had to replace them all. You have to be very careful and know
>exactly what you are buying.
>>
>>
>> Mark
>>
>>
>>> On 12/19/19 12:04 PM, jes...@krogh.cc wrote:
>>> I dont think “usually” is good eno
...disable signatures and rbd cache. I didn't mention it in the email so as not to
repeat myself, but I have it in the article :-)
--
With best regards,
Vitaliy Filippov
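For completeness, a hedged sketch of the client-side settings referred to above; this is a benchmarking-only configuration, not a general recommendation:

# ceph.conf, benchmark-only: disable cephx message signing and the librbd cache
[global]
cephx sign messages = false
cephx require signatures = false
[client]
rbd cache = false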
Hi! Thanks.
The parameter gets reset when you reconnect the SSD, so in fact you must not
power-cycle it after changing the parameter :-)
Ok, this case seems lucky; a ~2x change isn't a lot. Can you tell me the exact model
and capacity of this Micron, and what controller was used in this test? I
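Assuming the parameter in question is the drive's volatile write cache (my reading of the thread, not stated explicitly above), it is typically toggled like this; the device path is a placeholder:

# SATA: disable the volatile write cache
hdparm -W 0 /dev/sdX
# SAS/SCSI: clear the WCE bit on the caching mode page
sdparm --clear WCE /dev/sdX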