That is the expected behavior. RBD is emulating a real device; you wouldn't
expect good things to happen if you were to plug the same drive into two
different machines at once (perhaps with some soldering). There is no built-in
mechanism for two machines to access the same block device concurrently.
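For anyone who does need to coordinate access between hosts, rbd offers advisory locks that cooperating clients can check before using an image. A minimal sketch (pool, image, and lock names here are made up):

    # take an advisory lock before touching the image; a second host doing
    # the same will see the existing lock and can back off
    rbd lock add rbd/myimage host-a
    # see who currently holds locks on the image
    rbd lock list rbd/myimage
    # release it when finished (the locker id comes from the list output)
    rbd lock remove rbd/myimage host-a client.4125

These locks are purely advisory; nothing stops a client that ignores them, so they only help when every user of the image plays along.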
Wyatt,
Please post your ceph.conf.
- mike
On 5/1/2013 12:06 PM, Wyatt Gorman wrote:
Hi everyone,
I'm setting up a test ceph cluster and am having trouble getting it
running (great for testing, huh?). I went through the installation on
Debian squeeze, had to modify the mkcephfs script
y more) OSDs than the highest replication level out of your pools.
Mike
On 5/1/2013 12:23 PM, Wyatt Gorman wrote:
Here is my ceph.conf. I just figured out that the second host = isn't
necessary, though it is like that on the 5-minute quick start guide...
(Perhaps I'll submit my couple of fixes
FWIW, here is what I have for my ceph cluster:
4 x HP DL 180 G6
12 GB RAM
P411 with 512 MB battery-backed cache
10GigE
4 HP MSA 60s with 12 x 1TB 7.2k SAS and SATA drives (bought at different times,
so there is a mix)
2 HP D2600 with 12 x 3TB 7.2k SAS drives
I'm currently running 79 qemu/kvm VMs
You've learned one of the three computer science facts you need to know about
distributed systems, and I'm glad I could pass something on:
1. Consistent, Available, Partition-tolerant - pick any two
2. To completely guard against k failures, where you don't know which one failed
just by looking, you need
S isn't deleting the objects for
some reason, possibly because there's still an open filehandle?
My question is, how can I get a report from the MDS on which objects
aren't visible from the filesystem / why it hasn't deleted them yet /
what open filehandles there are, etc.
Cheers
Mike
t why it's keeping the
objects alive, but I'm not sure which messages I should be looking at.
Mike
On 9 May 2013 19:31, Noah Watkins wrote:
> Mike,
>
> I'm guessing that HBase is creating and deleting its blocks, but that the
> deletes are delayed:
>
> http://c
though: http://tracker.ceph.com/issues/3601
Looks like that may be the same issue.
Mike
On 10 May 2013 08:31, Mike Bryant wrote:
> Mhm, if that were the case I would expect it to be deleting things over time.
> On one occasion, for example, the data pool reached 160GB after 3 or 4 days,
> with a repor
> There is already debugging present in the Java bindings. You can turn on
> client logging, and add 'debug javaclient = 20' to get client debug logs.
Ah, I hadn't noticed that, cheers.
> How many clients does HBase setup?
There's one connection to cephfs from the master, and one from each of
t
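For reference, the client-side logging mentioned above lives in the [client] section of ceph.conf on the nodes running the Hadoop/HBase bindings; a minimal sketch (the log path is just an example):

    [client]
        # write client logs somewhere the process using libcephfs can create files
        log file = /var/log/ceph/client.$name.$pid.log
        debug client = 20
        debug javaclient = 20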
cmdline flag.
dmick has an issue started at:
http://tracker.ceph.com/issues/5024
Thanks,
Mike
(Sorry for sending this twice... Forgot to reply to the list)
Is rbd caching safe to enable when you may need to do a live migration of
the guest later on? It was my understanding that it wasn't, and that
libvirt prevented you from doing the migration if it knew about the caching
setting.
If it i
Hmm, try searching the libvirt git for Josh as an author; you should see the
commit from Josh Durgin about whitelisting rbd migration.
On May 11, 2013, at 10:53 AM, w sun wrote:
> The reference Mike provided is not valid for me. Does anyone else have the same
> problem? --weiguo
>
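For context, the cache being discussed is the librbd client-side cache, configured on the hypervisor; a sketch of the relevant ceph.conf options, not a recommendation either way on the migration question:

    [client]
        # enable the librbd writeback cache
        rbd cache = true
        # stay in writethrough mode until the guest issues its first flush
        rbd cache writethrough until flush = true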
ransaction dump of the growth yesterday. Sage
looked, but the files are so large it is tough to get useful info.
http://tracker.ceph.com/issues/4895
I believe this issue has existed since 0.48.
- Mike
On 5/21/2013 8:16 AM, Sylvain Munaut wrote:
Hi,
I've just added some monitoring to the
.
Thanks for the correction Sylvain.
- Mike
On 5/21/2013 8:57 AM, Mike Dawson wrote:
Sylvain,
I can confirm I see a similar traffic pattern.
Any time I have lots of writes going to my cluster (like heavy writes
from RBD or remapping/backfilling after losing an OSD), I see all sorts
of monitor
Does anybody know exactly what ceph repair does? Could you list out briefly
the steps it takes? I unfortunately need to use it for an inconsistent pg.
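Not an authoritative description, but a sketch of the surrounding workflow with commands that exist (the pg id is an example, and the caveat reflects common advice rather than documented behavior):

    # find which pg is inconsistent
    ceph health detail | grep inconsistent
    # re-run a deep scrub to confirm the inconsistency is still present
    ceph pg deep-scrub 2.37
    # trigger the repair
    ceph pg repair 2.37
    # caveat: repair has historically favored the primary's copy of an object,
    # so check that the primary OSD holds the good data before repairing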
Jeff,
Perhaps these?
http://tracker.ceph.com/issues/4834
http://ceph.com/packages/qemu-kvm/
- Mike
On 6/4/2013 8:16 AM, Jeff Bachtel wrote:
Hijacking (because it's related): a couple of weeks ago on IRC it was
indicated that a repo with these (or updated) qemu builds for CentOS should
be coming
Behind a registration form, but iirc, this is likely what you are
looking for:
http://www.inktank.com/resource/dreamcompute-architecture-blueprint/
- Mike
On 5/31/2013 3:26 AM, Gandalf Corvotempesta wrote:
In the reference architecture PDF, downloadable from your website, there was
some
I wonder if it has something to do with them renaming /usr/bin/kvm; in qemu 1.4
as packaged with Ubuntu 13.04 it has been replaced with the following:
#! /bin/sh
echo "W: kvm binary is deprecated, please use qemu-system-x86_64 instead" >&2
exec qemu-system-x86_64 -machine accel=kvm:tcg "$@"
On Ju
You need to specify the ceph implementation in core-site.xml:
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
Mike
On 5 June 2013 16:19, Ilja Maslov wrote:
> Hmm, no joy so far :(
>
> Still getting:
>
> hduser@dfs01:~$ hadoop fs -ls
> Bad connect
w 0.61, so it's listed as being supported in the changelog)
Cheers
Mike
--
Mike Bryant | Systems Administrator | Ocado Technology
mike.bry...@ocado.com | 01707 382148 | www.ocadotechnology.com
No, I'm using the same user.
I have in fact tried it as close as possible to the actual creation,
to be sure I'm using the same credentials.
i.e. using boto, bucket = boto.create_bucket(...), followed by
bucket.set_cors().
Mike
On 6 June 2013 15:51, Yehuda Sadeh wrote:
> Are you
I did, and I do. (Well, having just tried it again under debug mode)
http://pastebin.com/sRHWR6Rh
On 6 June 2013 16:15, Yehuda Sadeh wrote:
> I guess you run set_cors() with a config object? Do you have the rgw
> logs for that operation?
>
>
> On Thu, Jun 6, 2013 at 8:02 AM, Mik
Yes, that change lets me get and set cors policies as I would expect.
Thanks,
Mike
On 6 June 2013 17:45, Yehuda Sadeh wrote:
> Looking at it, it fails at a much more basic level than I expected. My
> guess off the cuff is that the 'cors' sub-resource needs to be part of
> the c
I think the bug Sage is talking about was fixed in 3.8.0
On Jun 18, 2013, at 11:38 AM, Guido Winkelmann
wrote:
> Am Dienstag, 18. Juni 2013, 07:58:50 schrieb Sage Weil:
>> On Tue, 18 Jun 2013, Guido Winkelmann wrote:
>>> Am Donnerstag, 13. Juni 2013, 01:58:08 schrieb Josh Durgin:
Which fil
Hi,
is there any way to create snapshots of individual buckets, that can
be restored piecemeal?
e.g. if someone deletes objects by mistake?
Cheers
Mike
--
Mike Bryant | Systems Administrator | Ocado Technology
mike.bry...@ocado.com | 01707 382148 | www.ocadotechnology.com
Quorum means you need at least 51% participating, be it people following
parliamentary procedures or mons in ceph. With one dead and two up you have
66% participating, or enough to have a quorum. An even number doesn't get you
any additional safety, but does give you one more thing that can fail
ados/operations/add-or-rm-mons/#removing-monitors
then
http://ceph.com/docs/next/rados/operations/add-or-rm-mons/#adding-monitors
- Mike
On 6/25/2013 10:34 PM, Darryl Bond wrote:
Upgrading a cluster from 6.1.3 to 6.1.4 with 3 monitors. Cluster had
been successfully upgraded from bobtail to
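Condensed from the add-or-rm-mons pages linked above, the gist for a monitor named 'c' (paths are examples; follow the docs for the full procedure):

    # drop the broken monitor from the cluster map
    ceph mon remove c
    # rebuild it: grab the current monmap and the mon. keyring
    ceph mon getmap -o /tmp/monmap
    ceph auth get mon. -o /tmp/mon.keyring
    # recreate the mon data directory and start the daemon again
    ceph-mon -i c --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    service ceph start mon.c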
I've typically moved it off to a non-conflicting path in lieu of
deleting it outright, but either way should work. IIRC, I used something
like:
sudo mv /var/lib/ceph/mon/ceph-c /var/lib/ceph/mon/ceph-c-bak && sudo
mkdir /var/lib/ceph/mon/ceph-c
- Mike
On 6/25/2013 11:08 PM
The ceph kernel module is only for mapping rbd block devices on bare metal
(technically you could do it in a VM, but there is no good reason to do so).
QEMU/KVM has its own rbd implementation that tends to lead the kernel
implementation and should be used with VMs.
The rbd module is always us
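To make the distinction concrete, a sketch of the two paths (pool and image names are made up):

    # kernel client (krbd): maps the image as a block device on the host
    rbd map rbd/bare-metal-disk        # shows up as /dev/rbd0
    mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt
    # qemu/kvm path: the guest disk goes through librbd directly, no mapping needed
    qemu-img create -f rbd rbd:rbd/vm-disk 10G
    qemu-system-x86_64 -drive file=rbd:rbd/vm-disk,cache=writeback ...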
Having run ceph clusters in production for the past six years and upgrading
from every stable release starting with argonaut to the next, I can honestly
say being careful about order of operations has not been a problem.
> On Jul 14, 2017, at 10:27 AM, Lars Marowsky-Bree wrote:
>
> On 2017-07-
10:39 AM, Lars Marowsky-Bree wrote:
>
> On 2017-07-14T10:34:35, Mike Lowe wrote:
>
>> Having run ceph clusters in production for the past six years and upgrading
>> from every stable release starting with argonaut to the next, I can honestly
>> say being careful abo
move a host from the root bucket into the correct rack without draining
it and then refilling it, or do I need to reweight the host to 0, move the host
to the correct bucket, and then reweight it back to its correct value?
Any insights here will be appreciated.
Thank you for your time,
Mike Cave
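For what it's worth, CRUSH can move a whole host bucket in one step; a sketch with made-up host and rack names:

    # create the rack bucket first if it doesn't exist yet
    ceph osd crush add-bucket rack3 rack
    ceph osd crush move rack3 root=default
    # move the host (and all its OSDs) under that rack; data rebalances to the
    # new placement on its own, no reweight-to-zero dance required
    ceph osd crush move node12 rack=rack3

Either way expect backfill, since the host's crush location changes.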
us in ceph of RDMA, NVDIMM access using libpmem and SPDK software?
How mature are these technologies in Ceph? Are they ready for production use?
Mike
—
Mike, runs.
> On 7 Aug 2017 at 9:54, Wido den Hollander wrote:
>
>
>> On 3 August 2017 at 15:28, Mike A wrote:
>>
>>
>> Hello
>>
>> Our goal is to make the storage as fast as possible.
>> For now, our configuration of 6 servers looks like this:
>
and client to the Jewel release
Please suggest pros and cons.
—
Mike, runs!
> On 15 Sep 2017 at 18:42, Sage Weil wrote:
>
> On Fri, 15 Sep 2017, Mike A wrote:
>> Hello!
>>
>> We have a ceph cluster based on the Jewel release and one virtualization
>> infrastructure that is using the cluster. Now we are going to add another
>
On 12/20/2017 03:21 PM, Steven Vacaroaia wrote:
> Hi,
>
> I apologize for creating a new thread (I already mentioned my issue in
> another one)
> but I am hoping someone will be able to
> provide clarification / instructions
>
> Looks like the patch for including qfull_time is missing from ker
On 12/25/2017 03:13 PM, Joshua Chen wrote:
> Hello folks,
> I am trying to share my ceph rbd images through the iSCSI protocol.
>
> I am trying iscsi-gateway
> http://docs.ceph.com/docs/master/rbd/iscsi-overview/
>
>
> now
>
> systemctl start rbd-target-api
> is working and I could run gwcli
>
ate iqn.1994-05.com.redhat:15dbed23be9e-ovirt1
> > create iqn.1994-05.com.redhat:a7c1ec3c43f7-ovirt2
> > create iqn.1994-05.com.redhat:67669afedddf-ovirt3
> > create
> iqn.1994-05.com.redhat:2af344ba6ae5-ce
: osd1 down
[90391.238370] libceph: osd1 up
[90391.778979] libceph: osd1 up
Thanks for any help/ideas
Mike
On 10/01/2018 3:52 PM, Linh Vu wrote:
>
> Have you checked your firewall?
>
There are no iptables rules at this time, but connection tracking is
enabled. I would expect errors about running out of table space if that
were an issue.
Thanks
Mike
On 10/01/2018 4:24 PM, Sam Huracan wrote:
> Hi Mike,
>
> Could you show the system log from the moment the OSD went down and up?
OK, so I have no idea how I missed this each time I looked, but the syslog
does show a problem.
I've created the dump file mentioned in the log; it's 29M compressed, so
any on
On 10/01/2018 4:48 PM, Mike O'Connor wrote:
> On 10/01/2018 4:24 PM, Sam Huracan wrote:
>> Hi Mike,
>>
>> Could you show the system log from the moment the OSD went down and up?
So now I know it's a crash; what's my next step? As soon as I put the
system under write load,
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
technique=reed_sol_van
w=8
Thanks
Mike
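For reference, that profile can be inspected or recreated with the erasure-code-profile commands; a sketch using an example profile and pool name (pg counts are arbitrary):

    # show an existing profile
    ceph osd erasure-code-profile get myprofile
    # define one with the parameters listed above
    ceph osd erasure-code-profile set myprofile \
        plugin=jerasure technique=reed_sol_van k=4 m=2
    # use it when creating an erasure-coded pool
    ceph osd pool create ecpool 128 128 erasure myprofile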
On 15/01/2018 7:46 AM, Christian Wuerdig wrote:
> Depends on what you mean by "your pool overloads"? What's your
> hardware setup (CPU, RAM, how many nodes, network etc.)? What can you
> see when you monitor the system resources with atop or the likes?
Single node, 8 core (16 hyperthread) CPU, 32
e "rados ls" but
still get counted as objects in the pool for things like ceph df. This can
make things a little confusing. I verified this each time I did this by
looking in the osd to see what objects are left in the pg. All of them
started with hit\uset.
On a different note, you say
On Tue, Jan 16, 2018 at 9:25 AM, Jens-U. Mozdzen wrote:
> Hello Mike,
>
> Zitat von Mike Lovell :
>
>> On Mon, Jan 8, 2018 at 6:08 AM, Jens-U. Mozdzen wrote:
>>
>>> Hi *,
>>> [...]
>>> 1. Does setting the cache mode to "forward" lead
fd
> cd ..
> set attribute generate_node_acls=1
> cd luns
> create /backstores/block/vmware_5t
>
>
>
>
> On Thu, Jan 4, 2018 at 10:55 AM, Joshua
vering from them OK?)
>
> Any ideas?
Ideas? Yes.
There is a bug which is hitting a small number of systems, and at this
time there is no solution. Issue details are at
http://tracker.ceph.com/issues/22102.
Please submit more details of your problem on the ticket.
Mike
Hi All
Where can I find the source packages that the Proxmox Ceph Luminous was
built from ?
Mike
On 13/02/2018 11:19 AM, Brad Hubbard wrote:
> On Tue, Feb 13, 2018 at 10:23 AM, Mike O'Connor wrote:
>> Hi All
>>
>> Where can I find the source packages that the Proxmox Ceph Luminous was
>> built from ?
> You can find any source packages we release on http://d
On 02/13/2018 01:09 PM, Steven Vacaroaia wrote:
> Hi,
>
> I noticed a new ceph kernel (4.15.0-ceph-g1c778f43da52) was made available
> so I have upgraded my test environment
>
...
>
> It will be appreciated if someone can provide instructions / steps for
> upgrading the kernel without break
Hey Henry, Wait a sec… That R101 is for our Nagios node, not a part of the Ceph
monitoring nodes. So both the R133 and the one R101 should have redundant
power supplies. Make sense?
Cheers
Mike
> On Sep 7, 2016, at 10:55 AM, Henry Figueroa
> wrote:
>
> Mike
> The monitorin
Hi,
I just wanted to get a sanity check if possible. I apologize if my
questions are stupid; I am still new to Ceph and feel uneasy about adding
new nodes.
Right now we have one OSD node with 10 OSD disks (plus 2 disks for
caching) and this week we are going to add two more nodes with the same
ight 3.637
> item osd.11 weight 3.637
> }
> root default {
> id -1 # do not change unnecessarily
> # weight 94.559
> alg straw
> hash 0 # rjenkins1
> item tesla weight 36.369
> item faraday weight 32.732
>
ter is now working!
So it seems the storage traffic was being dropped/blocked by something on
our ISP side.
Cheers,
Mike
On Mon, Oct 10, 2016 at 5:22 PM, Goncalo Borges <
goncalo.bor...@sydney.edu.au> wrote:
> Hi Mike...
>
> I was hoping that someone with a bit more experience would a
Hi David,
VLAN connectivity was good; the nodes could talk to each other on either
their private or public network. I really think they were doing something
weird across the fiber, not an issue with Ceph or how it was set up.
Thanks for the help!
Cheers,
Mike
On Tue, Oct 11, 2016 at 2:39 PM
Hi Chris,
That's an interesting point, I bet the managed switches don't have jumbo
frames enabled.
I think I am going to leave everything at our colo for now.
Cheers,
Mike
On Tue, Oct 11, 2016 at 2:42 PM, Chris Taylor wrote:
>
>
> I see on this list often that peering is
Hi all,
I need some help yet again... With my cluster back up and running Ceph
10.2.3, I am having problems again mounting an RBD image under XenServer.
I had this working before I broke everything and started over (previously
on Ceph 10.1). I made sure to set tunables to legacy and disabled
choose
ere a way to create rbd images with format 1 without having to convert
from format 2?
Cheers
Mike
On Wed, Oct 12, 2016 at 12:37 PM, Mike Jacobacci wrote:
> Hi all,
>
> I need some help yet again... With my cluster backup and running Ceph
> 10.2.3, I am having problems again moun
> Yes, I was aware of David Disseldorp and Mike Christie's efforts to upstream
> the patches from a while back. I understand there will be a move
> away from the SUSE target_mod_rbd to support a more generic device
> handling but do not know what the current status of this work is. We
On 10/17/2016 02:40 PM, Mike Christie wrote:
> For the (non target_mode approach), everything that is needed for basic
Oops. Meant to write for the non target_mod_rbd approach.
metadata to SSD
Currently, the cephfs_metadata pool sits on the same spinning SATA disks
as the data pool. Is this the bottleneck? Is moving the metadata to
SSD a solution?
Or is it both?
Your experience and insight are highly appreciated.
Thanks,
Mike
?
Regards,
Mike
Regards,
Eric
On Sun, Nov 20, 2016 at 3:24 AM, Mike Miller wrote:
Hi,
reading a big file 50 GB (tried more too)
dd if=bigfile of=/dev/zero bs=4M
in a cluster with 112 SATA disks across 10 OSD nodes (6272 pgs, replication 3) gives
me only about *122 MB/s* read speed in single thread.
iment with readahead on cephfs?
Mike
On 11/21/16 12:33 PM, Eric Eastman wrote:
Have you looked at your file layout?
On a test cluster running 10.2.3 I created a 5GB file and then looked
at the layout:
# ls -l test.dat
-rw-r--r-- 1 root root 524288 Nov 20 23:09 test.dat
# getfattr -n ceph.f
s/bdi/ceph-*/read_ahead_kb
-> 131072
And YES! I am so happy; dd of a 40GB file does a lot more in a single thread now,
much better.
rasize= 67108864 222 MB/s
rasize=134217728 360 MB/s
rasize=268435456 474 MB/s
Thank you all very much for bringing me on the right track, highly
appreciated.
Regar
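For anyone else chasing this, the readahead can also be set at mount time for the kernel client; a sketch (monitor address, mount point, and secret file are examples):

    # rasize is the client readahead window in bytes; 128 MiB here
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=134217728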
de at a time
5. once all nodes back up, run "ceph osd unset noout"
6. bring VMs back online
Does this sound correct?
Cheers,
Mike
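The noout part of that sequence, sketched with standard commands (the systemd target assumes a systemd deployment; adjust for your init system):

    # before taking a node down: keep CRUSH from marking its OSDs out
    ceph osd set noout
    systemctl stop ceph-osd.target    # or simply reboot the node
    # ...patch/reboot, wait for OSDs to rejoin and PGs to return to active+clean...
    ceph -s
    # only after the last node is done:
    ceph osd unset noout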
ectly again (note sdb
and sdc are ssd for cache)? I am not sure which disk maps to ceph-osd@0 and
so on. Also, can I add them to /etc/fstab as a workaround?
Cheers,
Mike
On Tue, Nov 29, 2016 at 10:41 AM, Mike Jacobacci wrote:
> Hello,
>
> I would like to install OS updates on the ce
b80ceff106 /dev/sdc
Do I need to run that again?
Cheers,
Mike
On Tue, Nov 29, 2016 at 4:13 PM, Sean Redmond
wrote:
> Normally they mount based upon the GPT label; if it's not working you can
> mount the disk under /mnt and then cat the file called whoami to find out
> the osd n
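A sketch of that check, plus the tool that prints the whole mapping (device names are examples):

    # mount the data partition somewhere temporary and read the osd id it belongs to
    mount /dev/sdd1 /mnt && cat /mnt/whoami && umount /mnt
    # or let ceph-disk report which partitions belong to which osd and journal
    ceph-disk list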
Hi John,
Thanks, I wasn't sure if something happened to the journal partitions or
not.
Right now, the ceph-osd.0-9 services are back up and the cluster health is
good, but none of the ceph-disk@dev-sd* services are running. How can I
get the Journal partitions mounted again?
Cheers,
Mik
9
My fsid in ceph.conf is:
fsid = 75d6dba9-2144-47b1-87ef-1fe21d3c58a8
I don't know why the fsid would change or be different. I thought I had a
basic cluster setup, I don't understand what's going wrong.
Mike
On Tue, Nov 29, 2016 at 5:15 PM, Mike Jacobacci wrote:
>
I forgot to add:
On Tue, Nov 29, 2016 at 6:28 PM, Mike Jacobacci wrote:
> So it looks like the journal partition is mounted:
>
> ls -lah /var/lib/ceph/osd/ceph-0/journal
> lrwxrwxrwx. 1 ceph ceph 9 Oct 10 16:11 /var/lib/ceph/osd/ceph-0/journal
> -> /dev/sdb1
>
&g
-bea5-d103fe1fa9c9, osd.9
On Tue, Nov 29, 2016 at 6:32 PM, Mike Jacobacci wrote:
> I forgot to add:
>
>
> On Tue, Nov 29, 2016 at 6:28 PM, Mike Jacobacci wrote:
>
>> So it looks like the journal partition is mounted:
>>
>> ls -lah /var/lib/ceph/osd/ceph-0/journal
d failed failed   Ceph disk activation: /dev/sdc3
● ceph-disk@dev-sdc4.service
loaded failed failed   Ceph disk activation: /dev/sdc4
From my understanding, the disks have already been activated... Should
these services even be running or enabled?
Mike
On Tue, Nov 29, 2016 at 6:33
Hi Vasu,
Thank you that is good to know!
I am running ceph version 10.2.3 and CentOS 7.2.1511 (Core) minimal.
Cheers,
Mike
On Tue, Nov 29, 2016 at 7:26 PM, Vasu Kulkarni wrote:
> you can ignore that, it's a known issue:
> http://tracker.ceph.com/issues/15990
>
> regardless wah
as created,
would it be a problem to disable it now?
Cheers,
Mike
Hi John,
Thanks, that makes sense... So I take it if I use the same IP for the bond,
I shouldn't run into the issues I ran into last night?
Cheers,
Mike
On Wed, Nov 30, 2016 at 9:55 AM, John Petrini wrote:
> For redundancy I would suggest bonding the interfaces using LACP; that way
>
performance, so I can come back to
jumbo frames later.
Cheers,
Mike
On Wed, Nov 30, 2016 at 10:09 AM, John Petrini
wrote:
> Yes that should work. Though I'd be wary of increasing the MTU to 9000 as
> this could introduce other issues. Jumbo Frames don't provide a very
> sign
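If you do come back to jumbo frames later, a quick way to prove the whole path really carries them (interface name and peer address are examples):

    ip link set dev bond0 mtu 9000
    # 8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header;
    # -M do forbids fragmentation, so this only succeeds if every hop passes MTU 9000
    ping -M do -s 8972 -c 3 10.0.0.2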
things turn out for you.
Regards,
Mike
We also went the rbd way before this, but for large rbd images we much
prefer cephfs instead.
Regards,
Mike
.
Cheers,
Mike
ing.
- mike
On 12/13/16 2:37 AM, V Plus wrote:
The same..
see:
A: (g=0): rw=read, bs=5M-5M/5M-5M/5M-5M, ioengine=*libaio*, iodepth=1
...
fio-2.2.10
Starting 16 processes
A: (groupid=0, jobs=16): err= 0: pid=27579: Mon Dec 12 20:36:10 2016
mixed: io=122515MB, bw=6120.3MB/s, iops=1224, runt= 20018msec
It looks like the libvirt (2.0.0-10.el7_3.2) that ships with CentOS 7.3 is
broken out of the box when it comes to hot-plugging new virtio-scsi devices
backed by rbd and cephx auth. If you use OpenStack, cephx auth, and CentOS,
I'd caution against the upgrade to CentOS 7.3 right now.
yum downgrade
twice to get to something from the 1.2 series.
> On Dec 19, 2016, at 6:40 PM, Jason Dillaman wrote:
>
> Do you happen to know if there is an existing bugzilla ticket against
> this issue?
>
> On Mon, Dec 19, 2016 at 3:46 PM, Mike Lowe wrote:
>> It looks
Hi,
Happy New Year!
Can anyone point me to specific walkthrough / howto instructions how to
move cephfs metadata to SSD in a running cluster?
How is crush to be modified step by step such that the metadata migrate
to SSD?
Thanks and regards,
Mike
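Not a full walkthrough, but on a Luminous or newer cluster the shape of it is roughly this, assuming the SSD OSDs report device class 'ssd' (rule name is an example):

    # create a replicated crush rule restricted to the ssd device class
    ceph osd crush rule create-replicated ssd-rule default host ssd
    # point the metadata pool at that rule; the data migrates by itself
    ceph osd pool set cephfs_metadata crush_rule ssd-rule
    # watch the resulting backfill until it finishes
    ceph -s

On pre-Luminous clusters the same effect means editing the CRUSH map by hand to add an SSD-only root and rule.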
will metadata on SSD improve latency significantly?
Mike
On 1/2/17 11:50 AM, Wido den Hollander wrote:
On 2 January 2017 at 10:33, Shinobu Kinjo wrote:
I've never done a migration of cephfs_metadata from spindle disks to
SSDs, but logically you could achieve this in two phases.
Wido, all,
can you point me to the "recent benchmarks" so I can have a look?
How do you define "performance"? I would not expect CephFS throughput to
change, but it is surprising to me that metadata on SSD would have no
measurable effect on latency.
- mike
On 1/3/1
On 07/15/2018 08:08 AM, Wladimir Mutel wrote:
> Hi,
>
> I cloned an NTFS filesystem with bad blocks from a USB HDD onto a Ceph RBD volume
> (using ntfsclone, so the copy has sparse regions), and decided to clean
> the bad blocks within the copy. I ran chkdsk /b from Windows and it fails on
> free space verification.
On 07/28/2018 03:59 PM, Wladimir Mutel wrote:
> Dear all,
>
> I want to share some experience of upgrading my experimental 1-host
> Ceph cluster from v13.2.0 to v13.2.1.
> First, I fetched new packages and installed them using 'apt
> dist-upgrade', which went smoothly as usual.
> Then I no
with the OpenStack Foundation in providing a Ceph
Day at the same venue where the OpenStack Summit will be taking place. You do
need to purchase a Ceph Day pass even if you have a full-access pass for the
OpenStack Summit.
--
Mike Perez (thingee)
rsity that helped
Sage with his research to start it all, and how Ceph enables Genomic
research at the campus today!
Registration is up and the schedule is posted:
https://ceph.com/cephdays/ceph-day-silicon-valley-university-santa-cruz-silicon-valley-campus/
--
Mike Perez (thingee)
Meeting ID: 908675367
Thanks!
--
Mike Perez (thingee)
perform other than what I found in the docs?
Please let me know if I can provide any additional data.
Cheers,
Mike
preciate it.
There should be no data in the system yet... unless I'm missing something.
Thanks,
Mike
-Original Message-
From: ceph-users on behalf of Serkan Çoban
Date: Wednesday, September 19, 2018 at 6:25 AM
To: "jaszewski.ja...@gmail.com"
Cc: ceph-users , "ceph-use
I'll bump this one more time in case someone who knows why this is happening
didn't see the thread yesterday.
Cheers,
Mike
-Original Message-
From: ceph-users on behalf of Cave Mike
Date: Wednesday, September 19, 2018 at 9:25 AM
Cc: ceph-users
Subject: Re: [ceph-users]
Hi Nitin,
I'm still receiving slides from the speakers but I think I will start posting
them tomorrow. I will reply back when this is done. Thanks!
--
Mike Perez (thingee)
On 16:44 Sep 20, Kamble, Nitin A wrote:
> Hi Mike,
> Are the slides of presentations available anywhere?
On 09/24/2018 05:47 AM, Florian Florensa wrote:
> Hello there,
>
> I am still in the process of preparing a deployment with iSCSI gateways
> on Ubuntu, but the latest LTS of Ubuntu ships with kernel 4.15,
> and I don't see support for iSCSI.
> What kernel are people using for this?
> - Mainlin
I'm sorry, I completely missed the text you wrote at the top of the reply. At first
it appeared that you had just quoted a previous reply without adding anything.
My mistake!
Thank you for the answer as it completely correlates with what I've found after
doing some other digging.
Cheers,
On 10/09/2018 05:09 PM, Brady Deetz wrote:
> I'm trying to replace my old single-point-of-failure iSCSI gateway with
> the shiny new tcmu-runner implementation. I've been fighting a Windows
> initiator all day. I haven't tested any other initiators, as Windows is
> currently all we use iSCSI for.
>
On 10/10/2018 08:21 AM, Steven Vacaroaia wrote:
> Hi Jason,
> Thanks for your prompt responses
>
> I have used the same iscsi-gateway.cfg file - no security changes - just
> added the prometheus entry.
> There is no iscsi-gateway.conf, but the gateway.conf object is created
> and has the correct entries.
>
> i