On Tue, Nov 26, 2013 at 06:50:33AM -0800, Sage Weil wrote:
> If syncfs(2) is not present, we have to use sync(2). That means you have
> N daemons calling sync(2) to force a commit on a single fs, but all other
> mounted fs's are also synced... which means N times the sync(2) calls.
>
> Fortunat
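A minimal sketch of the fallback Sage describes, assuming a glibc that may or may not expose syncfs(2) and a hypothetical OSD data directory; where syncfs is missing we drop back to sync(2), which is exactly the N-daemons-sync-everything case above:

--- cut ---
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# Hypothetical OSD data directory, used only to obtain an fd on the right fs.
fd = os.open("/var/lib/ceph/osd/ceph-0", os.O_RDONLY)
try:
    if hasattr(libc, "syncfs"):
        libc.syncfs(fd)   # flush only the filesystem backing this OSD
    else:
        libc.sync()       # flush every mounted filesystem
finally:
    os.close(fd)
--- cut ---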
2013/11/26 Derek Yarnell
> On 11/26/13, 4:04 AM, Mihály Árva-Tóth wrote:
> > Hello,
> >
> > Is there any idea? I don't know whether this is an S3 API limitation or a
> > missing feature?
> >
> > Thank you,
> > Mihaly
>
> Hi Mihaly,
>
> If all you are looking for is the current size of the bucket this can be
> fo
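One common way to read a bucket's current size on the gateway side is radosgw-admin bucket stats; a minimal sketch, where the bucket name and the exact JSON layout are assumptions to check against your radosgw version:

--- cut ---
import json
import subprocess

out = subprocess.check_output(
    ["radosgw-admin", "bucket", "stats", "--bucket=mybucket"])  # hypothetical bucket
stats = json.loads(out)
usage = stats.get("usage", {}).get("rgw.main", {})
print("objects: %s, size_kb: %s" % (usage.get("num_objects"), usage.get("size_kb")))
--- cut ---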
Thanks a lot... after updating to ceph-deploy 1.3.3, everything is working fine.
Regards,
Upendra Yadav
DFS

On Wed, 27 Nov 2013 02:22:00 +0530 Alfredo Deza wrote:
> ceph-deploy 1.3.3 just got released and you should not see this with the new version.
> On Tue, Nov 26, 2013 at 9:56 AM, Alfredo Dez
Hi,
No solution so far, but I also asked in IRC and linuxkidd told me they
were looking for a workaround.
Micha Krause
> The largest group of threads is those from the network messenger — in
> the current implementation it creates two threads per process the
> daemon is communicating with. That's two threads for each OSD it
> shares PGs with, and two threads for each client which is accessing
> any data on that OSD
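To put rough numbers on that, a back-of-envelope sketch where the peer and client counts are assumptions for illustration:

--- cut ---
# Thread count estimate for a single OSD under the simple messenger, using
# the "two threads per connection" rule of thumb described above.
peer_osds = 30         # OSDs this daemon shares PGs with (assumed)
active_clients = 50    # clients currently talking to this OSD (assumed)
threads_per_connection = 2

messenger_threads = threads_per_connection * (peer_osds + active_clients)
print("~%d messenger threads (plus internal worker threads)" % messenger_threads)
--- cut ---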
Recently I wanted to test the performance benefit of rbd cache, but I could not
see an obvious benefit on my setup, so I tried to make sure rbd cache was
enabled; however, I cannot get the rbd cache perf counters. In order to identify
how to enable the rbd cache perf counters, I set up a simple environment (one client h
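For what it's worth, one way to check whether the cache counters are being published is to enable the client admin socket and read perf dump through it. The config lines, socket path, and counter section names below are assumptions and vary by version:

--- cut ---
# Assumed ceph.conf [client] settings needed to expose the counters:
#   rbd cache = true
#   admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
import json
import subprocess

sock = "/var/run/ceph/ceph-client.admin.asok"   # hypothetical socket path
out = subprocess.check_output(["ceph", "--admin-daemon", sock, "perf", "dump"])
perf = json.loads(out)

# librbd and its object cacher publish counters under version-dependent
# section names; just list whatever looks relevant.
for section in sorted(perf):
    if "librbd" in section or "objectcacher" in section:
        print(section, perf[section])
--- cut ---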
> Thanks a lot, Jens. Do I have to have cephx authentication enabled? Did you
> enable it? Which user from the node that contains cinder-api or glance-api
> are you using to create volumes and images? The documentation at
> http://ceph.com/docs/master/rbd/rbd-openstack/ mentions creating new
Hi Karan
your cinder.conf looks sensible to me; I have posted mine here:
--- cut ---
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy
Hi,
Google's leveldb was too slow for Facebook, so they created RocksDB
(http://rocksdb.org/); it may be interesting for Ceph. It's already
production quality.
Greets,
Stefan
there are some guidelines on this. Thanks in advance!
Regards,
Johannes
Hi,
we have a setup of 4 Servers running ceph and radosgw. We use it as an internal
S3 service for our files. The Servers run Debian Squeeze with Ceph 0.67.4.
The cluster has been running smoothly for quite a while, but we are currently
experiencing issues with the radosgw. For some files the
I can recommend Zabbix for this; I use it myself.
You just install the Zabbix agent on the OSD node - it will automatically
discover mounted file systems and report usage on them (OSD mounts as
well); a nice GUI is available if needed.
Sure, you need to set up the Zabbix server first, but it is easy and worth it!
Z
Thanks Jens / Sebastien
It worked for me now. Thanks a lot for your suggestions; they were worth it.
Many Thanks
Karan Singh
- Original Message -
From: "Jens-Christian Fischer"
To: "Karan Singh"
Cc: "Sebastien Han" , ceph-users@lists.ceph.com
Sent: Wednesday, 27 November, 2013
uck!
Regards,
Johannes
I was going to add something to the bug tracker, but it looks to me
that contributor email addresses all have public (unauthenticated)
visibility? Can this be set in user preferences?
Many thanks!
On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
wrote:
>> The largest group of threads is those from the network messenger — in
>> the current implementation it creates two threads per process the
>> daemon is communicating with. That's two threads for each OSD it
>> shares PGs with, and t
On 11/27/2013 09:25 AM, Gregory Farnum wrote:
On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
wrote:
The largest group of threads is those from the network messenger — in
the current implementation it creates two threads per process the
daemon is communicating with. That's two threads f
On Wed, Nov 27, 2013 at 7:28 AM, Mark Nelson wrote:
> On 11/27/2013 09:25 AM, Gregory Farnum wrote:
>>
>> On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
>> wrote:
The largest group of threads is those from the network messenger — in
the current implementation it creates tw
On activating the cluster's ceph disks using the command ceph-deploy osd activate ceph-node3:/home/ceph/osd2 I am getting:
[ceph-node3][DEBUG ] connected to host: ceph-node3
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distr
On Wed, Nov 27, 2013 at 04:34:00PM +0100, Gregory Farnum wrote:
> On Wed, Nov 27, 2013 at 7:28 AM, Mark Nelson wrote:
> > On 11/27/2013 09:25 AM, Gregory Farnum wrote:
> >>
> >> On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
> >> wrote:
>
> The largest group of threads is those
On 11/26/13, 3:31 PM, Shain Miley wrote:
> Micha,
>
> Did you ever figure out a work around for this issue?
>
> I also had plans of using s3cmd to put, and recursively set ACLs on a
> nightly basis... however we are getting the 403 errors as well during our
> testing.
>
> I was just wonderin
I am working with a small test cluster, but the problems described
here will remain in production. I have an external Fibre Channel storage
array and have exported two 3TB disks (just as JBODs). I can use
ceph-deploy to create an OSD for each of these disks on a node named
Vashti. So far ever
Derek,
That's great... I am hopeful it makes it into the next release too... it will
solve several issues we are having trying to work around radosgw bucket and
object permissions when there are multiple users writing files to our buckets.
And with the 's3cmd setacl' failing... at this point I
>For ~$67 you get a mini-itx motherboard with a soldered on 17W dual core
>1.8GHz ivy-bridge based Celeron (supports SSE4.2 CRC32 instructions!).
>It has 2 standard DIMM slots so no compromising on memory, on-board gigabit
>ethernet, 3 3Gb/s + 1 6Gb/s SATA, and a single PCIe slot for an additional
Thanks. I may have to go this route, but it seems awfully fragile. One
stray command could destroy the entire cluster, replicas and all. Since
all disks are visible to all nodes, any one of them could mount
everything, corrupting all OSDs at once.
Surely other people are using external FC dr
Is LUN masking an option in your SAN?
On 11/27/13, 2:34 PM, "Kevin Horan" wrote:
>Thanks. I may have to go this route, but it seems awfully fragile. One
>stray command could destroy the entire cluster, replicas and all. Since
>all disks are visible to all nodes, any one of them could mount
>eve
Ah, that sounds like what I want. I'll look into that, thanks.
Kevin
On 11/27/2013 11:37 AM, LaSalle, Jurvis wrote:
Is LUN masking an option in your SAN?
On 11/27/13, 2:34 PM, "Kevin Horan" wrote:
Thanks. I may have to go this route, but it seems awfully fragile. One
stray command could de
Dear Ceph Experts,
our Ceph cluster suddenly went into a state of OSDs constantly having
blocked or slow requests, rendering the cluster unusable. This happened
during normal use, there were no updates, etc.
All disks seem to be healthy (smartctl, iostat, etc.). A complete
hardware reboot includ
On Wed, Nov 27, 2013 at 4:46 AM, Sebastian wrote:
> Hi,
>
> we have a setup of 4 Servers running ceph and radosgw. We use it as an
> internal S3 service for our files. The Servers run Debian Squeeze with Ceph
> 0.67.4.
>
> The cluster has been running smoothly for quite a while, but we are curre
On Wed, Nov 27, 2013 at 12:24 AM, Mihály Árva-Tóth
wrote:
> 2013/11/26 Derek Yarnell
>>
>> On 11/26/13, 4:04 AM, Mihály Árva-Tóth wrote:
>> > Hello,
>> >
>> > Is there any idea? I don't know whether this is an S3 API limitation or a
>> > missing feature?
>> >
>> > Thank you,
>> > Mihaly
>>
>> Hi Mihaly,
>>
>>
Hey,
What replication factor do you have? If it is three, 1.5k
IOPS may be a little bit high for 36 disks, and your OSD IDs look a bit
suspicious - there should not be 60+ OSDs based on the
numbers below.
On 11/28/2013 12:45 AM, Oliver Schulz wrote:
> Dear Ceph Experts,
>
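A rough version of the calculation behind that remark; the per-disk IOPS, replication factor, and journal penalty are illustrative assumptions:

--- cut ---
# Sustainable client write IOPS for spinning disks, assuming replication 3
# and journals co-located on the data disks.
disks = 36
iops_per_disk = 100      # typical 7.2k SATA drive (assumed)
replication = 3
journal_penalty = 2      # journal write + data write on the same disk

client_write_iops = disks * iops_per_disk / (replication * journal_penalty)
print("~%d sustainable client write IOPS" % client_write_iops)   # ~600
--- cut ---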
Sounds like what I was having starting a couple of days ago. I played
around with the conf, took suspect OSDs in and out, and ran full SMART
tests on them that came back perfectly fine, network tests that
came back at 110MB/s on all channels, and OSD benches that reported all
OSDs managing 80+
> How much can performance be improved by using SSDs to store the journals?
You will see roughly twice the throughput unless you are using btrfs
(still improved but not as dramatic). You will also see lower latency
because the disk head doesn't have to seek back and forth between
journal and data par
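A quick illustration of where the "roughly twice" figure comes from on non-btrfs filesystems; the per-disk bandwidth is an assumed number:

--- cut ---
# Rough write-throughput model for one OSD on XFS/ext4, comparing a journal
# co-located with the data vs. a journal on a separate SSD.
disk_bw_mb = 120.0                   # sequential write speed of one spinner (assumed)

colocated_journal = disk_bw_mb / 2   # every write hits the disk twice
ssd_journal = disk_bw_mb             # data disk only absorbs the data write

print("co-located journal: ~%.0f MB/s per OSD" % colocated_journal)
print("SSD journal:        ~%.0f MB/s per OSD" % ssd_journal)
--- cut ---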
I just pushed a fix for review for the s3cmd --setacl issue. It should
land in a stable release soonish.
Thanks,
Yehuda
On Wed, Nov 27, 2013 at 10:12 AM, Shain Miley wrote:
> Derek,
> That's great...I am hopeful it makes it into the next release too...it will
> solve several issues we are having,
Hi all,
I'd like to use Ceph to solve two problems at my company: to be an S3 mock
for testing our application, and for sharing test artifacts in a
peer-to-peer fashion between developers.
We currently store immutable binary blobs ranging from a few kB to several
hundred MB in S3, which means bot
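To give an idea of what we have in mind for the S3 mock, a minimal sketch of a test talking to a radosgw S3 endpoint through boto; the endpoint, credentials, and object names are placeholders:

--- cut ---
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    host="rgw.example.internal",          # hypothetical radosgw endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket("test-artifacts")
key = bucket.new_key("blobs/artifact-001.bin")
key.set_contents_from_filename("artifact-001.bin")
print(key.generate_url(3600))             # short-lived URL for sharing
--- cut ---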
On 11/27/2013 07:21 AM, James Pearce wrote:
I was going to add something to the bug tracker, but it looks to me that
contributor email addresses all have public (unauthenticated)
visibility? Can this be set in user preferences?
Yes, it can be hidden here: http://tracker.ceph.com/my/account
On 11/26/2013 02:22 PM, Stephen Taylor wrote:
From ceph-users archive 08/27/2013:
On 08/27/2013 01:39 PM, Timofey Koolin wrote:
Is there a way to know the real size of an rbd image and of rbd snapshots?
rbd ls -l shows the declared size of the image, but I want to know the real size.
You can sum the sizes of the
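For reference, a sketch of summing the allocated extents reported by rbd diff, using the python-rbd bindings; the pool and image names are assumptions, and diff_iterate availability depends on the librbd version:

--- cut ---
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")        # assumed pool name
image = rbd.Image(ioctx, "myimage")      # assumed image name
try:
    used = [0]
    def extent_cb(offset, length, exists):
        # Count only extents that actually exist on disk.
        if exists:
            used[0] += length
    image.diff_iterate(0, image.size(), None, extent_cb)
    print("allocated bytes: %d" % used[0])
finally:
    image.close()
    ioctx.close()
    cluster.shutdown()
--- cut ---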
On 11/27/2013 01:31 AM, Shu, Xinxin wrote:
Recently I wanted to test the performance benefit of rbd cache, but I could
not see an obvious benefit on my setup, so I tried to make sure rbd
cache was enabled; however, I cannot get the rbd cache perf counters. In order to
identify how to enable the rbd cache perf co
On 11/26/2013 01:14 AM, Ta Ba Tuan wrote:
Hi James,
The problem is: why does Ceph not recommend using the device UUID in
ceph.conf, when the above error can occur?
I think with the newer-style configuration, where your disks have
partition ids setup by ceph-disk instead of entries in ceph.conf, it
does
[re-adding the list]
It's not related to the version of qemu. When qemu starts up, it
creates the admin socket file, but it needs write access to do that.
Does the user running qemu (libvirt-qemu on ubuntu) have write access
to /var/run/ceph? It may be unix permissions blocking it, or apparmor
o
2013/11/27 Yehuda Sadeh
> On Wed, Nov 27, 2013 at 12:24 AM, Mihály Árva-Tóth
> wrote:
> > 2013/11/26 Derek Yarnell
> >>
> >> On 11/26/13, 4:04 AM, Mihály Árva-Tóth wrote:
> >> > Hello,
> >> >
> >> > Is there any idea? I don't know whether this is an S3 API limitation or a
> >> > missing feature?
> >> >
> >>