I've uploaded the complete log [0]. Just as a warning, it's about 70MB.
[0] damoxc.net/ceph-mds.ceph2.log.1.gz
On 21 August 2013 07:07, Gregory Farnum wrote:
> Do you have full logs from the beginning of replay? I believe you
> should only see this when a client is reconnecting to the MDS with
> f
Hello,
I'm trying to upload a multipart file to the radosgw (v 0.67.1) using the
Amazon S3 API (v 1.5.3) following the example in
http://docs.aws.amazon.com/AmazonS3/latest/dev/llJavaUploadFile.html.
The upload seems to go well, there are no errors in the logs and if I use
s3cmd to check the fil
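For reference, the kind of s3cmd checks I mean (bucket/object names and the chunk size here are illustrative, not the real ones; the actual upload goes through the Java SDK):

  s3cmd ls s3://testbucket/
  s3cmd info s3://testbucket/bigfile.bin        # size and ETag of the assembled object
  # for comparison, a multipart upload done with s3cmd itself:
  s3cmd put --multipart-chunk-size-mb=15 bigfile.bin s3://testbucket/bigfile.bin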
Thank you, Greg. It makes sense that ceph-fuse works this way.
I am experiencing this problem, for example, when I run rsync on a folder with
lots of small files in Ceph (~300GB in 10kB files). Ceph becomes very slow
during that and everything crashes like dominoes.
I will look into other ways of reading
Is there an issue ID associated with this? For those of us who made the
long jump and want to avoid any unseen problems.
Thanks,
Jeff
On Tue, Aug 20, 2013 at 7:57 PM, Sage Weil wrote:
> We've identified a problem when upgrading directly from bobtail to
> dumpling; please wait until 0.67.2 bef
Mark,
I tried with journal aio = true and op thread = 4, but it made little
difference. Then I tried to enlarge the read-ahead value on both the OSD block
devices and the CephFS client. It did improve overall performance somewhat,
especially the sequential read performance, but it still did not help much with the
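For reference, the kind of read-ahead knobs I mean (device name and values are illustrative, and I'm assuming the kernel CephFS client here):

  # larger read-ahead on each OSD data disk (value is in 512-byte sectors)
  blockdev --setra 4096 /dev/sdb
  cat /sys/block/sdb/queue/read_ahead_kb   # same setting, reported in KB

  # kernel CephFS client: readahead window in bytes via the rasize mount option
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secret=XXXX,rasize=4194304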
Hi. Thanks for the patch. But after patching the Ceph source and installing it,
I don't have the ceph-disk or ceph-deploy commands.
I did the following steps:
git clone --recursive https://github.com/ceph/ceph.git
patch -p0 <
./autogen.sh
./configure
make
make install
What am I doing wrong?
-Original Message-
http://tracker.ceph.com/issues/6057
Ian R. Colle
Director of Engineering, Inktank
Twitter: ircolle
LinkedIn: www.linkedin.com/in/ircolle
On Aug 21, 2013, at 6:41, Jeff Bachtel wrote:
> Is there an issue ID associated with this? For those of us who made the long
> jump and want to avoid any uns
Hi All,
I'm deploying a small Ceph cluster (1 MON, 4 OSDs, 1 MDS) using a modified
version of the eNovance Puppet modules
(https://github.com/enovance/puppet-ceph) (I'm using Dumpling on Ubuntu 12.04
instead of Bobtail on Debian).
All goes well until the MDS is deployed and I get the following er
On Wed, Aug 21, 2013 at 9:33 AM, Pavel Timoschenkov
wrote:
> Hi. Thanks for the patch. But after patching the Ceph source and installing it,
> I don't have the ceph-disk or ceph-deploy commands.
> I did the following steps:
> git clone --recursive https://github.com/ceph/ceph.git
> patch -p0 <
> ./autogen.sh
> ./conf
Sounds like the puppet scripts haven't put the client.admin keyring on that
node, or it's in the wrong place.
Alternatively, there's a different keyring they're supposed to be using but
it's not saying so in the command.
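A minimal sketch of what to check on that node (paths assume the default locations; your Puppet layout may differ):

  ls -l /etc/ceph/ceph.client.admin.keyring
  # point the CLI at the keyring explicitly to rule out a path problem
  ceph --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring health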
-Greg
On Wednesday, August 21, 2013, Stroppa Daniele (strp) wrote:
> Hi Al
On Wednesday, August 21, 2013, Petr Soukup wrote:
> Thank you, Greg. It makes sense that ceph-fuse works this way.
> I am experiencing this problem, for example, when I run rsync on a folder with
> lots of small files in Ceph (~300GB in 10kB files). Ceph becomes very slow
> during that and everything cr
On 08/21/2013 07:58 AM, Da Chun Ng wrote:
> Mark,
>
> I tried with journal aio = true and op thread = 4, but it made little
> difference.
> Then I tried to enlarge the read-ahead value on both the OSD block devices
> and the CephFS client. It did improve overall performance somewhat, especially
> the sequent
Hi,
Is it possible to have more than one CephFS filesystem per Ceph cluster?
In the default configuration, a ceph cluster has got only one filesystem, and
you can mount that or nothing. Is it possible somehow to have several distinct
filesystems per cluster, preferably with access controls that
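To make the use case concrete, a sketch of the nearest thing I can picture today with a single filesystem, i.e. mounting separate directories per tenant (monitor address, paths and secret are made up):

  # mount only a subtree per tenant instead of a second filesystem
  mount -t ceph 10.0.0.1:6789:/tenant-a /mnt/tenant-a -o name=admin,secret=XXXX
  mount -t ceph 10.0.0.1:6789:/tenant-b /mnt/tenant-b -o name=admin,secret=XXXX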
Hey Samuel,
I had a good run on the production-cluster with it and unfortunately, it
still doesn't seem to have solved the problem. It seemed OK for a while
and individual OSD CPU-usage seemed quite low, but as the cluster's load
increased during the day, things got slower again. Write-performan
There haven't been any significant osd side changes that I can think
of. Is cpu usage still high? If so, can you post the profiler
results again?
-Sam
On Wed, Aug 21, 2013 at 12:02 PM, Oliver Daudey wrote:
> Hey Samuel,
>
> I had a good run on the production-cluster with it and unfortunately, i
Files are subdivided into folders with 1000 files in each (/img/10/21/10213456.jpg
etc.) to increase performance.
I just discovered that it would make much more sense to read the images in the
webserver using radosgw (S3 API?) - if there is a problem on the Ceph server, it
shouldn't affect connected clien
On Wed, Aug 21, 2013 at 11:33 AM, Guido Winkelmann
wrote:
> Hi,
>
> Is it possible to have more than one CephFS filesystem per Ceph cluster?
>
> In the default configuration, a ceph cluster has got only one filesystem, and
> you can mount that or nothing. Is it possible somehow to have several dis
On Wed, Aug 21, 2013 at 12:20 PM, Petr Soukup wrote:
> Files are subdivided into folders with 1000 files in each
> (/img/10/21/10213456.jpg etc.) to increase performance.
In that case the stability issues are probably with your OSDs being
overloaded by write requests that aren't being appropriately
Hey Samuel,
CPU-usage still seems a bit higher, but not always equally on every OSD.
I profiled the node with the most CPU-usage on the OSD. Note the
libleveldb-related stuff right at the top. The Cuttlefish-OSD doesn't
show those at all. Could those be related to the problem?
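For completeness, the sort of commands behind the profile (I'm assuming perf here; the pid lookup and duration are illustrative):

  # sample the busiest ceph-osd for 60 seconds, with call graphs
  perf record -g -p $(pidof -s ceph-osd) -- sleep 60
  perf report --stdio | head -50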
OSD version 0.6
Hello,
I'm having issues with setting CORS on Dumpling. It seems like it's not doing
anything.
I have the following CORS rule on test1 bucket:
  AllowedMethod: GET, POST
  AllowedOrigin: http://a.a.a
  AllowedHeader: *
When I test with the following, I'm missing t
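A sketch of how such a rule can be exercised from outside, in case it's useful for comparing notes (the radosgw endpoint and object key are placeholders):

  # CORS preflight against the radosgw endpoint
  curl -s -D - -o /dev/null -X OPTIONS \
    -H "Origin: http://a.a.a" \
    -H "Access-Control-Request-Method: GET" \
    http://rgw.example.com/test1/somekey
  # a working rule should answer with Access-Control-Allow-Origin/-Methods headers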
…journaling seems to me more like a paranoid super fail-safe.
Did anyone experiment with file systems that disabled journaling and how did it
perform?
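To make the question concrete, the kind of experiment I have in mind, with ext4 as an example (device name is illustrative):

  # ext4 with its journal disabled, e.g. for an OSD data disk
  mkfs.ext4 -O '^has_journal' /dev/sdX1
  dumpe2fs -h /dev/sdX1 | grep -i features   # confirm has_journal is gone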
Regards,
Johannes
Hey Samuel,
Finally got it reproduced on my test-cluster, which was otherwise
unloaded at the time. First, with Dumpling:
# rbd create --size 102400 test
# ceph-osd --version
ceph version 0.67.1-6-g0c4f2f3
(0c4f2f34b78b634efe7f4d56694e2edeeda5a130)
# rbd bench-write test
bench-write io_size 409
> …file systems that disabled journaling and how did
> it perform?
>
> Regards,
> Johannes
Try it again in the reverse order, I strongly suspect caching effects.
-Sam
On Wed, Aug 21, 2013 at 1:34 PM, Oliver Daudey wrote:
> Hey Samuel,
>
> Finally got it reproduced on my test-cluster, which was otherwise
> unloaded at the time. First, with Dumpling:
>
> # rbd create --size 102400 test
On Wed, Aug 21, 2013 at 1:19 PM, Jeppesen, Nelson
wrote:
> Hello,
>
> I'm having issues with setting CORS on Dumpling. It seems like it's not doing
> anything.
>
> I have the following CORS rule on test1 bucket:
>
>   AllowedMethod: GET, POST
>   AllowedOrigin: http://a.a.a
>
Fantastic! Thank you for the quick response. I'm available if you need any
testing.
Nelson Jeppesen
Disney Technology Solutions and Services
Phone 206-588-5001
-Original Message-
From: Yehuda Sadeh [mailto:yeh...@inktank.com]
Sent: Wednesday, August 21, 2013 2:22 PM
To: Jeppesen,
Hey Samuel,
I repeated the same test several times before my post and just now 2
more times. It holds up and is also repeatable in reverse order, with
the same results. Remember, I restart all OSDs between tests, so any
caches should get destroyed and besides, I'm writing. That shouldn't
involv
> Hi,
>
> Is it possible to have more than one CephFS filesystem per Ceph cluster?
>
> In the default configuration, a ceph cluster has got only one filesystem, and
> you can mount that or nothing. Is it possible somehow to have several
> distinct
> filesystems per cluster, preferably with access
…journaling seems to me more like a paranoid super fail-safe.
Did anyone experiment with file systems that disabled journaling and how did it
perform?
Regards,
Johannes
Hi guys,
I'm a newbie with Ceph. I wonder if I can use 2~3 LVM disks on each server,
with 2 servers in total, to run some quick Ceph clustering tests.
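A sketch of how the LVM part could be wired up for such a test (volume group name, sizes and paths are made up):

  # one small LV per OSD, formatted and mounted where the OSD expects its data
  lvcreate -L 20G -n osd0 vg0
  mkfs.xfs /dev/vg0/osd0
  mkdir -p /var/lib/ceph/osd/ceph-0
  mount /dev/vg0/osd0 /var/lib/ceph/osd/ceph-0
  # then point the OSD at the directory, e.g. with ceph-deploy:
  # ceph-deploy osd prepare host1:/var/lib/ceph/osd/ceph-0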
Thanks!
You'd need to unmount the fs to actually clear the cache. Did you see
a significant difference in load between the runs? To confirm, the
rbd client is dumpling the entire time?
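Roughly what I mean by clearing it, per OSD (sysvinit syntax; device and paths are illustrative):

  /etc/init.d/ceph stop osd.0
  umount /var/lib/ceph/osd/ceph-0
  sync; echo 3 > /proc/sys/vm/drop_caches    # also drops any remaining page cache
  mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
  /etc/init.d/ceph start osd.0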
-Sam
On Wed, Aug 21, 2013 at 2:28 PM, Oliver Daudey wrote:
> Hey Samuel,
>
> I repeated the same test several times be
> In a normal/single file system I truly see the value of journaling and the
> potential for btrfs (although it’s still very slow). However in a system like
> ceph, journaling seems to me more like a paranoid super fail-safe.
>
> Did anyone experiment with file
I am dumb. There *has* been a change in the osd which can account for
this: the wbthrottle limits. We added some logic to force the kernel
to start flushing writes out earlier, normally a good thing. In this
case, it's probably doing an fsync every 500 writes.
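For context, the knobs in question look roughly like this in ceph.conf (option names are from memory of the dumpling-era settings, so double-check the exact spelling before relying on them):

  [osd]
      ; either switch the new throttling off...
      filestore wbthrottle enable = false
      ; ...or raise the xfs trigger points (btrfs has matching options)
      filestore wbthrottle xfs ios start flusher = 5000
      filestore wbthrottle xfs ios hard limit = 50000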
Can you run 3 tests?
1) rerun with
Hey Samuel,
On wo, 2013-08-21 at 18:33 -0700, Samuel Just wrote:
> I am dumb. There *has* been a change in the osd which can account for
> this: the wbthrottle limits. We added some logic to force the kernel
> to start flushing writes out earlier, normally a good thing. In this
> case, it's pro
I think the rbd cache one you'd need to run for a few minutes to get
meaningful results. It should stabilize somewhere around the actual
throughput of your hardware.
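For the rbd cache run, the client-side piece would look something like this in the client's ceph.conf (values are illustrative):

  [client]
      rbd cache = true
      rbd cache size = 33554432        ; 32 MB
      rbd cache max dirty = 25165824   ; 24 MB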
Hmm, 10k ios I guess is only 10 rbd chunks. What replication level
are you using? Try setting them to 1000 (you only need to
Hello,
Is there any way for one radosgw user to have more than one
access/secret key pair?
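For what it's worth, what I've been looking at is something along these lines (the uid is a placeholder and I haven't verified this against Dumpling myself):

  # add an extra S3 key pair to an existing user
  radosgw-admin key create --uid=someuser --key-type=s3 --gen-access-key --gen-secret
  radosgw-admin user info --uid=someuser   # should then list both key pairs under "keys"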
Thank you,
Mihaly