Hello, your steps helped me a lot. I am able to create a failover now.
Thank you very much!
On Thu, Jan 21, 2021 at 03:57, comport3 wrote:
> You will need to enable DNS alt names in your CA config, and issue a few
> names per server - likely including a common one shared by all nodes such
>
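As a sketch of what that looks like (Puppet 6+ paths; hostnames here are hypothetical): on the CA, allow alt names in /etc/puppetlabs/puppetserver/conf.d/ca.conf:
certificate-authority: { allow-subject-alt-names: true }
and on each PuppetDB node, set the names in the [main] section of puppet.conf before requesting its certificate:
dns_alt_names = pdb1.example.com,pdb2.example.com,puppetdb.example.com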
On Tue, May 5, 2020 at 1:01 PM Zachary Kent wrote:
> Thank you for such a thorough explanation of the issue you were seeing. We're
> fixing this now and will put out a new version of PuppetDB with a fix ASAP
> (likely in the next couple days).
Glad I could help!
> The two options you mentioned a
Hi Steve,
Thank you for such a thorough explanation of the issue you were seeing. We're
fixing this now and will put out a new version of PuppetDB with a fix ASAP
(likely in the next couple days).
The two options you mentioned as possible workarounds would both get the
job done and seem low risk,
On Tue, Jul 11, 2017, at 19:07, Peter Krawetzky wrote:
> Isn't that for the PE version? We are using open source.
It's in all versions
I'm wondering if that puppetdb instance's queue would grow if it wasn't also
doing normal agent runs.
Maybe pause puppet agent runs until PuppetDB is caught up? PuppetDB may not
be happy doing its regular work plus this cleanup. You could stop the
puppetserver service(s) as the cheap way to achieve that.
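For example (the disable message is just a placeholder, and the service name assumes a systemd host):
puppet agent --disable "letting puppetdb catch up"   # on each agent; re-enable later with --enable
systemctl stop puppetserver                          # the cheap way: stop the master service(s)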
The userlist is most likely your issue. This seems analogous to
FACT-1345 and PDB-2631 for context -- we usually see this with the os,
mountpoints, and partitions facts. What's generating the userlist fact,
and what values does it hold? Do you have code that generates
the fact? Ther
Peter,
You said no work is going to PostgreSQL, but how did you determine that? Are
you able to see if PuppetDB (java) or postgres is consuming the CPU? How is
disk activity?
Slow command processing is often correlated with long database queries --
it would be useful to turn on slow query logging.
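For example, in postgresql.conf (the threshold is in milliseconds; 1000 is just a starting point):
log_min_duration_statement = 1000
then reload the config, e.g. from psql with: SELECT pg_reload_conf();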
BTW, that fixed it. Thanks!
On Wednesday, August 17, 2016 at 8:22:03 PM UTC-4, Bret Wortman wrote:
>
> I'd love to figure out how that changed on me, because your changes remind
> me that it USED to look like that!
>
> Bret Wortman
> http://wrapbuddies.co/
>
> On Aug 17, 2016, 8:01 PM -0400, Wy
I'd love to figure out how that changed on me, because your changes remind me
that it USED to look like that!
Bret Wortman
http://wrapbuddies.co/
On Aug 17, 2016, 8:01 PM -0400, Wyatt Alt wrote:
> Hey Bret, sorry for the gap in communication. Try making these changes:
>
> * to the [master] sec
Hey Bret, sorry for the gap in communication. Try making these changes:
* to the [master] section,
storeconfigs=true
storeconfigs_backend=puppetdb
reports=puppetdb
* to the [agent] section for each node, change "reports=puppetdb" to
"report=true".
I think that should straighten it out.
Wyatt
https://gist.github.com/wortmanb/4896962accb5aa24bcb33b893f6ef477
On Wednesday, August 17, 2016 at 2:37:31 PM UTC-4, Wyatt Alt wrote:
>
> Can you post a gist of your master's full puppet.conf?
>
Can you post a gist of your master's full puppet.conf?
For a node I know has been updated, I get (just listing the timestamps):
"facts_timestamp" : "2016-08-17T17:52:33.899Z",
"report_timestamp" : "2016-08-16T21:15:22.376Z",
"catalog_timestamp" : "2016-08-16T21:14:24.444Z",
So it seems that facts are being stored, but reports & catalogs aren't.
I'd check the timestamps in the output of
curl -X GET http://localhost:8080/pdb/query/v4/nodes/fs1603.our.net
(with a real certname if that one is fake). These will tell you when the
last report/facts/catalog for the node were stored.
Nothing about the log output you've posted indicates an issue
> Can you help with this query? I am trying to get 2 facts from all of our
> puppet clients in PuppetDB.
> I tried variations of the following, but no luck: ('["or", ["=", "name",
> "kernelversion"], ["=", "name", "instance_uuid"]]')
For me this query works. Here is the full curl example:
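(default host and port assumed; adjust the endpoint version to match your PuppetDB)
curl -G 'http://localhost:8080/v2/facts' --data-urlencode 'query=["or", ["=", "name", "kernelversion"], ["=", "name", "instance_uuid"]]'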
Can you help with this query? I am trying to get 2 facts from all of our
puppet clients in PuppetDB.
I tried variations of the following, but no luck: ('["or", ["=", "name",
"kernelversion"], ["=", "name", "instance_uuid"]]')
Thank you!
On Tuesday, September 3, 2013 at 11:09:19 AM UTC-7, Ken
So Rob I've managed to do a successful install on a clean Ubuntu 14.04
box, you can see the full transcript from here:
https://gist.github.com/kbarber/837ff7e55e8940a7d1c8
What variations from this installation process do you think you have?
In regards to your other points yesterday:
> I figure
> I figured out the issue with the embedded database. For some reason letting
> the system default to ::cert for the “ssl_listen_address” had it binding to
> localhost instead of the actual interface it should have. Specifying
> “0.0.0.0” for that option made puppetdb work just fine when using the
I figured out the issue with the embedded database. For some reason letting the
system default to ::cert for the “ssl_listen_address” had it binding to
localhost instead of the actual interface it should have. Specifying “0.0.0.0”
for that option made puppetdb work just fine when using the embe
On 5/28/15 4:17 PM, Robert Hafner wrote:
>
> Even using the “embedded” database is apparently useless, as puppet is
> still not able to connect to puppetdb.
>
> In addition, puppetdb is very obviously not creating its firewall rules
> even though I haven’t disabled that feature.
>
> Does anyon
>> Even using the “embedded” database is apparently useless, as puppet is still
>> not able to connect to puppetdb.
>>
>> In addition, puppetdb is very obviously not creating its firewall rules
>> even though I haven’t disabled that feature.
>
> This is interesting/surprising, but it sounds like t
> Even using the “embedded” database is apparently useless, as puppet is still
> not able to connect to puppetdb.
>
> In addition, puppetdb is very obviously not creating its firewall rules
> even though I haven’t disabled that feature.
This is interesting/surprising, but it sounds like the main
Even using the “embedded” database is apparently useless, as puppet is still
not able to connect to puppetdb.
In addition, puppetdb is very obviously not creating its firewall rules even
though I haven’t disabled that feature.
Does anyone have an example of this module actually working? It’s
> Thanks kindly for the reply. Sorry for the delayed response.
>
> Yes, this is the old report purge that happens every hour for 14 days.
>
> We are using puppetdb-1.4.0 and the default postgresql-8.4.20 that comes
> with that.
Upgrade as soon as you can; that version is almost 2 years old
(origina
> in the meantime I've added RAM and extended the heap to 2GB. But still I'm
> getting crashes of PuppetDB.
> Last time it was the kernel OOM that killed the java process as I saw in
> /var/log/messages
> kernel: Out of memory: Kill process 10146 (java) score 158 or sacrifice
> child
> kernel: Kill
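For reference, the heap ceiling is set via JAVA_ARGS in the PuppetDB service defaults (the path varies by distro; a sketch):
# /etc/sysconfig/puppetdb on EL, /etc/default/puppetdb on Debian/Ubuntu
JAVA_ARGS="-Xmx2g"
followed by a restart of the puppetdb service. Note the OS needs headroom beyond the heap, so the box must have more RAM than -Xmx.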
Hi there,
I'm running into a similar problem where the custom fact declaration file
is disappearing when I run the puppet agent. What did you restart to
resolve the problem?
On Monday, 24 June 2013 15:28:41 UTC+10, Alexander Grushin wrote:
>
> You are right, thanks!
>
> Custom fact get disappe
That did the trick. Thanks.
On Wednesday, October 1, 2014 4:41:44 PM UTC-7, Ellison Marks wrote:
>
> If you're using Ubuntu, it would probably just be
>
> sudo apt-get install postgresql-contrib
>
> You might need to restart postgres after that, not sure.
>
>
> On Wednesday, October 1, 2014 2:10:
If you're using Ubuntu, it would probably just be
sudo apt-get install postgresql-contrib
You might need to restart postgres after that, not sure.
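Something like this (the database name is whatever you created for PuppetDB):
sudo service postgresql restart
sudo -u postgres psql puppetdb -c 'CREATE EXTENSION pg_trgm;'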
On Wednesday, October 1, 2014 2:10:40 PM UTC-7, Taylor Leese wrote:
>
> Ken - I'm using Ubuntu 14 and I installed PuppetDB via
> https://forge.pup
Ken - Also, to be even more specific I'm using the following version of the
puppetdb module:
mod 'puppetlabs/puppetdb',
:git => 'git@github.com:puppetlabs/puppetlabs-puppetdb.git',
:ref => 'ba3049796f89aec8fb2857cb04d7f6b4dd71c4dc'
On Wednesday, October 1, 2014 2:10:40 PM UTC-7, Taylo
Ken - I'm using Ubuntu 14 and I installed PuppetDB
via https://forge.puppetlabs.com/puppetlabs/puppetdb. I also should have
noted that I got this error while upgrading from PuppetDB 2.1 to 2.2.
Postgres is version 9.3.5 if I remember correctly. Admittedly, I'm not very
familiar with Postgres.
As a note, I installed from the postgres yum repositories, as the version
in stock CentOS 6 was getting too old. I had to install the -contrib
package to get that extension.
On Wednesday, October 1, 2014 7:11:17 AM UTC-7, Ken Barber wrote:
>
> > I tried the same thing and got the error below. An
> I tried the same thing and got the error below. Any ideas?
>
> puppetdb=# create extension pg_trgm;
>
> ERROR: could not open extension control file
> "/usr/share/postgresql/9.3/extension/pg_trgm.control": No such file or
> directory
Seems odd; pg_trgm should be shipped with PostgreSQL. Maybe it's a packaging difference.
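On Debian/Ubuntu the extension files live in the contrib package, so something like this may be all that's missing (package name assumes PostgreSQL 9.3):
sudo apt-get install postgresql-contrib-9.3
and then retry: create extension pg_trgm;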
Sorry, it's your agents that need to be upgraded, not your master. Are
they all running 3.4.3 or greater?
On Sat, Apr 26, 2014 at 7:08 PM, JonY wrote:
>
>
> On Saturday, April 26, 2014 11:05:50 AM UTC-7, JonY wrote:
>>
>> I think I'm past that version already:
>>
>> # rpm -qa | grep puppet
>> puppe
On Saturday, April 26, 2014 11:05:50 AM UTC-7, JonY wrote:
>
> I think I'm past that version already:
>
> # rpm -qa | grep puppet
> puppetdb-1.6.3-1.el6.noarch
> puppet-3.5.1-1.el6.noarch
> puppet-server-3.5.1-1.el6.noarch
> vim-puppet-2.7.20-1.el6.rf.noarch
> puppetlabs-release-6-10.noarch
> pup
I think I'm past that version already:
# rpm -qa | grep puppet
puppetdb-1.6.3-1.el6.noarch
puppet-3.5.1-1.el6.noarch
puppet-server-3.5.1-1.el6.noarch
vim-puppet-2.7.20-1.el6.rf.noarch
puppetlabs-release-6-10.noarch
puppetdb-terminus-1.6.3-1.el6.noarch
On Saturday, April 26, 2014 7:44:43 AM UTC-7
Sorry, the URL for the Puppet bug is really here:
https://tickets.puppetlabs.com/browse/PUP-1524
On Sat, Apr 26, 2014 at 3:44 PM, Ken Barber wrote:
> It sounds like this Puppet bug: https://tickets.puppetlabs.com/browse/PDB-349
>
> You can usually solve this by upgrading to Puppet 3.4.3.
>
> ken.
It sounds like this Puppet bug: https://tickets.puppetlabs.com/browse/PDB-349
You can usually solve this by upgrading to Puppet 3.4.3.
ken.
On Sat, Apr 26, 2014 at 3:39 PM, JonY wrote:
> FWIW: I tried to remove all the db files and restart in hopes of removing
> the constraint issue. It immedia
Aah good to hear, it sounded like it was an outside influence causing it :-).
ken.
On Tue, Apr 22, 2014 at 11:47 PM, David Mesler wrote:
> Sorry, you can disregard. It turns out the database was screwed up before
> the upgrade. This was just on my failover server so a purge and
> reinitializatio
Sorry, you can disregard. It turns out the database was screwed up before
the upgrade. This was just on my failover server so a purge and
reinitialization took care of it.
On Tuesday, April 22, 2014 6:40:45 PM UTC-4, Ken Barber wrote:
>
> We didn't actually provide any database schema migration
We didn't actually provide any database schema migration for
1.6.2->1.6.3. This in fact looks like its trying to migrate from
scratch as if you have no tables loaded, this is done by looking at
the schema_migrations table to work out what needs to be updated.
So I guess the question is, what does
Cory, thanks for the response.
Ruby (1.9.3-p484 with the rails-express patch) was compiled/installed with a
bash script (i.e., "something else").
While the most current ruby binary is in /usr/bin, yum/RPM must need
additional stuff (maybe that RPM DB entry you mentioned). Ultimately, I
could see the
So how did you install Ruby 1.9.3? Did you create your own RPM or use
something else? The RPM packages for puppet will have dependencies,
and then yum will try to satisfy them by looking in its configured
yum repositories. It usually tries to find the latest version.
Looking at your output it lo
On Tuesday, January 21, 2014 3:13:49 PM UTC-5, machete wrote:
>
> The 1.8.7 ruby, rubygems and its gems were uninstalled. The root .gem and
> user .gem directories were also removed. I am hunting for what I am
> overlooking.
>
> I am now looking at using the yum package to install puppet. List
The 1.8.7 ruby, rubygems and its gems were uninstalled. The root .gem and
user .gem directories were also removed. I am hunting for what I am
overlooking.
I am now looking at using the yum package to install puppet. Listed below
are the results of the package install. Notice it's trying to ins
Ok .. thanks ... looks like the linux user is rockin' a different version
than root, and because I am using root to do the install ... DOH! Scary,
I thought all the 1.8.7 stuff was gone ...
User
gem environment
RubyGems Environment:
- RUBYGEMS VERSION: 1.8.23
- RUBY VERSION: 1.9.3 (2
If you installed puppet originally using ruby 1.8.7, then all of your
gems, including puppet, will be in the ruby 1.8.7 gem home, and all
the binaries, including puppet, that were created by rubygems will be
pointing there. Since you built ruby 1.9.3 from source, perhaps you
have a separate 'gem' e
gem list:
gem list
*** LOCAL GEMS ***
bigdecimal (1.1.0)
bundler (1.3.5)
daemon_controller (1.1.7)
facter (1.7.3)
hiera (1.3.0)
io-console (0.3)
json (1.5.5)
json_pure (1.8.1)
minitest (2.5.1)
passenger (4.0.17)
puppet (3.3.2)
rack (1.5.2)
rake (0.9.2.2)
rdoc (3.9.5)
rgen (0.6.6)
On Wednesda
Hey Moses,
Thanks for the quick response.
Puppet was installed via ruby gem ... I know, but it is so easy.
Ruby/rubygems are running from source.
OS is CentOS 6.4 Linux puppetdb 2.6.32-71.el6.x86_64 #1 SMP Fri May 20
03:51:51 BST 2011 x86_64 x86_64 x86_64 GNU/Linux
On Wednesday, December 4, 201
How did you install puppet originally? Are you running from packages,
rubygem, or running from source, or some other method?
On Wed, Dec 4, 2013 at 11:58 AM, machete wrote:
> Hey what gives ... ? ;)
>
> I've got a shiny new install of ruby 1.9.3-p484 ... and my 'sudo puppet
> resource package pup
On 16 October 2013 14:39, Steve Wray wrote:
> Your response is encouraging, thanks.
>
> I wasn't using sqlite, I was using postgresql. I have about 100 nodes (and
> growing); sqlite quickly became unusable.
>
Ah ok. There are changes to the ENC script (which also registers new Hosts
in the Forema
Your response is encouraging, thanks.
I wasn't using sqlite, I was using postgresql. I have about 100 nodes (and
growing); sqlite quickly became unusable.
One of the things I tried was aptitude install with =version, but this
didn't work; apparently it couldn't find the old versions in the repo. I
On 16 October 2013 11:48, Steve Wray wrote:
> Sure, I'm using this repository
>
> deb http://deb.theforeman.org/ precise stable
>
> it looks as if the upgrade didn't make the required changes to the
> database, or something like that. I dropped the db and recreated it and the
> error about the mi
Sure, I'm using this repository
deb http://deb.theforeman.org/ precise stable
I upgraded from
foreman-postgresql 1.2.3+debian1
foreman 1.2.3+debian1
foreman-proxy 1.2.1+ubuntu1
foreman-installer 1.2.1-debian1
to
foreman-postgresql 1.3.0-1
foreman 1.3.0-1
foreman-proxy 1.3.0-1
foreman-installer
Yes, I know. At first I didn't anticipate that it was a Foreman issue.
To be honest I've found Foreman's usefulness marginal at best, and its
performance hit on the Puppet master server quite significant, so I'm not
inclined to pursue it further.
On Wednesday, 16 October 2013 17:26:33 UTC+8, And
On 2013-10-16 10:07, Steve Wray wrote:
> It turned out that there was an update to the foreman package which
> completely broke Puppet's ability to enroll new nodes.
>
> Call me old fashioned, I've been a Debian sysadmin for over 10 years,
> but on a 'stable' system an apt-get upgrade is not suppos
On Wed, Oct 16, 2013 at 11:07 AM, Steve Wray wrote:
> It turned out that there was an update to the foreman package which
> completely broke Puppet's ability to enroll new nodes.
>
> Call me old fashioned, I've been a Debian sysadmin for over 10 years, but
> on a 'stable' system an apt-get upgrade
So I've been trying to get to the bottom of this one for a while, but
haven't found anyone affected who could work with me properly
since it first occurred. If you
want to jump on Freenode IRC and contact me (ken_barber) we can talk
further. This problem is f
On Tuesday, September 3, 2013 at 20:09:19 UTC+2, Ken Barber wrote:
>
> Is it acceptable to do the search based on 'certname'? i.e.:
>
> curl -G 'http://localhost:8080/v2/facts' --data-urlencode
> 'query=["and",["~","certname","puppetdb?"],["or",["=","name","ipaddress"],["=","name","hostname"]]]'
>
Is it acceptable to do the search based on 'certname'? i.e.:
curl -G 'http://localhost:8080/v2/facts' --data-urlencode
'query=["and",["~","certname","puppetdb?"],["or",["=","name","ipaddress"],["=","name","hostname"]]]'
ken.
On Mon, Sep 2, 2013 at 7:00 AM, Klavs Klavsen wrote:
> This gives me the
You should be able to use 'curl' to access the API; there are some
examples for /v2/nodes in the docs as a start ... and the rest of the
end-points have similar examples:
http://docs.puppetlabs.com/puppetdb/1.3/api/query/v2/nodes.html
Also take a look at the general curl advice page:
http://docs
You are right, thanks!
The custom fact kept disappearing because of a periodic run of the puppet
agent with an old facter. A restart solved the problem.
On Thursday, June 20, 2013 1:18:09 PM UTC+4, David Schmitt wrote:
>
> Perhaps you have still an agent running who has loaded an older facter
> version?
>
>
Perhaps you still have an agent running that has loaded an older facter
version?
The default expiration of nodes in puppetdb should be on the order of
days, if it is even enabled by default.
Regards, D.
On 20.06.2013 10:19, Alexander Grushin wrote:
Interesting...
This fact returned using P
I already updated to 1.3.1, and the old database is still there. Is there a
script to remove it, via a postgres script or a puppet stanza, to clean out
data more than 30 days old?
On Wed, May 29, 2013 at 12:19 PM, shell heriyanto wrote:
> Hi Ken thanks for your reply,
> We using Postgresql, we just have about 150
Hi Ken, thanks for your reply.
We are using Postgresql; we have about 150 puppet agents, and 130 of them
run just once per day. Every day the database grows by about 150MB.
We are using PuppetDB 1.1.0. I will try to update PuppetDB today.
On Tue, May 28, 2013 at 10:23 PM, Ken Barber wrote:
> What kind of d
On Fri, May 24, 2013 at 5:27 PM, Worker Bee wrote:
> Actually, I am getting no results..
>
> [ ]
>
> I assume this means that facts are not being stored but, I cannot figure
> out why/how to troubleshoot...
>
FWIW, if you want to find all the facts for a node you can just hit
"/v2/nodes/myhost.m
What kind of database is this? PostgreSQL or the built-in HSQLDB? And
how are you calculating the database size?
On Tue, May 28, 2013 at 12:19 PM, shell heriyanto
wrote:
> no efect, this my configuration:
>
> gc-interval = 60
> node-ttl = 30m
> node-purge-ttl = 30m
> report-ttl = 30m
>
> I make
No effect; this is my configuration:
gc-interval = 60
node-ttl = 30m
node-purge-ttl = 30m
report-ttl = 30m
I made the TTLs short to see a change, but my database still grows bigger.
Does the database need to be removed and created again to make this work?
How does TTL work? Does it just count when we do ad
This is what I found, thank you Klavs.
On Wed, May 22, 2013 at 9:44 PM, Klavs Klavsen wrote:
> http://docs.puppetlabs.com/puppetdb/1.3/maintain_and_tune.html
>
> On Wednesday, May 22, 2013 at 09:56:36 UTC+2, Heriyanto wrote:
>
>> Hi,
>>
>> I've been using puppetdb for about 6 months now, and the data
Opened bug 20838: http://projects.puppetlabs.com/issues/20838
Thanks,
kl
I'm glad you found a solution :-).
I think this is a bug though. Would you mind raising a ticket
for this in our Redmine tracker with the details of your error and
solution? At least if we can record it for the purpose of errata, it
might help someone else - or we might come to a proper solu
Ken, it's working now! "Solution" below.
On Fri, May 17, 2013 at 4:27 PM, Ken Barber wrote:
> Could very well be; however, it seems so far you're the first unlucky
> one to see this issue AFAIK :-). I've been trying to reproduce it on
> my own setup with no luck yet, although I've got some ideas t
> I am not sure I did the ssl-setup command again. I started all over
> again on the puppetdb. Deleted the package, all the logs and
> configuration and reinstalled puppetdb. I included a complete output:
> http://pastebin.com/raw.php?i=TDejFAvp
>
> Does this make things more clear? I did a clean i
Hi Ken,
On Thu, May 16, 2013 at 5:34 PM, Ken Barber wrote:
> I think the certificate fingerprint issue you received is a worry, but
> might not indicate a problem per se. Let's use openssl instead to get
> the fingerprint directly:
I still get this problem.
> # openssl x509 -noout -in `puppet mast
I think the certificate fingerprint issue you received is a worry, but
might not indicate a problem per se. Let's use openssl instead to get
the fingerprint directly:
# openssl x509 -noout -in `puppet master --configprint hostcert` -fingerprint -md5
So if I do the same exercise on my own host I ge
Hi Ken, thanks for your reply,
On Tue, May 14, 2013 at 5:08 PM, Ken Barber wrote:
> Can we walk through your certificates again? Can you give the full
> verbose output of the following?
I put the complete output here: http://pastebin.com/raw.php?i=iW44kACL .
Hope this helps.
> I get the feelin
Can we walk through your certificates again? Can you give the full
verbose output of the following?
* keytool -list -keystore /etc/puppetdb/ssl/keystore.jks # you'll need
the password from puppetdb_keystore_pw.txt
* keytool -list -keystore /etc/puppetdb/ssl/truststore.jks # same again
* puppet cer
Any idea on how I can do debugging?
Tried re-installing several times now. I'd like to be able to find out
where the problem lies.
Thanks,
kl
On Friday, May 10, 2013 2:11:09 PM UTC+2, Ken Barber wrote:
>
> How did you setup your SSL certificates? You didn't mention a manual
> certificate setu
Thanks for your reply Ken,
On Fri, May 10, 2013 at 2:11 PM, Ken Barber wrote:
> How did you setup your SSL certificates? You didn't mention a manual
> certificate setup.
I did it manually after the automatic way did not work. I followed
this guide ( http://goo.gl/m4PIH ) and reviewed your commen
How did you setup your SSL certificates? You didn't mention a manual
certificate setup. Perhaps you can get away with just re-initializing
your certificates using 'puppetdb-ssl-setup'? Just backup your
/etc/puppetdb/ssl directory first, and then remove it and re-run the
tool and see if that helps:
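Roughly (default package paths assumed):
cp -a /etc/puppetdb/ssl /etc/puppetdb/ssl.bak
rm -rf /etc/puppetdb/ssl
puppetdb-ssl-setup
service puppetdb restart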
> Puppet (err): Could not retrieve catalog from remote server: execution
> expired
> Puppet (notice): Using cached catalog
>
> /File[/etc/security/http/key.pem] (err): Could not evaluate: SSL_connect
> SYSCALL returned=5 errno=0 state=SSLv2/v3 read server hello A Could not
> retrieve file metadata
To pitch in my two cents: I was searching about this error, and in my
case I am not running PuppetDB.
Here are the versions that I am using:
puppet-server-3.0.2-1.el5
puppet-3.0.2-1.el5
java-1.6.0-openjdk-1.6.0.0-1.27.1.10.8.el5_8
ruby-1.8.7.352-5.el5
openssl-0.9.8e-22.el5_8.4
Error oc
Russell and Hugh - any luck downgrading to openjdk-6?
An alternative thing to try - I found this in the openssl changelog:
http://changelogs.ubuntu.com/changelogs/pool/main/o/openssl/openssl_1.0.1-4ubuntu5.8/changelog.
Looks like the patch for CVE-2013-0169 was reverted due to a bug, but
it has no
Thanks Hugh, can you confirm if switching to openjdk-6 fixes it?
On Mon, Mar 25, 2013 at 1:35 PM, Hugh Cole-Baker wrote:
> I've filed a bug report http://projects.puppetlabs.com/issues/19884 with
> some info on the OpenJDK / Ruby / OpenSSL versions we're using.
>
> --
> You received this message
I've filed a bug report http://projects.puppetlabs.com/issues/19884 with
some info on the OpenJDK / Ruby / OpenSSL versions we're using.
Awesome news Deepak. I suspected "SSL something", which is why I was
asking for JDK and OpenSSL details ... it'd be good to nail down exact
revisions (and distros, in case of special patching they might do) so
we can get an errata out somehow. So if anyone has seen this, be
kind & report! :-).
On Sat,
I was helping someone on IRC with a similar issue the other day ... it looks
like there may be a bug in very recent 1.7 OpenJDK versions that cause this
to happen. Reverting to an earlier JDK version resolved the issue. As Ken
mentioned, it would be most helpful if we could get the Ruby/OpenSSL/JDK
v
Russell: Can you confirm the same error message that Hugh is receiving
in your own puppetdb.log?
Hugh: I'd suggest raising a bug with all the details:
http://projects.puppetlabs.com/projects/puppetdb/issues/new ...
Russell, if the problem looks the same I'd confirm it in the same
ticket so we can c
Hi ak0ska,
How are things going? Anything to report?
ken.
On Fri, Mar 15, 2013 at 5:00 AM, Ken Barber wrote:
> Hi ak0ska,
>
> FWIW - with the help of some of my colleagues we've managed to
> replicate your constraint issue in a lab style environment now:
>
> https://gist.github.com/kbarber/5157836
Hi ak0ska,
FWIW - with the help of some of my colleagues we've managed to
replicate your constraint issue in a lab style environment now:
https://gist.github.com/kbarber/5157836
Which is a start. It requires a unique precondition to replicate;
however, I've been unable to replicate it in any
So I have this sinking feeling that all of your problems (including
the constraint side-effect) are related to general performance issues
on your database 'for some reason we are yet to determine'. This could
be related to IO contention, or it could be a bad index (although you've
rebuilt them all rig
Hello Ken,
I really appreciate you guys looking into this problem, and I'm happy to
provide you with the data you ask for. However, I feel like I should ask
whether you think this problem is worth your effort, if rebuilding the
database might solve the issue.
Cheers,
ak0ska
On Thursday, Ma
Hi ak0ska,
So I've been spending the last 2 days trying all kinds of things to
replicate your constraint violation problem and I still am getting
nowhere with it. I've been speaking to all kinds of smart people and
we believe it's some sort of lock and/or transactional mode problem, but
none of my t
Hello Deepak,
Here are the queries you asked for:
> Can you fire up psql, point it at your puppetdb database, and run "EXPLAIN
> ANALYZE SELECT COUNT(*) AS c FROM certname_catalogs cc, catalog_resources
> cr, certnames c WHERE cc.catalog=cr.catalog AND c.name=cc.certname AND
> c.deactivated IS NULL"
On Tue, Mar 12, 2013 at 6:38 AM, ak0ska wrote:
> I think my previous comment just got lost.
>
> So, I cut three occurrences of this error from the database log and the
> corresponding part from the puppetdb log. I removed the hostnames, I hope
> it's still sensible: http://pastebin.com/yvyBDWQE
>
I think my previous comment just got lost.
So, I cut three occurrences of this error from the database log and the
corresponding part from the puppetdb log. I removed the hostnames, I hope
it's still sensible: http://pastebin.com/yvyBDWQE
The unversioned api warnings are not from the masters. Th
> After dropping the obsolete index, and rebuilding the others, the database
> is now ~ 30 GB. We still get the constraint violation errors when garbage
> collection starts.
Okay - can you please send me the puppetdb.log entry that shows the
exception? Including surrounding messages?
> Also the "
On Thursday, March 7, 2013 12:23:13 AM UTC+1, Ken Barber wrote:
>
>
>
> So the index 'idx_catalog_resources_tags' was removed in 1.1.0 I
> think, so that is no longer needed.
>
> This points back to making sure your schema matches exactly what a
> known good 1.1.1 has, as things have been misse
> Indexes seem bloated.
Totally agree, you should organise re-indexes starting from the biggest.
> relation                               | size
> ----------------------------------------+---------
> public.idx_catalog_resources_tags_gin  | 117 GB
> public.idx_catalog_resources_tags
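e.g., from psql, biggest first (note that REINDEX locks the table while it runs, so plan for downtime):
REINDEX INDEX public.idx_catalog_resources_tags_gin;
REINDEX INDEX public.idx_catalog_resources_tags;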
@Mike
iostat -nx
Device:  rrqm/s  wrqm/s    r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
md0        0.00    0.00  85.55  405.31  9226.53  3234.60     25.39      0.00   0.00   0.00   0.00
@Ken
Wow. That's still way too large for the amount of nodes. I i
> Vacuum full was running for the whole weekend, so we didn't yet have time to
> rebuild indexes, because that would require more downtime, and we're not
> sure how long it would take. The size of the database didn't drop that much,
> it's now ~370Gb.
Wow. That's still way too large for the amount