I was wondering if anyone has implemented management of multiple MySQL instances on the same server using Puppet. Essentially we want to use the same MySQL binaries but run multiple distinct MySQL instances, each listening on its own port number. Puppet Forge has a great MySQL implementation…
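For context, here is roughly what one extra instance looks like at the OS level, independent of whatever module or defined type ends up wrapping it in Puppet; every path and port below is illustrative, not something taken from the Forge module:

  # Illustrative only: a second mysqld instance sharing the system binaries,
  # isolated by its own datadir, port, socket, and pid file.
  sudo mkdir -p /var/lib/mysql-3307
  sudo chown mysql:mysql /var/lib/mysql-3307

  # Initialize the new data directory (MySQL 5.7+; older releases use mysql_install_db)
  sudo mysqld --initialize-insecure --user=mysql --datadir=/var/lib/mysql-3307

  # Run the instance on its own port so it cannot collide with the default one
  sudo mysqld --user=mysql --datadir=/var/lib/mysql-3307 --port=3307 \
       --socket=/var/lib/mysql-3307/mysql.sock \
       --pid-file=/var/lib/mysql-3307/mysqld.pid &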
Last Sunday we hit a wall on our 3.0.2 puppetdb server. The CPU spiked and the KahaDB logs started to grow, eventually almost filling a filesystem. I stopped the service, removed the mq directory per a troubleshooting guide, and restarted. After several minutes the same symptoms began again and…
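For anyone following along, the recovery steps described above amount to something like this; the mq/KahaDB path is my assumption for a default PuppetDB 3.x package install, so verify it against your vardir first:

  # Rough outline of the recovery described above (open source PuppetDB 3.x).
  # The mq path below is assumed from a default package install.
  systemctl stop puppetdb
  mv /opt/puppetlabs/server/data/puppetdb/mq /opt/puppetlabs/server/data/puppetdb/mq.bad
  systemctl start puppetdb
  # Keep an eye on KahaDB growth afterwards
  watch du -sh /opt/puppetlabs/server/data/puppetdb/mq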
Hi Mike, thanks for the reply. I'll look at the docs and see what they say, but somehow I suspected that.
And thanks for explaining how to disable puppetdb.
On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: …
I looked at both documents and the second one references the scheduler log
files filling up. Mine are actually in the KahaDB directory.
On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: …
I did a little searching on GitHub but couldn't find it. Does anyone know where the source code for the PuppetDB server is? I'm really looking for the code that contains the DML (insert, select, update, delete).
Thanks.
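In case it helps anyone searching later: the server is written in Clojure and lives in the puppetlabs/puppetdb repository on GitHub; a crude but quick way to locate the DML is to clone it and grep. The tag name below is an assumption, adjust it to the running version:

  # The repository URL is real; the grep is just a way to find the SQL statements.
  git clone https://github.com/puppetlabs/puppetdb.git
  cd puppetdb
  git checkout 3.0.2   # assumes a tag matching the running version exists
  grep -ril --exclude-dir=.git "insert into" .
  grep -ril --exclude-dir=.git "delete from" .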
I've already increased that to 32.
On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: …
< … >LOG: incomplete message from client
< 2017-06-30 07:48:02.343 EDT >LOG: incomplete message from client
< 2017-06-30 07:48:04.957 EDT >LOG: incomplete message from client
< 2017-06-30 07:48:05.256 EDT >LOG: incomplete message from client
On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: …
…0', $136 = '537', $137 = '1286', $138 = '68711508', $139 = '537', $140 = '325', $141 = '67891543', $142 = '537', $143 = '336', $144 = '68711522', $145 = '537', $146 = '43908', $14…
So if I'm reading this correctly, the userlist#~(number) represents the value of the userlist fact? If that is the case, the userlist fact is 228 KB in size and is sent on each and every puppet agent run, across approximately 3300 nodes.
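A quick back-of-the-envelope on what that volume means, using the figures above (228 KB per node per run, ~3300 nodes, hourly agent runs); the arithmetic is mine, not from the thread:

  # 228 KB per node per run, ~3300 nodes, hourly runs
  echo $(( 228 * 3300 / 1024 )) MB of userlist fact data per hour     # ~734 MB
  echo $(( 228 * 3300 * 24 / 1024 / 1024 )) GB per day                # ~17 GB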
On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: …
What is the actual definition of store_usage? The documentation is not very specific. Does it limit the number of KahaDB logs? If so, what happens when that limit is reached?
On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: …
…pot, start postgresql and start puppetdb, allowing it to create everything it needs from scratch. Any opinions?
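If the decision is to rebuild, the sequence I'd expect looks roughly like this; the database name, role, and the pg_trgm step are assumptions based on a typical open source setup, so check them against database.ini:

  # Rough rebuild sequence - names and the extension are assumptions.
  systemctl stop puppetdb
  sudo -u postgres dropdb puppetdb
  sudo -u postgres createdb -E UTF8 -O puppetdb puppetdb
  sudo -u postgres psql -d puppetdb -c "CREATE EXTENSION pg_trgm;"
  systemctl start puppetdb   # PuppetDB recreates its schema on startup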
On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: …
Chris, that is my take on historical data as well. We have processes that export the data to a data warehouse for consumption by other apps. Missing some won't kill that process; it's as if the data never existed.
On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: …
…have to drop/create the DB.
On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: …
I'm seeing a lot of "replace facts" entries in the puppetdb server log. I googled but can't find anything solid.
Is there a way to compare facts for a node between runs? Our agents run hourly. We are using open source PuppetDB 3.0.2.
Thanks.
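One approach that might work for comparing runs is to snapshot a node's facts after each run and diff the snapshots; a sketch against the v4 query API, where the hostname, port, and certname are placeholders:

  # Snapshot a node's facts and diff against the previous snapshot.
  NODE=agent01.example.com
  curl -s -G "http://localhost:8080/pdb/query/v4/nodes/$NODE/facts" \
    | python -m json.tool > /tmp/facts.$NODE.$(date +%s).json
  # Later, compare the two newest snapshots:
  diff $(ls -t /tmp/facts.$NODE.*.json | head -2 | tac)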
So I went to run the curl command listed below and it came back with nothing. So I used pgAdmin to look at the catalogs table and it's completely empty. The system has been running for almost 24 hours after dropping/creating the postgresql database. Any idea why the catalogs table would be empty?
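A quick sanity check from the API side might help narrow down whether catalogs are being stored at all; this assumes the default open source listener on localhost:8080:

  # Any catalogs at all?
  curl -s 'http://localhost:8080/pdb/query/v4/catalogs' | head -c 500; echo
  # Which nodes are known, and when was a catalog last stored for each?
  curl -s 'http://localhost:8080/pdb/query/v4/nodes' \
    | python -m json.tool | grep -E '"certname"|"catalog_timestamp"'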
I need a clarification on a comment in the puppet upgrade docs. Does this mean (last sentence below) that I can upgrade the puppetdb servers before the puppet servers and puppet agents? It's the "nodes" comment that has me confused. I take that to mean it can go before anything.
A minor upgrade is an up…
Do you have a link to those posts, Mike?
On Thursday, July 6, 2017 at 12:54:37 PM UTC-4, Peter Krawetzky wrote: …
Yes, I am on v4 and the query just didn't return any results. There were no errors, so I assume I am using the correct curl command. Thanks.
On Wednesday, June 28, 2017 at 2:11:17 PM UTC-4, Mike Sharpton wrote:
>
> Hey all,
>
> I am hoping there is someone else in the same boat as I am. We are
> running Pu
Using curl to query PuppetDB has got to be the most time-consuming thing I've ever done. It took me almost 3 hours one day to build a curl query that I ended up writing as a SQL statement in 10 minutes once I figured out the database structure (a couple of query patterns are sketched after the list below).
Does anyone have:
1. A documented list of c…
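For what it's worth, here are a couple of curl patterns that map fairly directly onto SQL-style thinking; the endpoints are the standard v4 query API, and the host/port and the fact/resource names are just examples:

  # Roughly: SELECT certname, value FROM facts WHERE name = 'operatingsystem'
  curl -s -G 'http://localhost:8080/pdb/query/v4/facts' \
    --data-urlencode 'query=["=", "name", "operatingsystem"]'

  # Roughly: SELECT * FROM resources WHERE type = 'Package' AND title = 'httpd'
  curl -s -G 'http://localhost:8080/pdb/query/v4/resources' \
    --data-urlencode 'query=["and", ["=", "type", "Package"], ["=", "title", "httpd"]]'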
Isn't that for the PE version? We are using open source.
On Tuesday, July 11, 2017 at 11:48:35 AM UTC-4, Peter Krawetzky wrote: …
I upgraded the puppet server from 2.1.1-1 to 2.7.2-1, and at the same time the puppet agent was upgraded from 1.2.2-1 to 1.10.4-1.
I read several different posts on this forum and others, but I can't seem to get Hiera 5 to work properly. I tried a couple of different hiera.yaml config files yet…
I'm doing a minor upgrade from 2.1.1-1 to 2.7.2-1 and was wondering whether the size of the database makes a difference in how long the upgrade takes. It's currently managing approximately 3200+ nodes in production. Testing in our lab environment did not run long, as we only manage about 500 nodes…
We had an odd situation happen earlier this morning. Puppet server version 2.1.1 on RHEL7.
I have 4 puppet servers behind a load-balancing F5. One of our puppet servers' puppetserver-access.log grew (over 2 TB) to the point that it almost filled /var, which for a server is not good. I…
Since I don't have a setting in the file, it defaults to info. Unless
there is a bug.
On Monday, October 2, 2017 at 10:24:19 AM UTC-4, Peter Krawetzky wrote: …
So I recycled the puppetserver service, and over time the log now appears to be back to a normal size. I'm guessing something happened that caused puppetserver to log more than it should have.
On Monday, October 2, 2017 at 10:24:19 AM UTC-4, Peter Krawetzky wrote: …
Just installed a new copy of PostgreSQL 9.6 on a server that was running 9.4, and upgraded puppetdb to 5.2.4 on the same server.
After startup the pg_log file has been throwing the following error:
ERROR: canceling autovacuum task
I suspect puppetdb is holding a lock, but I'm not sure where. It also d…
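To see what is holding or waiting on locks when autovacuum gets cancelled, something along these lines against the puppetdb database may help (the database name is an assumption):

  # Show ungranted and exclusive locks along with the sessions holding them.
  sudo -u postgres psql -d puppetdb -c "
    SELECT a.pid, a.usename, a.state, a.query_start, l.mode, l.relation::regclass
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE NOT l.granted OR l.mode LIKE '%ExclusiveLock%'
    ORDER BY a.query_start;"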
I'm trying to get an SSL connection from puppetserver to a CouchDB NoSQL database for hiera lookup data. I have both hiera-http and lookup_http installed; however, the version of the lookup_http.rb file that gets installed by running the puppetserver gem install command is 1.0.3. The version I want…
Yeah, it looks like I did get this in reverse, but it doesn't explain why an SSL connection to CouchDB is not working.
On Tuesday, February 19, 2019 at 4:23:26 PM UTC-5, Peter Krawetzky wrote: …
I want to be able to ingest the puppet server logs into Splunk, but the owner of the directory is puppet:puppet and the permissions on /var/log/puppetlabs/puppet are rwxr-x---. Since "other" has no access, the splunk service will not be able to read the log files. Can I just change the permissions…
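Rather than loosening the mode for everyone, a narrower option is a POSIX ACL for just the splunk user; this assumes the filesystem supports ACLs and the Splunk forwarder runs as a local splunk user:

  # Grant only the splunk user read/traverse access, leaving puppet:puppet
  # ownership and the existing 0750 mode untouched.
  setfacl -R -m u:splunk:rX /var/log/puppetlabs/puppet
  # Default ACL so newly created log files inherit the same access
  setfacl -d -m u:splunk:rX /var/log/puppetlabs/puppet
  getfacl /var/log/puppetlabs/puppet   # verify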
Interesting, thanks!
On Tuesday, June 4, 2019 at 1:59:07 PM UTC-4, Peter Krawetzky wrote: …
I was looking through the documentation and couldn't find my answer. I want to use the supported PuppetDB and PostgreSQL modules to install and manage both. I don't want to use the default database directory "/var/lib/postgresql/..." but want to specify my own. What do I use to point th…
I've reviewed several 500-error posts in here, but the answers seem to differ based on the situation.
One of our developers modified code to include a parameter available in httpfile 0.1.9 called quick_check.
We have two installations of puppetserver, one in a lab domain and one in a production do…
…server/latest/admin-api/v1/environment-cache.html
>
> HTH,
> Justin
>
> On Thu, Jul 16, 2020 at 10:52 AM Peter Krawetzky wrote: …
OK, I figured out the curl command but I get this error:
[root@mypuppetserver private_keys]# curl -v --header "Content-Type: application/json" \
    --cert /etc/puppetlabs/puppet/ssl/certs/mypuppetserver.mydomain.com.pem \
    --key /etc/puppetlabs/puppet/ssl/private_keys/mypuppetserver.mydomain.com.pem \
    -…
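For comparison, a complete invocation against that admin endpoint would look roughly like this; the DELETE verb and the /puppet-admin-api/v1/environment-cache path come from the doc page referenced earlier, while port 8140, the --cacert path, and whether this certname is authorized for the admin API are assumptions:

  # Flush the environment cache via the puppetserver admin API.
  curl -v -X DELETE \
    --cert   /etc/puppetlabs/puppet/ssl/certs/mypuppetserver.mydomain.com.pem \
    --key    /etc/puppetlabs/puppet/ssl/private_keys/mypuppetserver.mydomain.com.pem \
    --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
    "https://mypuppetserver.mydomain.com:8140/puppet-admin-api/v1/environment-cache"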