Well FWIW, I can't replicate this when upgrading from 1.0.1 to 1.3.2.
I even attempted to name things in the same way, but no luck:

https://gist.github.com/kbarber/6184213.

Looking at the schema changes in the code, not much has changed with
respect to parameters since 1.0.1 either:

https://github.com/puppetlabs/puppetdb/blob/master/src/com/puppetlabs/puppetdb/scf/migrate.clj#L326-L331
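
(If you want to double-check that the upgrade actually ran the migrations,
you could list what the database has recorded - a rough sketch only, assuming
the schema_migrations table that migrate.clj writes to, plus the default
database name and user:)

*******************************
# list the schema migrations PuppetDB has recorded as applied
psql -U puppetdb puppetdb -c 'SELECT version FROM schema_migrations ORDER BY version;'
*******************************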

So it's probably not specifically a bug with the schema change.
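
That said, if you want to see whether any other stored resources are in the
same state, a direct query against Postgres along these lines might help -
again just a sketch, assuming the catalog_resources and resource_params
tables from the current schema. A resource can legitimately have zero
parameters, so treat any hits as candidates rather than proof:

*******************************
# find stored resources that have no parameter rows at all
psql -U puppetdb puppetdb <<'SQL'
SELECT cr.type, cr.title, cr.resource
FROM catalog_resources cr
LEFT JOIN resource_params rp ON rp.resource = cr.resource
WHERE rp.resource IS NULL;
SQL
*******************************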

On Thu, Aug 8, 2013 at 12:48 PM, Ken Barber <k...@puppetlabs.com> wrote:
>> We've come across a rather strange problem where the parameters of some
>> resources in PuppetDB are now empty.
>>
>> We have a Nagios server collecting resources from PuppetDB and we've started
>> to get failures like this for one resource type:
>>
>> Error: Could not retrieve catalog from remote server: Error 400 on SERVER:
>> Must pass host_alias to Nagios::Config::Host[hostname] on node nagiosserver
>>
>> The Puppet manifest that defines that resource is below; it should be
>> impossible for host_alias not to be populated:
>>
>> *******************************
>> define nagios::host::host($host = $::fqdn, $tag = undef) {
>>   @@nagios::config::host { $host:
>>     host_alias => $host,
>>     address    => $host,
>>     tag        => $tag,
>>   }
>> }
>> *******************************
>>
>> If we query PuppetDB directly (redacted), there are indeed no parameters at
>> all on this resource:
>>
>> *******************************
>> # curl -H 'Accept: application/json' -X GET
>> 'https://puppet:8081/v2/resources' --cacert
>> /var/lib/puppet/ssl/ca/ca_crt.pem --cert
>> /var/lib/puppet/ssl/certs/puppet.pem  --key
>> /var/lib/puppet/ssl/private_keys/puppet.pem --data-urlencode 'query=["=",
>> "type", "Nagios::Config::Host"]' | jgrep "certname=hostname"
>> [
>>   {
>>     "resource": "8ba4379c364b9dba9d18836ef52ce5f4f82d0468",
>>     "parameters": {
>>     },
>>     "title": "hostname",
>>     "exported": true,
>>     "certname": "hostname",
>>     "type": "Nagios::Config::Host",
>>     "sourceline": 27,
>>     "sourcefile":
>> "/etc/puppet/environments/production/modules/nagios/manifests/host/host.pp",
>>     "tags": [
>>     ]
>>   }
>> ]
>> *******************************
>>
>> After a Puppet run and a new catalog, this resource now looks normal:
>>
>> *******************************
>> # curl -H 'Accept: application/json' -X GET
>> 'https://puppet:8081/v2/resources' --cacert
>> /var/lib/puppet/ssl/ca/ca_crt.pem --cert
>> /var/lib/puppet/ssl/certs/puppet.pem  --key
>> /var/lib/puppet/ssl/private_keys/puppet.pem --data-urlencode 'query=["=",
>> "type", "Nagios::Config::Host"]' | jgrep "certname=hostname"
>> [
>>   {
>>     "type": "Nagios::Config::Host",
>>     "sourceline": 27,
>>     "title": "hostname",
>>     "certname": "hostname",
>>     "resource": "8ba4379c364b9dba9d18836ef52ce5f4f82d0468",
>>     "parameters": {
>>       "address": "hostname",
>>       "tag": "tag",
>>       "host_alias": "hostname"
>>     },
>>     "exported": true,
>>     "sourcefile":
>> "/etc/puppet/environments/production/modules/nagios/manifests/host/host.pp",
>>     "tags": [
>>     ]
>>   }
>> ]
>> *******************************
>>
>> These nodes do not have Puppet run on them regularly. We did upgrade from
>> PuppetDB 1.0.1-1.el6.noarch to 1.3.2-1.el6.noarch about 3 weeks ago. We
>> don't do any automatic report or node expiry.
>>
>> This started happening back on the 2nd of August; halfway through that day,
>> the Puppet runs on the Nagios server started failing with this error.
>> Thinking back, around that time I had a broken Nagios module and a lot of
>> manifests were failing to compile, but I fixed that, re-ran the failures,
>> and everything was OK. PuppetDB only stores the last catalog, so there's no
>> way a broken catalog could have stayed there, right?
>>
>> I've fixed this by refreshing the catalog of all nodes in PuppetDB, but I've
>> got no idea how it got into this state. Any ideas?
>
> No good idea yet, but there is something suspicious in your curl
> responses - the "resource" hash. Did you obfuscate this yourself on
> purpose? The hashes in the first and second responses are
> identical. That hash is calculated over the contents of the resource,
> including its parameters - so it seems impossible that PuppetDB arrived
> at the same hash both with and without parameters.
>
> Maybe - just maybe - the hashes really were identical, and the fault was
> that PuppetDB was somehow not returning the parameters. That might indicate
> some sort of integrity problem, whereby the link to the parameters in the
> RDBMS was lost somehow, although this is the first time I've heard of that,
> Luke. Maybe this was a schema change failure during the upgrade from 1.0.1
> to 1.3.2? I'd have to check what changed in the schema between those two
> points to determine whether that is likely, however. Does the timing of
> your upgrade, and the first time you saw this fault, line up with such a
> possibility? Remember, a schema change will only occur after a restart of
> PuppetDB ... (so perhaps consult your logs to see when that happened after
> the upgrade).
>
> Let me at least try to replicate while I await your responses.
>
> ken.
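
PS: on the identical hashes above - if this ever shows up again, one quick
way to tell whether the parameter rows for that resource were genuinely
missing, or stored but simply not returned, would be to look them up directly
in Postgres (again a sketch only, assuming the resource_params table and your
database name):

*******************************
psql -U puppetdb puppetdb -c \
  "SELECT name, value FROM resource_params
   WHERE resource = '8ba4379c364b9dba9d18836ef52ce5f4f82d0468';"
*******************************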
