Craig Ringer wrote:
Phoenix Kiula wrote:
1. What extra tax will this constraint levy on an INSERT or UPDATE on this table? There are about 100,000 inserts a day, and over three times as many UPDATEs. The concurrency is pretty high -- I mean sometimes 1,000 users at the same time, but no more than that. If the additional cost of insertion/updating is not too heavy, I suppose this could be a nice approach.
...
I'd try a functional index first. If that didn't do the job, I'd use a trigger-maintained column _purely_ as an optimisation (ie I could drop it and lose no data) that stored text representations of the data. Honestly, though, I expect the functional index would be more than good enough and probably wouldn't have much of an INSERT/UPDATE cost. Again, of course, I'd test before setting anything in stone.
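For readers following along, a functional (expression) index of the kind Craig suggests might look like this -- the table and column names below are made up for illustration, since the original schema wasn't posted; it assumes an inet column whose text form is what gets looked up:

```sql
-- Hypothetical table "visits" with an inet column "ip".
-- host() is PostgreSQL's built-in function returning the text
-- form of an inet address; indexing the expression lets text
-- lookups use the index without storing a duplicate column.
CREATE INDEX visits_ip_text_idx ON visits (host(ip));

-- A query whose WHERE clause matches the indexed expression
-- exactly can use the index:
SELECT * FROM visits WHERE host(ip) = '192.168.1.10';
```

Note the planner only considers the index when the query expression matches the indexed expression, so checking the plan with EXPLAIN before settling on this versus a trigger-maintained column is worthwhile, as Craig says.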
And do not forget that you know beforehand whether you are going to look up a user or an IP. So I fail to see what problem needs to be solved here :( Maybe we can get some enlightenment on where each piece of data comes from and what its usage is going to be? Maybe we can then work out a better solution altogether? Peephole optimisation is great, but sometimes the effect is better if you just open the door :-)

Tino