Thank you Colin and Marnen for your responses.

Marnen Laibow-Koser wrote:
> Colin Law wrote:
>> On 25 April 2010 00:45, Mike P. <li...@ruby-forum.com> wrote:
>>>...
>>> I think if people could just get over the "don't optimize too early"
>>> mantra, and realize that this can't possibly be best move for everyone,
>>> a lot of future stress could be avoided, for both the business owner and
>>> the customer.
>> 
>> Optimising "too early" is a bad thing by definition.  If it was a good
>> thing then it would not be "too early".
> 

But by whose definition? Someone stuck that "too early" in there, and 
it's an extremely relative and subjective phrase.

If "too early" means any time before one has noticed and measured a 
decrease in performance (which, by the way, probably has to be fairly 
significant before it's noticeable just by using the site), then how 
much worse will performance get between that initial observation and 
actually implementing the upgrades?

Is it really worth not doing anything about it until we see it for 
ourselves, even when we can use the experience of others to lessen the 
effect?

> Exactly.  You're trying to justify a bad idea, but it's still a bad 
> idea.
> 
> No one is saying that you shouldn't do research on possible future 
> optimizations.  But don't implement them until you know where your 
> performance problems *are*, not where you assume they'll be.
> 
> "Premature optimization is the root of all evil." --Donald Knuth
> 

Okay, but why is it considered an assumption when there are 
dozens/hundreds of posts out there about people dealing with performance 
issues in tables that have grown to a large number of rows?

And what about the cases where your research shows that, based on the 
expectations for the table (lots of insertions and deletions, 
increasing size as new users come in, etc.), this particular table will 
have the same issues as the tables mentioned in those posts?

It's just kind of strange to me that there's no room for upfront 
optimizations, not even little ones meant to keep up good performance 
for a longer period.
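To be concrete, the kind of "little" upfront optimization I have in mind is nothing more exotic than indexing the columns the table will obviously be queried on. A rough sketch (the table and column names here are made up for illustration, not from a real schema):

```ruby
# Hypothetical migration: index the columns a large, hot table will
# obviously be filtered and sorted on. Names are examples only.
class AddIndexesToMessages < ActiveRecord::Migration
  def self.up
    # Most lookups are expected to be "rows for a given user, newest
    # first", so a composite index covers both the filter and the sort.
    add_index :messages, [:user_id, :created_at]
  end

  def self.down
    remove_index :messages, [:user_id, :created_at]
  end
end
```

That's the scale of preparation I mean, not a rewrite of the data layer.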

It's almost like driving a car that you know is going to run out of gas 
on a road trip into a rural area you've never been to before (that 
unfamiliar road being "not knowing how fast something will grow"). 
Before you left, you purposely chose not to pack an extra gas canister 
to get you a little farther, because the car would run fine with the 
amount of gas it had when you left.

But you know what you have to do when the car does start to run out of 
gas: add some fuel.

So you drive and drive, and then you happen to glance down at the 
dashboard and notice that the gas light is on. Then, and *only* then, 
you start looking for a gas station. At this point, you get a bit 
stressed because you don't know where you are, and you don't know how 
long the gas will last. Your primary focus is to get more gas. You know 
*what* you have to do, but it will take some time to do it.

So, you just try and keep the car running and hope you get to a gas 
station before the car starts chugging or stalls.

The driver knew the car would run out of gas. He knew of other 
situations where people drove their cars to that point and ended up 
stalling (i.e. timeouts and major downtime). The preventative measures 
would have been to fill up before leaving; to pack an "emergency" gas 
canister to help the car reach a gas station if needed (i.e. if the 
database-layer optimizations weren't in place in time); or to plan the 
route around the car's known fuel efficiency so that he knows exactly 
when and where to get gas (a bit complicated, and probably not 
completely accurate).

So, I'm trying to get the fuel before starting, not when I absolutely 
need it.

I know this is a bit of a loose analogy, but hopefully you see where I'm 
going. We can learn from the experience of others. You've seen or heard 
companies mention "growing pains" on their websites, or in podcasts and 
interviews. Why wouldn't I try to lessen the effect of a known 
performance issue?

Does what I'm saying make sense though? I'm just trying to take the 
experience of others, learn something from it, and prolong the good 
performance of the most heavily used table in the database. This will 
help keep the customers happy, and make my experience less stressful in 
the future.

If you guys tell me that what I'm trying to do will either

1) Take a long time, or
2) Won't prolong the good performance of the database/index (or even 
have a negative effect on it)

then it would be a different story.

Again, I'm not trying to do anything drastic here; I'm just asking for 
help on how to handle the multiple-table model at the application layer 
in a Rails app, to avoid too many 'has_many' associations being loaded 
for each User object.
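To make the question concrete, here's roughly the direction I'm exploring (all names are hypothetical, and I'm assuming the split is a simple hash partition by user id): a small helper that computes which physical table a user's rows live in, so User doesn't need one has_many per partitioned table.

```ruby
# Sketch of the idea (names are made up). Instead of declaring a
# has_many for every partitioned table, compute which physical table
# a given user's rows live in.

NUM_PARTITIONS = 4  # assumed number of physical tables: messages_0..messages_3

# Pick the physical table for a user by hashing the id.
def partition_table_for(user_id, partitions = NUM_PARTITIONS)
  "messages_#{user_id % partitions}"
end

# In a model it might be wired up something like this (untested sketch):
#
#   class User < ActiveRecord::Base
#     # instead of: has_many :messages_0; has_many :messages_1; ...
#     def messages
#       Message.find_by_sql(
#         ["SELECT * FROM #{partition_table_for(id)} WHERE user_id = ?", id])
#     end
#   end

puts partition_table_for(10)  # user 10 -> "messages_2"
puts partition_table_for(7)   # user 7  -> "messages_3"
```

Is something along these lines reasonable, or is there a more idiomatic way to route the association to the right table?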

Any advice/tips on that?

Thanks again,
Mike


>> 
>> Colin
> 
> Best,
> --
> Marnen Laibow-Koser
> http://www.marnen.org
> mar...@marnen.org

-- 
Posted via http://www.ruby-forum.com/.

-- 
You received this message because you are subscribed to the Google Groups "Ruby 
on Rails: Talk" group.
