Can I understand it like this: if the length is between 1MB and 1.25MB, then fragments happen, otherwise not? But I'm confused by the logic. If on the first pass the write size is larger than 1.25MB, there is no trimming, and we go to Lagain and try a second time; but on the next pass the write_len may be between 1MB and 1.25MB, so multiple fragments happen anyway. So although we hoped to write as much as we can when we meet a large object, in fact multiple fragments do happen. Is that what we wanted?
On Sun, May 29, 2011 at 12:25 AM, John Plevyak <jplev...@acm.org> wrote:

> I assume that this is related to the SSD code handling multiple
> fragments...
>
> The default target fragment size is 1MB. This code says that if we have
> between 1MB and 1.25MB then write only 1MB as we will be able to use
> the fast non-fragmenting buffer freelist to hold the 1MB on read. Lower
> than 1MB it will not do the write until a close or more data arrives.
> Greater than 1.25 MB and we assume that we are falling behind on writing
> (when the system is not overloaded this code will be called frequently
> as data arrives) and we sacrifice some potential inefficiency on read by
> writing as much as we can.
>
> On Sat, May 28, 2011 at 7:55 AM, 张练 <wahu0315...@gmail.com> wrote:
>
> > I'm considering the multi fragment issue. In iocore/cache/CacheWrite.cc,
> > function CacheVC::openWriteMain, there are phrases:
> >
> >     if (length > target_fragment_size() &&
> >         (length < target_fragment_size() + target_fragment_size() / 4))
> >       write_len = target_fragment_size();
> >     else
> >       write_len = length;
> >
> > I want to know why it does like that?
> > If one object's size is between [target_fragment_size() +
> > target_fragment_size() / 4, MAX_FRAG_SIZE), then does multi fragments
> > happen?
> >
> > --
> > Best regards,
> > mohan_zl

--
Best regards,
Lian Zhang