>I believe that the journal is only written to when you make a change at the
filesystem level, like creating or removing files.<
This is more or less correct only when the _default_ journal strategy,
"ordered", is used. But even then, according to the docs, "metadata" is
journalled, which also includes timestamps.
>Could you please elaborate? What’s wrong with rock on ext4? <
On 22.01.18 17:47, reinerotto wrote:
Default ext4 uses a "journal" of the modifications, which adds I/O.
Timestamps of file modifications are additional I/Os. I do not think that
these features are required for rock. Disabling the journal completely will
cause loss of data (cached)
Hi!
>Could you please elaborate? What’s wrong with rock on ext4? <
Default ext4 uses a "journal" of the modifications, which adds I/O.
Timestamps of file modifications are additional I/Os. I do not think that
these features are required for rock. Disabling the journal completely will
cause loss of data (cached)
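Not from this thread, just to make the suggestion concrete: on a dedicated
cache volume one could keep metadata journalling but skip access-time updates
and relax data ordering, or remove the journal altogether. The device name
and mount point below are made-up examples.

  # /etc/fstab: no access-time updates; journal metadata only (data=writeback)
  /dev/xvdf  /cache  ext4  noatime,nodiratime,data=writeback  0 2

  # Removing the journal entirely (unmount the filesystem first);
  # as noted above, a crash can then lose recently cached data.
  tune2fs -O ^has_journal /dev/xvdf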
On 01/22/2018 02:39 AM, Ivan Larionov wrote:
> What’s wrong with rock on ext4?
ext4 does a lot more than rock needs in most environments. More useless
work usually means more overhead/worse performance. YMMV.
> Which filesystem works better for it?
In most cases, the simpler/dumber the filesystem, the better.
Could you please elaborate? What’s wrong with rock on ext4? Which filesystem
works better for it?
1500 IOPS is the EBS volume limit; it includes all I/O operations and has no
idea about the filesystem, since EBS just provides a block storage device.
On Jan 21, 2018, at 20:35, reinerotto wrote:
>> 1500 io
>1500 iops baseline performance< Does this include management operations of
the filesystem used? And which filesystem is used? ext4 might be a bad
choice, unless significantly "degenerated".
On 01/18/2018 05:35 PM, Ivan Larionov wrote:
> * Multiple squid swap in or swap out ops reading/writing contiguous
> blocks could be merged into one 256KB IO operation.
> * Random squid operations could be handled as single 32KB IO operation.
FWIW, on a busy Squid with a large disk cache in a st
Thanks Amos.
According to AWS docs:
> I/O size is capped at 256 KiB for SSD volumes
> When small I/O operations are physically contiguous, Amazon EBS attempts
> to merge them into a single I/O up to the maximum size. For example, for
> SSD volumes, a single 1,024 KiB I/O operation counts as 4 operations.
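Spelling out the arithmetic (my own numbers, using the 256 KiB cap quoted
above and the 32 KiB request size mentioned earlier in the thread): eight
physically contiguous 32 KiB Squid I/Os (8 x 32 KiB = 256 KiB) can be merged
and counted by EBS as a single operation, while one 1,024 KiB I/O counts as
1,024 / 256 = 4 operations. So EBS read_ops/write_ops and Squid-level swap
operations are not expected to match one to one.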
On 01/18/2018 04:04 PM, Ivan Larionov wrote:
> So if I understand you correctly, max-swap-rate doesn't limit disk IOPS
Correct. Squid does not know what the disk- or OS-level stats are.
> but limits squid swap ops instead
If you define a "swap op" as reading or writing a single I/O page for
th
On 19/01/18 12:04, Ivan Larionov wrote:
Thank you for the fast reply!
read_ops and write_ops are AWS EBS metrics and in general they correlate
with the OS-level reads/s and writes/s stats which iostat shows.
So if I understand you correctly, max-swap-rate doesn't limit disk IOPS
but limits squid swap ops
Thank you for the fast reply!
read_ops and write_ops are AWS EBS metrics and in general they correlate with
the OS-level reads/s and writes/s stats which iostat shows.
So if I understand you correctly, max-swap-rate doesn't limit disk IOPS but
limits squid swap ops instead and every squid operation could in
On 01/18/2018 03:16 PM, Ivan Larionov wrote:
> The cache_dir max-swap-rate documentation says that swap-in requests
> contribute to the measured swap rate. However, in our squid 4 load test we
> see that read_ops + write_ops significantly exceeds the max-swap-rate we
> set and squid doesn't limit it.
In t
Hello.
The cache_dir max-swap-rate documentation says that swap-in requests
contribute to the measured swap rate. However, in our squid 4 load test we
see that read_ops + write_ops significantly exceeds the max-swap-rate we set
and squid doesn't limit it.
I tried to set it to 200 to confirm that it actual
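For reference, the kind of cache_dir line being tested would look roughly
like this; the path, cache size, slot size and swap-timeout below are
placeholders, only max-swap-rate=200 is the value mentioned above:

  cache_dir rock /var/spool/squid/rock 16000 slot-size=32768 max-swap-rate=200 swap-timeout=300

max-swap-rate is in swap operations per second and swap-timeout is in
milliseconds.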
On 12/04/2015 08:37 AM, Hussam Al-Tayeb wrote:
> Since this is a database, is it possible for part of the database to
> get corrupted through a crash or an incorrect poweroff?
It depends on your definition of "corruption". Yes, it is possible that
some database updates will be incomplete because of a
On 5/12/2015 4:37 a.m., Hussam Al-Tayeb wrote:
> Hi. I am using squid with rock storage right now to cache computer
> updates for my Linux computers. It works well.
> Since this is a database, is it possible for part of the database to
> get corrupted through a crash or an incorrect poweroff?
> I know
Hi. I am using squid with rock storage right now to cache computer
updates for my Linux computers. It works well.
Since this is a database, is it possible for part of the database to
get corrupted through a crash or an incorrect poweroff?
I know from SQL databases that incorrect shutdowns can cause bin
On 2/06/2015 5:13 a.m., Hussam Al-Tayeb wrote:
> Hello, I added a 5000MB rock storage entry in squid.conf.
> When it filled up, the squid cache manager said:
> Storage Swap size: 512 KB
> Storage Swap capacity: 100.0% used, 0.0% free
>
> but du -BM says 4703M is the size of the rock storage file.
Hello, I added a 5000MB rock storage entry in squid.conf.
When it filled up, the squid cache manager said:
Storage Swap size: 512 KB
Storage Swap capacity: 100.0% used, 0.0% free
but du -BM says 4703M is the size of the rock storage file.
ls -l says 524288.
Since it is f