[go-nuts] Re: Concurrent access to map

2017-05-16 Thread Val
It is not safe to do so, and you're understandably asking yourself "but why?
My goroutines are working on different, already existing entries, so what's
going on exactly?".
It turns out that a Go built-in map is an opaque structure that is free to
completely reorganize itself on any write operation. Thus, it must be
regarded as a whole when thinking about data races. The docs are very
explicit about this restriction, and the runtime and the race detector do
their best to crash early in order to guide you into enforcing the rule.

Although it would be "conceivable" for a map to reorganize itself on read
operations (say, to move recently or frequently accessed entries to
optimized buckets), Go's built-in maps don't do that, so it is safe to have
multiple concurrent readers, as long as no writer wolf enters.
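
If writers do need to join the readers later, a minimal sketch (my own illustration, not from this thread) is to guard the map with a sync.RWMutex:

package main

import "sync"

// guarded wraps a plain map with a sync.RWMutex so that any number of
// readers, or a single writer, can use it safely.
type guarded struct {
	mu sync.RWMutex
	m  map[string]int
}

func (g *guarded) get(k string) (int, bool) {
	g.mu.RLock()
	defer g.mu.RUnlock()
	v, ok := g.m[k]
	return v, ok
}

func (g *guarded) set(k string, v int) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.m[k] = v
}

func main() {
	g := &guarded{m: map[string]int{"a": 1}}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			g.set("a", i) // concurrent writes are now safe
			g.get("a")    // and so are concurrent reads
		}(i)
	}
	wg.Wait()
}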

HTH
Val

On Tuesday, May 16, 2017 at 7:10:56 AM UTC+2, Yan Tang wrote:
>
> Hi,
>
> I am aware that a Go map is not safe for concurrent access, especially
> when there is at least one writer.
>
> However, I want to know: if the map has been pre-populated, and
> different goroutines access (read/write) different key/value pairs, is it
> safe to do so?  No keys are deleted or added after the
> initialization.
>
> I think it should be safe (I have been doing this in C++ quite frequently)
> but just want to double-check.  Thanks.
>
> Yan
>



[go-nuts] Re: [ANN] Ugarit

2017-05-16 Thread Luis Furquim


On Monday, May 15, 2017 at 09:30:20 UTC-3, mhh...@gmail.com wrote:
>
> On the code itself,
> I suspect you don't know yet about *go fmt*;
> I strongly suggest you use it,
> just because it's a great idea.
>
> https://github.com/luisfurquim/ugarit/blob/master/epub20/book.go#L66
> This hurts :O *if (len(metatag)>0) {*

Ok. Now it is gofmt'ed! It has also fixed the extra parentheses!

 

> See also go vet.
>
I'll check this soon.
 
 

> https://github.com/luisfurquim/ugarit/blob/master/epub30/book.go#L126
> I don't think you need to init values here, see
> https://play.golang.org/p/eN9cT9gQlE

Ok! Fixed.

 

> https://github.com/luisfurquim/ugarit/blob/master/epub30/book.go#L137
> This kind of construction is redundant; en is the default value.

Will check this soon.

 

> https://github.com/luisfurquim/ugarit/blob/master/epub30/book.go#L134
> This if *seems* useless

When I applied some fixes, the line numbering changed a bit.
The 'if' you are referring to is the language-handling one? If so, the language
reference appears in more than one place in the EPub, so I ask for it once
and then set it everywhere.
 
 

> https://github.com/luisfurquim/ugarit/blob/master/epub30/book.go#L456
> Twice consecutive and identical conditions.

I suppose you are referring to the Manifest initialization, which indeed was
repetitive. Fixed.

 

> https://github.com/luisfurquim/ugarit/blob/master/epub30/book.go#L464
> I wonder.
>
I didn't get what is being pointed out here. Maybe the line renumbering is
confusing me. Or maybe you are pointing at the method naming?

 

>
> I must say you took great care with error values and documentation
> ...I'm a small player in comparison :p

Thank you!

 

> Was it helpful? I don't know.

A lot! Thank you again!




[go-nuts] Re: Concurrent access to map

2017-05-16 Thread Egon


On Tuesday, 16 May 2017 10:57:46 UTC+3, Val wrote:
>
> It is not safe to do so and you're understandably asking yourself "but 
> why? my goroutines are working on different already existing entries, so 
> what's going on exactly?".
> It turns out that a go builtin map is an opaque structure that is free to 
> completely reorganize itself at any write operation. Thus, it must be 
> regarded as a whole when thinking about data races. The docs are very 
> explicit about this restriction, and the runtime and the race detector do 
> their best at crashing to guide you into enforcing the rule.
>
> Although it would be "conceivable" for a map to reorganize itself on read 
> operations (say, to move recently or frequently accessed entries to 
> optimized buckets), go builtin maps don't do that and it is safe to have 
> multiple concurrent readers, as long as no writer wolf enters.
>

There are additional, subtler issues involved. Because the compiler
doesn't know that the map is being accessed concurrently, it may make
optimizations that are not safe...

Even a simple example like this can have problems:

var data = map[string]bool{}

func watch() {
	var x int
	data["running"] = true
	for data["running"] {
		x++
	}
}

func kill() {
	time.Sleep(time.Second)
	data["running"] = false
}

func main() {
	go kill()
	watch()
}


This program may or may not terminate, because the compiler is free to
optimize the watch function as:

func watch() {
	var x int
	data["running"] = true
	tmp := data["running"]
	for tmp {
		x++
	}
}
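
One way to make the example race-free (my sketch, not part of the original message) is to replace the map-based flag with sync/atomic, which both serializes the accesses and prevents the load from being hoisted out of the loop:

package main

import (
	"sync/atomic"
	"time"
)

var running int32 // accessed only through sync/atomic

func watch() {
	var x int
	atomic.StoreInt32(&running, 1)
	for atomic.LoadInt32(&running) == 1 {
		x++
	}
}

func kill() {
	time.Sleep(time.Second)
	atomic.StoreInt32(&running, 0)
}

func main() {
	go kill()
	watch()
}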


+ Egon


> HTH
> Val



[go-nuts] Re: Is it ok to separate Open and Close logic?

2017-05-16 Thread ojucie
My dear friend st ov,

I think you'd better have a very compelling reason NOT to use defer, because
every Go programmer on Earth will expect the defer pattern in such
scenarios. By avoiding defer, the code becomes different from the norm and
therefore harder to understand. The poor reader will keep wondering what
reasons you had to do things differently.
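
A minimal sketch of the usual compromise (my own illustration, with a placeholder file name): the function that opens the file returns it, and the caller takes ownership of the Close via defer:

package main

import (
	"fmt"
	"os"
)

// openConfig opens the file and hands it to the caller, which owns the Close.
func openConfig(path string) (*os.File, error) {
	return os.Open(path)
}

func doFileStuff(path string) error {
	f, err := openConfig(path)
	if err != nil {
		return err
	}
	defer f.Close() // the deferred Close lives next to the code that decides the lifetime

	// do stuff with f
	fi, err := f.Stat()
	if err != nil {
		return err
	}
	fmt.Println(fi.Size())
	return nil
}

func main() {
	if err := doFileStuff("example.txt"); err != nil {
		fmt.Println(err)
	}
}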

On Wednesday, May 10, 2017 at 12:55:35 PM UTC-3, st ov wrote:
>
> Most examples of opening and closing a file have both calls in the same 
> function
>
> func DoFileStuff() {
>   file, _ := os.Open("file")
>   defer file.Close()
>
>   // do stuff with file
> }
>
> This makes sure any open file is closed.
> But how common is it to separate that logic into functions? 
> Should this be absolutely avoided as it could result in a file left open?
>
>
> func DoStuff(f string) {
>   file := OpenFile(f)
>
>   // call DoingStuff for some practical reason
>   DoingStuff(file)
> }
>
> func DoingStuff(f *os.File) {
>    // do stuff on file
>    Cleanup(f)
> }
>
> func OpenFile(f string) *os.File {
>    file, _ := os.Open(f)
>    return file
> }
>
> func Cleanup(f *os.File) {
>    f.Close()
> }
>
>
>
>



[go-nuts] exec.CommandContext wrong behavior

2017-05-16 Thread yuri . shakhmatov
Hi, all.

I have some code:

...

func RunTimeout(c *exec.Cmd, cnl context.CancelFunc) error {
	defer cnl()
	if err := c.Start(); err != nil {
		return err
	}

	return c.Wait()
}

func main() {
	ctx, cnl := context.WithTimeout(context.Background(), 10*time.Second)

	if err := RunTimeout(exec.CommandContext(ctx, "test.sh"), cnl); err != nil {
		if err == context.DeadlineExceeded {
			println("timed out")
		} else {
			println(err.Error())
		}
	}
}

Script *test.sh* looks like:
#!/bin/bash
sleep 60s


If exec.CommandContext times out, it kills the process. But in my case
there are two processes: the parent bash process *test.sh* and its child
process running *sleep 60s*. Only the bash process *test.sh* gets killed
after 10 seconds.

Is this normal behavior? It seems strange.

--
Best regards, Yuri



[go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Manish Rai Jain
Hey guys,

We wrote this simple program to try to achieve what Fio (linux program) 
does. Fio can easily achieve 100K IOPS on an Amazon i3.large instance with 
NVMe SSD. However, with Go we're unable to achieve anything close to that.

https://github.com/dgraph-io/badger-bench/blob/master/randread/main.go

This program should be simple to run. It uses Fio-generated files, and it
basically tries 3 things: 1. random reads in a single goroutine (turned off
by default), 2. random reads using a specified number of goroutines, and
3. the same as 2, but using a channel.

3 is slower than 2 (of course). But 2 is never able to achieve the IOPS
that Fio can achieve. I've tried other things, to no avail. What I notice is
that Go and Fio are close to each other as long as the number of goroutines
is <= the number of cores. Once you exceed the number of cores, Go stays
put, while Fio's IOPS keep improving until they reach the SSD's limits.

So, how could I change my Go program to realize the true throughput of an
SSD? Or is this something that needs further work in Go (I saw a thread
about libaio)?

Cheers,
Manish



[go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Dave Cheney
I'd start with the execution profile, especially how many goroutines are running
concurrently. Your workload may be accidentally sequential due to the
interaction between the scheduler and the syspoll background thread.
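
One way to capture that execution profile (a sketch using the standard runtime/trace package; the output file name is arbitrary):

package main

import (
	"log"
	"os"
	"runtime/trace"
)

func main() {
	// Write an execution trace to trace.out; inspect it later with `go tool trace trace.out`.
	f, err := os.Create("trace.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := trace.Start(f); err != nil {
		log.Fatal(err)
	}
	defer trace.Stop()

	// ... run the benchmark workload here ...
}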



[go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread canjian456
Generational and compacting GC have long been considered best practice, but
Go doesn't adopt them. Who can tell me the reason?



Re: [go-nuts] exec.CommandContext wrong behavior

2017-05-16 Thread Ian Lance Taylor
On Tue, May 16, 2017 at 4:56 AM,   wrote:
>
> I have some code:
>
> ...
>
> func RunTimeout(c *exec.Cmd, cnl context.CancelFunc) error {
>defer cnl()
>if err := c.Start(); err != nil {
>   return err
>}
>
>return c.Wait()
> }
>
> func main() {
>ctx, cnl := context.WithTimeout(context.Background(), 10*time.Second)
>
>if err := RunTimeout(exec.CommandContext(ctx, "test.sh"), cnl); err
> != nil {
>   if err == context.DeadlineExceeded {
>  println("timed out")
>   } else {
>  println(err.Error())
>   }
>}
> }
>
> Script test.sh looks like:
> #!/bin/bash
> sleep 60s
>
>
> If exec.CommandContext timed out it would kill process. But in my case there
> are two process: parent bash process test.sh and its child process with
> sleep 60s. In this case only bash process test.sh will be killed after 10
> seconds.
>
> Is this normal behavior? It seems strange.

Yes, that is expected behavior.  Your use of /bin/bash suggests that
you are using a Unix system (you didn't say), and on a Unix system
there is no general mechanism for sending a signal to a process and
all of its children.  You can use SysProcAttr.Setpgid to put the child
process into a new process group, and then use syscall.Kill with a
negative number to send a signal to all members of that process group,
but that has various other consequences and exec.CommandContext won't
do it for you.
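
A minimal sketch of that approach (my illustration, Unix-only, with an arbitrary 10-second timeout; it is not something exec.CommandContext does for you):

package main

import (
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("./test.sh")
	// Put the child, and anything it spawns, into its own process group.
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case <-time.After(10 * time.Second):
		// A negative pid signals every member of the process group.
		syscall.Kill(-cmd.Process.Pid, syscall.SIGKILL)
		<-done
	case <-done:
	}
}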

Ian



Re: [go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Ian Lance Taylor
On Tue, May 16, 2017 at 4:59 AM, Manish Rai Jain  wrote:
>
> 3 is slower than 2 (of course). But, 2 is never able to achieve the IOPS
> that Fio can achieve. I've tried other things, to no luck. What I notice is
> that Go and Fio are close to each other as long as number of Goroutines is
> <= number of cores. Once you exceed cores, Go stays put, while Fio IOPS
> keeps on improving, until it reaches SSD thresholds.

One thing I notice about your program is that each goroutine is
calling rand.Intn and rand.Int63n.  Those functions acquire and
release a lock, so that single lock is being contested by every
goroutine.  That's an unfortunate and unnecessary slowdown.  Give each
goroutine its own source of pseudo-random numbers by using rand.New.
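
A sketch of that suggestion (illustrative only; the loop bounds and seeds are arbitrary):

package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	for g := 0; g < 4; g++ {
		wg.Add(1)
		go func(g int) {
			defer wg.Done()
			// Each goroutine gets its own *rand.Rand, so there is no shared lock.
			r := rand.New(rand.NewSource(time.Now().UnixNano() + int64(g)))
			sum := 0
			for i := 0; i < 1000; i++ {
				sum += r.Intn(100)
			}
			fmt.Println(g, sum)
		}(g)
	}
	wg.Wait()
}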

You also have a point of contention on the local variable i, which you
are manipulating using atomic functions.  It would be cheaper to give
each goroutine a number of operations to do rather than to compute
that dynamically using a contended address.

I'll also note that if a program that should be I/O bound shows a
behavior change when the number of parallel goroutines exceeds the
number of CPUs, then it might be interesting to try setting GOMAXPROCS
to be higher.  I don't know what effect that would have here, but it's
worth checking.

Ian



Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread Ian Lance Taylor
On Tue, May 16, 2017 at 2:01 AM,   wrote:
>
> Generational and Compact gc have already been thought best practice. But
> golang doesn't adopt it. Who can tell me the reason?

This has been discussed in the past.

Ignoring details, the basic advantages of a compacting GC are 1) avoid
fragmentation, and 2) permit the use of a simple and efficient bump
allocator.  However, modern memory allocation algorithms, like the
tcmalloc-based approach used by the Go runtime, have essentially no
fragmentation issues.  And while a bump allocator can be simple and
efficient for a single-threaded program, in a multi-threaded program
like Go it requires locks.  In general it's likely to be more
efficient to allocate memory using a set of per-thread caches, and at
that point you've lost the advantages of a bump allocator.  So I would
assert that, in general, with many caveats, there is no real advantage
to using a compacting memory allocator for a multi-threaded program
today.  I don't mean that there is anything wrong with using a
compacting allocator, I'm just claiming that it doesn't bring any big
advantage over a non-compacting one.

Now let's consider a generational GC.  The point of a generational GC
relies on the generational hypothesis: that most values allocated in a
program are quickly unused, so there is an advantage for the GC to
spend more time looking at recently allocated objects.  Here Go
differs from many garbage collected languages in that many objects are
allocated directly on the program stack.  The Go compiler uses escape
analysis to find objects whose lifetime is known at compile time, and
allocates them on the stack rather than in garbage collected memory.
So in general, in Go, compared to other languages, a larger percentage
of the quickly-unused values that a generational GC looks for are
never allocated in GC memory in the first place.  So a generational GC
would likely bring less advantage to Go than it does for other
languages.
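
A tiny illustration of that escape analysis (my example, not from this thread); building it with `go build -gcflags=-m` reports which values are moved to the heap:

package main

import "fmt"

type point struct{ x, y int }

// The value never outlives the call, so it stays on the goroutine's stack.
func sumOnStack() int {
	p := point{1, 2}
	return p.x + p.y
}

// Returning a pointer makes the value outlive the call, so it escapes to the heap.
func escapes() *point {
	p := point{3, 4}
	return &p
}

func main() {
	fmt.Println(sumOnStack(), escapes().x)
}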

More subtly, the implicit point of most generational GC
implementations is to reduce the amount of time that a program pauses
for garbage collection.  By looking at only the youngest generation
during a pause, the pause is kept short.  However, Go uses a
concurrent garbage collector, and in Go the pause time is independent
of the size of the youngest generation, or of any generation.  Go is
basically assuming that in a multi-threaded program it is better
overall to spend slightly more total CPU time on GC, by running GC in
parallel on a different core, rather than to minimize GC time but to
pause overall program execution for longer.

All that said, generational GC could perhaps still bring significant
value to Go, by reducing the amount of work the GC has to do even in
parallel.  It's a hypothesis that needs to be tested.  Current GC work
in Go is actually looking closely at a related but different
hypothesis: that Go programs may tend to allocate memory on a
per-request basis.  This is described at
https://docs.google.com/document/d/1gCsFxXamW8RRvOe5hECz98Ftk-tcRRJcDFANj2VwCB0/view
.  This is work in progress and it remains to be seen whether it will
be advantageous in reality.

Ian



Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread Brian Hatfield
This is a really great response. I appreciated the high-level overview in
one place like this, and I feel like I learned something. Thanks for
writing it up, Ian.

On Tue, May 16, 2017 at 10:05 AM, Ian Lance Taylor  wrote:

> On Tue, May 16, 2017 at 2:01 AM,   wrote:
> >
> > Generational and Compact gc have already been thought best practice. But
> > golang doesn't adopt it. Who can tell me the reason?
>
> This has been discussed in the past.
>
> Ignoring details, the basic advantages of a compacting GC are 1) avoid
> fragmentation, and 2) permit the use of a simple and efficient bump
> allocator.  However, modern memory allocation algorithms, like the
> tcmalloc-based approach used by the Go runtime, have essentially no
> fragmentation issues.  And while a bump allocator can be simple and
> efficient for a single-threaded program, in a multi-threaded program
> like Go it requires locks.  In general it's likely to be more
> efficient to allocate memory using a set of per-thread caches, and at
> that point you've lost the advantages of a bump allocator.  So I would
> assert that, in general, with many caveats, there is no real advantage
> to using a compacting memory allocator for a multi-threaded program
> today.  I don't mean that there is anything wrong with using a
> compacting allocator, I'm just claiming that it doesn't bring any big
> advantage over a non-compacting one.
>
> Now let's consider a generational GC.  The point of a generational GC
> relies on the generational hypothesis: that most values allocated in a
> program are quickly unused, so there is an advantage for the GC to
> spend more time looking at recently allocated objects.  Here Go
> differs from many garbage collected languages in that many objects are
> allocated directly on the program stack.  The Go compiler uses escape
> analysis to find objects whose lifetime is known at compile time, and
> allocates them on the stack rather than in garbage collected memory.
> So in general, in Go, compared to other languages, a larger percentage
> of the quickly-unused values that a generational GC looks for are
> never allocated in GC memory in the first place.  So a generational GC
> would likely bring less advantage to Go than it does for other
> languages.
>
> More subtly, the implicit point of most generational GC
> implementations is to reduce the amount of time that a program pauses
> for garbage collection.  By looking at only the youngest generation
> during a pause, the pause is kept short.  However, Go uses a
> concurrent garbage collector, and in Go the pause time is independent
> of the size of the youngest generation, or of any generation.  Go is
> basically assuming that in a multi-threaded program it is better
> overall to spend slightly more total CPU time on GC, by running GC in
> parallel on a different core, rather than to minimize GC time but to
> pause overall program execution for longer.
>
> All that said, generational GC could perhaps still bring significant
> value to Go, by reducing the amount of work the GC has to do even in
> parallel.  It's a hypothesis that needs to be tested.  Current GC work
> in Go is actually looking closely at a related but different
> hypothesis: that Go programs may tend to allocate memory on a
> per-request basis.  This is described at
> https://docs.google.com/document/d/1gCsFxXamW8RRvOe5hECz98Ftk-
> tcRRJcDFANj2VwCB0/view
> .  This is work in progress and it remains to be seen whether it will
> be advantageous in reality.
>
> Ian
>
> --
> You received this message because you are subscribed to the Google Groups
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to golang-nuts+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



[go-nuts] go generate, go run, and cross compilation with GOOS

2017-05-16 Thread florian
Hi everyone,

Recently, I replaced a script we used for code generation with a small
Go program with the same functionality. The reason for this change was that
Windows failed to run the script: it can't interpret the script's
shebang and tried to treat it like a compiled executable. The change solved
that problem nicely. However, it seems to have broken cross compilation.
When setting GOOS=linux on a Windows computer, that variable seems to get
passed to go generate and from there to go run. go run then compiles the
program for Linux (on the Windows computer) and tries to run it, which
fails for obvious reasons.

This doesn't sound like an extremely unusual situation, so I was puzzled
that I couldn't find any information online on how best to handle it.
If you have any hints, I'd be glad to take them. Resetting the environment
in a script is no solution, as the point of the whole change was not to use
a script, and go generate doesn't provide any way to set environment
variables.

Best regards,
 Florian



Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread Zellyn
Thanks for the enlightening and interesting reply, Ian.

One quick question: do you have a link or a short description of why 
“modern memory allocation algorithms, like the tcmalloc-based approach used 
by the Go runtime, have essentially no fragmentation issues”?

I was curious, but a quick search for [tcmalloc fragmentation] yielded 
mostly people struggling with fragmentation issues when using tcmalloc.

Zellyn



[go-nuts] Prometheus - NATS Exporter

2017-05-16 Thread Brian Flannery
For those interested, the NATS.io team have released a Prometheus exporter, 
you can find it here: 

https://github.com/nats-io/prometheus-nats-exporter

Via the readme: 

*The Prometheus NATS Exporter consists of both a package and an
application that exports NATS server metrics to Prometheus for monitoring.
The exporter aggregates metrics from the server monitoring endpoints you
choose (varz, connz, subsz, routez) across any number of monitored NATS
servers into a single Prometheus Exporter endpoint.*






Re: [go-nuts] exec.CommandContext wrong behavior

2017-05-16 Thread Peter Waller
On 16 May 2017 at 12:56,  wrote:
>
> [... kill does not kill all the processes ...] Is this normal behavior? It
> seems strange.
>

One moment of enlightenment for me was to discover why CTRL-C at the
terminal kills all of the processes, while kill(pid, SIG{whatever}) to the
parent process does not.

It's that CTRL-C doesn't just signal the parent process! It signals all
processes in the process group of the session leader process whose
controlling terminal is the current terminal. Phew, what a mouthful - I'm
not even sure I stated that correctly. But anyway, CTRL-C doesn't do kill.
It does kill_pgrp.

Anyway, here's where it happens in Linux:

https://github.com/torvalds/linux/blob/a95cfad947d5f40cfbf9ad3019575aac1d8ac7a6/drivers/tty/n_tty.c#L1258

and ultimately... https://github.com/torvalds/linux/blob/a95cfad947d5f40cfbf9ad3019575aac1d8ac7a6/drivers/tty/n_tty.c#L1085

Some more docs:

http://www.informit.com/articles/article.aspx?p=397655&seqNum=6

I think this can be surprising, and it's not so obvious at first glance.

Anyway, one implication of this is that if your process escapes the process
group, then CTRL-C won't kill it. Also, Unix processes may quit for other
reasons than receiving a signal, for example if a file descriptor they're
reading from is closed (which might also happen when another process dies).
Food for thought.



Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread Eddie Ringle
On Tue, May 16, 2017 at 9:06 AM Ian Lance Taylor  wrote:

> On Tue, May 16, 2017 at 2:01 AM,   wrote:
> >
> > Generational and Compact gc have already been thought best practice. But
> > golang doesn't adopt it. Who can tell me the reason?
>
> This has been discussed in the past.
>

Perhaps, then, the information from this write-up should be added to the
FAQ on golang.org?

- Eddie



Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread rlh via golang-nuts
The Johnstone / Wilson paper "The memory fragmentation problem: solved?" 
[1] is the original source.

Modern malloc systems, including Google's TCMalloc, Hoard [2], and Intel's
Scalable Malloc (aka McRT-Malloc [3]), all owe much to that paper and, along
with other memory managers, segregate objects by size. Many languages,
most notably C/C++, use these fragmentation-avoiding memory managers to
build large systems without the need for copy compaction.

[1] Mark S. Johnstone and Paul R. Wilson. 1998. The memory fragmentation 
problem: solved?. In Proceedings of the 1st international symposium on 
Memory management (ISMM '98). ACM, New York, NY, USA, 26-36. 
DOI=http://dx.doi.org/10.1145/286860.286864

[2] Emery D. Berger, Kathryn S. McKinley, Robert D. Blumofe, and Paul R. 
Wilson. 2000. Hoard: a scalable memory allocator for multithreaded 
applications. SIGPLAN Not. 35, 11 (November 2000), 117-128. 
DOI=http://dx.doi.org/10.1145/356989.357000

[3] Richard L. Hudson, Bratin Saha, Ali-Reza Adl-Tabatabai, and Benjamin C. 
Hertzberg. 2006. McRT-Malloc: a scalable transactional memory allocator. In 
Proceedings of the 5th international symposium on Memory management (ISMM 
'06). ACM, New York, NY, USA, 74-83. 
DOI=http://dx.doi.org/10.1145/1133956.1133967

On Tuesday, May 16, 2017 at 12:48:38 PM UTC-4, Zellyn wrote:
>
> Thanks for the enlightening and interesting reply, Ian.
>
> One quick question: do you have a link or a short description of why 
> “modern memory allocation algorithms, like the tcmalloc-based approach used 
> by the Go runtime, have essentially no fragmentation issues”?
>
> I was curious, but a quick search for [tcmalloc fragmentation] yielded 
> mostly people struggling with fragmentation issues when using tcmalloc.
>
> Zellyn
>
>



Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread 'David Chase' via golang-nuts
See also: Norman R. Nielsen. Dynamic memory allocation in computer 
simulation. Communications of the ACM, 20(11):864–873, November 1977.
This was the first place I saw this result.  A later improvement was 
realizing this allowed headerless BIBOP organization of allocated memory.

I think the first malloc/free-compatible GC/allocator that used BIBOP like 
this was the Boehm-Weiser conservative collector:
Hans Boehm and Mark Weiser. Garbage collection in an uncooperative 
environment. Software, Practice and Experience, pages 807–820, September 
1988.
I used the technique in a performance-tuned malloc at Sun back in the early 
90s, and its fragmentation was entirely acceptable;
not as good as Cartesian trees (best non-compacting fragmentation at the 
time) but not much worse, and far faster.

On Tuesday, May 16, 2017 at 3:26:42 PM UTC-4, Rick Hudson wrote:
>
> The Johnstone / Wilson paper "The memory fragmentation problem: solved?" 
> [1] is the original source.
>
> Modern malloc systems including Google's TCMalloc, Hoard [2], and Intel's 
> Scalable Malloc (aka Mcrt Malloc [3]) all owe much to that paper and along 
> with other memory managers all segregate objects by size. Many languages, 
> most notable C/C++, use these fragmentation avoidance memory managers to 
> build large system without the need for copy compaction.
>
> [1] Mark S. Johnstone and Paul R. Wilson. 1998. The memory fragmentation 
> problem: solved?. In Proceedings of the 1st international symposium on 
> Memory management (ISMM '98). ACM, New York, NY, USA, 26-36. DOI=
> http://dx.doi.org/10.1145/286860.286864
>
> [2] Emery D. Berger, Kathryn S. McKinley, Robert D. Blumofe, and Paul R. 
> Wilson. 2000. Hoard: a scalable memory allocator for multithreaded 
> applications. SIGPLAN Not. 35, 11 (November 2000), 117-128. DOI=
> http://dx.doi.org/10.1145/356989.357000
>
> [3] Richard L. Hudson, Bratin Saha, Ali-Reza Adl-Tabatabai, and Benjamin 
> C. Hertzberg. 2006. McRT-Malloc: a scalable transactional memory allocator. 
> In Proceedings of the 5th international symposium on Memory management 
> (ISMM '06). ACM, New York, NY, USA, 74-83. DOI=
> http://dx.doi.org/10.1145/1133956.1133967
>
> On Tuesday, May 16, 2017 at 12:48:38 PM UTC-4, Zellyn wrote:
>>
>> Thanks for the enlightening and interesting reply, Ian.
>>
>> One quick question: do you have a link or a short description of why 
>> “modern memory allocation algorithms, like the tcmalloc-based approach used 
>> by the Go runtime, have essentially no fragmentation issues”?
>>
>> I was curious, but a quick search for [tcmalloc fragmentation] yielded 
>> mostly people struggling with fragmentation issues when using tcmalloc.
>>
>> Zellyn
>>
>>



Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread Zellyn
What a fantastic discussion. Thanks so much, folks!



[go-nuts] recommended folder structure convention for a cgo static library?

2017-05-16 Thread Helin Wang


I have a Go package foo; it is in $GOPATH/src/github.com/org/foo/.


And I want to build a static library so that C code can consume the Go
library. But in order to do so, the cgo wrapper file wrapper.go needs to use
package main (see https://golang.org/cmd/go/#hdr-Description_of_build_modes ,
"-buildmode=c-archive"), and it's recommended not to put different packages
in the same folder.


So where is the best place to put the wrapper.go?


I know there is a convention to put the source files for different CLI
commands into the $GOPATH/src/github.com/org/foo/cmd/ folder,
e.g. $GOPATH/src/github.com/org/foo/cmd/cli_1.


So is $GOPATH/src/github.com/org/foo/lib/foo a good place to put the cgo 
wrapper file for library foo?
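
For reference, a minimal c-archive wrapper sketch (the exported function is a placeholder; the real wrapper would forward into github.com/org/foo):

// Package main is required by -buildmode=c-archive; this file only holds the
// exported C entry points that would forward to the real library package.
package main

import "C"

//export FooAdd
func FooAdd(a, b C.int) C.int {
	// In the real wrapper this would call into github.com/org/foo.
	return a + b
}

// main is required by the build mode but is never called from C.
func main() {}

Building it with `go build -buildmode=c-archive -o libfoo.a` (from whichever directory you settle on) produces both libfoo.a and a libfoo.h header for the C side.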



Re: [go-nuts] go generate, go run, and cross compilation with GOOS

2017-05-16 Thread Ian Lance Taylor
On Tue, May 16, 2017 at 7:39 AM,   wrote:
>
> recently, I replaced a script which we used for code generation by a small
> go program with the same functionality. The reason for this change was that
> windows failed running the script because it can't interprete the script's
> shebang and tried to treat it like a compiled executable. The change solved
> that problem nicely. However, this seems to have broken cross compilation.
> When setting GOOS=linux on a windows computer, that argument seems to get
> passed to go generate and from there to go run. go run the compiles the
> program for linux (on the windows computer) and tries to run it, which fails
> for obvious reasons.
>
> This doesn't sound like an extremely unusual situation, so I was puzzled
> that I couldn't find any information online on how to best handle this
> specific situation. If you have any hints, I'd be glad to take them.
> Resetting the environment in a script is no solution, as the point of the
> whole change is not to use a script. go generate doesn't provide any way to
> set environment variables.

The general expectation is that you will write //go:generate comments
that generate the same output regardless of GOOS.  Then you will run
`go generate` once, or whenever the input files change, but in general
not on every build.  So you would run `go generate` without setting
GOOS and then run `GOOS=linux go build` to build your actual code.
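
For example, a sketch (mypkg and gen.go are placeholder names; gen.go would be the generator, a package main file guarded by a build-ignore tag so it is not compiled into the package):

// Package mypkg needs a generated file; gen.go is the generator program.
package mypkg

//go:generate go run gen.go

You run `go generate ./...` once with the native GOOS whenever the inputs change, keep the generated files, and then cross-compile separately with `GOOS=linux go build ./...`.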

Ian



Re: [go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Manish Rai Jain
So, I fixed the rand and removed the atomics usage (link in my original
post).

Setting GOMAXPROCS definitely helped a lot. And now it seems to make sense,
because (the following command in) fio spawns 16 threads; and GOMAXPROCS
would do the same thing. However, the numbers are still quite a bit off.

I realized fio seems to overestimate, and my Go program seems to
underestimate, so we used sar to determine the IOPS.

$ fio --name=randread --ioengine=psync --iodepth=32 --rw=randread --bs=4k
--direct=0 --size=2G --numjobs=16 --runtime=120 --group_reporting
Gives around 62K, tested via sar -d 1 -p, while

$ go build . && GOMAXPROCS=16 ./randread --dir ~/diskfio --jobs 16 --num
200 --mode 1
Gives around 44K, via sar. The number of cores on my machine is 4.

Note that this is way better than the earlier 20K with GOMAXPROCS = number
of cores, but still leaves much to be desired.

On Tue, May 16, 2017 at 11:36 PM, Ian Lance Taylor  wrote:

> On Tue, May 16, 2017 at 4:59 AM, Manish Rai Jain 
> wrote:
> >
> > 3 is slower than 2 (of course). But, 2 is never able to achieve the IOPS
> > that Fio can achieve. I've tried other things, to no luck. What I notice
> is
> > that Go and Fio are close to each other as long as number of Goroutines
> is
> > <= number of cores. Once you exceed cores, Go stays put, while Fio IOPS
> > keeps on improving, until it reaches SSD thresholds.
>
> One thing I notice about your program is that each goroutine is
> calling rand.Intn and rand.Int63n.  Those functions acquire and
> release a lock, so that single lock is being contested by every
> goroutine.  That's an unfortunate and unnecessary slowdown.  Give each
> goroutine its own source of pseudo-random numbers by using rand.New.
>
> You also have a point of contention on the local variable i, which you
> are manipulating using atomic functions.  It would be cheaper to give
> each goroutine a number of operations to do rather than to compute
> that dynamically using a contended address.
>
> I'll also note that if a program that should be I/O bound shows a
> behavior change when the number of parallel goroutines exceeds the
> number of CPUs, then it might be interesting to try setting GOMAXPROCS
> to be higher.  I don't know what effect that would have here, but it's
> worth checking.
>
> Ian
>



Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread canjian456
Thanks for your patient and wonderful reply. 

On Tuesday, May 16, 2017 at 10:06:25 PM UTC+8, Ian Lance Taylor wrote:
>
> On Tue, May 16, 2017 at 2:01 AM,  > wrote: 
> > 
> > Generational and Compact gc have already been thought best practice. But 
> > golang doesn't adopt it. Who can tell me the reason? 
>
> This has been discussed in the past. 
>
> Ignoring details, the basic advantages of a compacting GC are 1) avoid 
> fragmentation, and 2) permit the use of a simple and efficient bump 
> allocator.  However, modern memory allocation algorithms, like the 
> tcmalloc-based approach used by the Go runtime, have essentially no 
> fragmentation issues.  And while a bump allocator can be simple and 
> efficient for a single-threaded program, in a multi-threaded program 
> like Go it requires locks.  In general it's likely to be more 
> efficient to allocate memory using a set of per-thread caches, and at 
> that point you've lost the advantages of a bump allocator.  So I would 
> assert that, in general, with many caveats, there is no real advantage 
> to using a compacting memory allocator for a multi-threaded program 
> today.  I don't mean that there is anything wrong with using a 
> compacting allocator, I'm just claiming that it doesn't bring any big 
> advantage over a non-compacting one. 
>
> Now let's consider a generational GC.  The point of a generational GC 
> relies on the generational hypothesis: that most values allocated in a 
> program are quickly unused, so there is an advantage for the GC to 
> spend more time looking at recently allocated objects.  Here Go 
> differs from many garbage collected languages in that many objects are 
> allocated directly on the program stack.  The Go compiler uses escape 
> analysis to find objects whose lifetime is known at compile time, and 
> allocates them on the stack rather than in garbage collected memory. 
> So in general, in Go, compared to other languages, a larger percentage 
> of the quickly-unused values that a generational GC looks for are 
> never allocated in GC memory in the first place.  So a generational GC 
> would likely bring less advantage to Go than it does for other 
> languages. 
>
> More subtly, the implicit point of most generational GC 
> implementations is to reduce the amount of time that a program pauses 
> for garbage collection.  By looking at only the youngest generation 
> during a pause, the pause is kept short.  However, Go uses a 
> concurrent garbage collector, and in Go the pause time is independent 
> of the size of the youngest generation, or of any generation.  Go is 
> basically assuming that in a multi-threaded program it is better 
> overall to spend slightly more total CPU time on GC, by running GC in 
> parallel on a different core, rather than to minimize GC time but to 
> pause overall program execution for longer. 
>
> All that said, generational GC could perhaps still bring significant 
> value to Go, by reducing the amount of work the GC has to do even in 
> parallel.  It's a hypothesis that needs to be tested.  Current GC work 
> in Go is actually looking closely at a related but different 
> hypothesis: that Go programs may tend to allocate memory on a 
> per-request basis.  This is described at 
>
> https://docs.google.com/document/d/1gCsFxXamW8RRvOe5hECz98Ftk-tcRRJcDFANj2VwCB0/view
>  
> .  This is work in progress and it remains to be seen whether it will 
> be advantageous in reality. 
>
> Ian 
>





Re: [go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Manish Rai Jain
On further thought about GOMAXPROCS, and its impact on throughput:

A file::pread would block the OS thread. Go runs one OS thread per core.
So, if an OS thread is blocked, no goroutines can be scheduled on this
thread, therefore even pure CPU operations can't be run. This would lead to
core wastage.

This is probably the reason why increasing GOMAXPROCS improves throughput,
and running any number of goroutines >= GOMAXPROCS has little impact on
anything. The underlying OS threads are already blocked, so goroutines
can't do much.

If this logic is valid, then a complex system that is doing many
random reads while also performing calculations (like Dgraph) would
suffer, even if we set GOMAXPROCS to a factor more than the number of cores.

Ideally, the disk reads could be happening via libaio, causing the OS
threads to not block, so all goroutines can make progress, increasing the
number of read requests that can be made concurrently. This would then also
ensure that one doesn't need to set GOMAXPROCS to a value greater than
number of cores to achieve higher throughput.


On Wed, May 17, 2017 at 10:38 AM, Manish Rai Jain 
wrote:

> So, I fixed the rand and removed the atomics usage (link in my original
> post).
>
> Setting GOMAXPROCS definitely helped a lot. And now it seems to make
> sense, because (the following command in) fio spawns 16 threads; and
> GOMAXPROCS would do the same thing. However, the numbers are still quite a
> bit off.
>
> I realized fio seems to overestimate, and my Go program seems to
> underestimate, so we used sar to determine the IOPS.
>
> $ fio --name=randread --ioengine=psync --iodepth=32 --rw=randread --bs=4k
> --direct=0 --size=2G --numjobs=16 --runtime=120 --group_reporting
> Gives around 62K, tested via sar -d 1 -p, while
>
> $ go build . && GOMAXPROCS=16 ./randread --dir ~/diskfio --jobs 16 --num
> 200 --mode 1
> Gives around 44K, via sar. Number of cores on my machine are 4.
>
> Note that this is way better than the earlier 20K with GOMAXPROCS = number
> of cores, but still leaves much to be desired.



Re: [go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Dave Cheney
> So, if an OS thread is blocked, no goroutines can be scheduled on this
> thread, therefore even pure CPU operations can't be run.

The runtime will spawn a new thread to replace the one that is blocked.

On Wednesday, 17 May 2017 13:05:49 UTC+10, Manish Rai Jain wrote:
>
> On further thought about GOMAXPROCS, and its impact on throughput:
>
> A file::pread would block the OS thread. Go runs one OS thread per core. 
> So, if an OS thread is blocked, no goroutines can be scheduled on this 
> thread, therefore even pure CPU operations can't be run. This would lead to 
> core wastage.
>
> This is probably the reason why increasing GOMAXPROCS improves throughput, 
> and running any number of goroutines >= GOMAXPROCS has little impact on 
> anything. The underlying OS threads are already blocked, so goroutines 
> can't do much.
>
> If this logic is valid, then in a complex system, which is doing many 
> random reads, while also performing calculations (like Dgraph) would 
> suffer; even if we set GOMAXPROCS to a factor more than number of cores.
>
> Ideally, the disk reads could be happening via libaio, causing the OS 
> threads to not block, so all goroutines can make progress, increasing the 
> number of read requests that can be made concurrently. This would then also 
> ensure that one doesn't need to set GOMAXPROCS to a value greater than 
> number of cores to achieve higher throughput.
>



Re: [go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Manish Rai Jain
> The runtime will spawn a new thread to replace the one that is blocked.

Realized that after writing my last mail. And that actually explains some
of the other crashes we saw about "too many threads" when we run tens of
thousands of goroutines to do these reads, one goroutine per read.

It is obviously a lot more expensive to spawn a new OS thread. It seems like
this exact same problem was already solved for the network via the netpoller
(https://morsmachine.dk/netpoller). Blocking OS threads for disk reads made
sense for HDDs, which could only do 200 IOPS; for SSDs we'd need a solution
based on async I/O.

On Wed, May 17, 2017 at 2:01 PM, Dave Cheney  wrote:

> > So, if an OS thread is blocked, no goroutines can be scheduled on this
> > thread, therefore even pure CPU operations can't be run.
>
> The runtime will spawn a new thread to replace the one that is blocked.
>
> On Wednesday, 17 May 2017 13:05:49 UTC+10, Manish Rai Jain wrote:
>>
>> On further thought about GOMAXPROCS, and its impact on throughput:
>>
>> A file::pread would block the OS thread. Go runs one OS thread per core.
>> So, if an OS thread is blocked, no goroutines can be scheduled on this
>> thread, therefore even pure CPU operations can't be run. This would lead to
>> core wastage.
>>
>> This is probably the reason why increasing GOMAXPROCS improves
>> throughput, and running any number of goroutines >= GOMAXPROCS has little
>> impact on anything. The underlying OS threads are already blocked, so
>> goroutines can't do much.
>>
>> If this logic is valid, then in a complex system, which is doing many
>> random reads, while also performing calculations (like Dgraph) would
>> suffer; even if we set GOMAXPROCS to a factor more than number of cores.
>>
>> Ideally, the disk reads could be happening via libaio, causing the OS
>> threads to not block, so all goroutines can make progress, increasing the
>> number of read requests that can be made concurrently. This would then also
>> ensure that one doesn't need to set GOMAXPROCS to a value greater than
>> number of cores to achieve higher throughput.
>>
>>
>> On Wed, May 17, 2017 at 10:38 AM, Manish Rai Jain 
>> wrote:
>>
>>> So, I fixed the rand and removed the atomics usage (link in my original
>>> post).
>>>
>>> Setting GOMAXPROCS definitely helped a lot. And now it seems to make
>>> sense, because (the following command in) fio spawns 16 threads; and
>>> GOMAXPROCS would do the same thing. However, the numbers are still quite a
>>> bit off.
>>>
>>> I realized fio seems to overestimate, and my Go program seems to
>>> underestimate, so we used sar to determine the IOPS.
>>>
>>> $ fio --name=randread --ioengine=psync --iodepth=32 --rw=randread
>>> --bs=4k --direct=0 --size=2G --numjobs=16 --runtime=120 --group_reporting
>>> Gives around 62K, tested via sar -d 1 -p, while
>>>
>>> $ go build . && GOMAXPROCS=16 ./randread --dir ~/diskfio --jobs 16 --num
>>> 200 --mode 1
>>> Gives around 44K, via sar. The number of cores on my machine is 4.
>>>
>>> Note that this is way better than the earlier 20K with GOMAXPROCS =
>>> number of cores, but still leaves much to be desired.
>>>
>>> On Tue, May 16, 2017 at 11:36 PM, Ian Lance Taylor 
>>> wrote:
>>>
 On Tue, May 16, 2017 at 4:59 AM, Manish Rai Jain 
 wrote:
 >
 > 3 is slower than 2 (of course). But, 2 is never able to achieve the
 IOPS
 > that Fio can achieve. I've tried other things, to no luck. What I
 notice is
 > that Go and Fio are close to each other as long as number of
 Goroutines is
 > <= number of cores. Once you exceed cores, Go stays put, while Fio
 IOPS
 > keeps on improving, until it reaches SSD thresholds.

 One thing I notice about your program is that each goroutine is
 calling rand.Intn and rand.Int63n.  Those functions acquire and
 release a lock, so that single lock is being contested by every
 goroutine.  That's an unfortunate and unnecessary slowdown.  Give each
 goroutine its own source of pseudo-random numbers by using rand.New.

 You also have a point of contention on the local variable i, which you
 are manipulating using atomic functions.  It would be cheaper to give
 each goroutine a number of operations to do rather than to compute
 that dynamically using a contended address.

 I'll also note that if a program that should be I/O bound shows a
 behavior change when the number of parallel goroutines exceeds the
 number of CPUs, then it might be interesting to try setting GOMAXPROCS
 to be higher.  I don't know what effect that would have here, but it's
 worth checking.

 Ian

>>>
>>>

[go-nuts] beginner seeks peer review

2017-05-16 Thread kbfastcat
https://github.com/kbfastcat/nrmetrics

if you have any helpful suggestions, I'd appreciate it...

Thanks



Re: [go-nuts] Re: go build gives "import .a: not a package file" error

2017-05-16 Thread ajinkyaghorpade

This solution worked for me. Kind of late to the party though. :) 

On Thursday, 21 August 2014 07:42:52 UTC-4, Dave Cheney wrote:
>
> Just rm -rf $GOPATH/pkg and you should be fine. 
> On 21 Aug 2014 21:38, "mark mellar" wrote:
>
>> Thanks for your reply Dave. Unfortunately go install is giving me the 
>> same issues...
>>
>> > go install -tags=lsf9 uniSched...
>> # uniSched/cmd/scrap
>> uniSched/cmd/scrap/scrap.go:4: import 
>> /work/scheduler/trunk/work/pkg/linux_amd64/gocommons/lsfcommon.a: not a 
>> package file
>> # uniSched/queuemonitor
>> uniSched/queuemonitor/queuemonitor.go:5: import 
>> /work/scheduler/trunk/work/pkg/linux_amd64/gocommons/lsfcommon.a: not a 
>> package file
>> # golsf/lsb
>> golsf/lsb/jobinfo.go:10: import 
>> /work/scheduler/trunk/work/pkg/linux_amd64/gocommons/lsfcommon.a: not a 
>> package file
>>
>>
>



[go-nuts] Re: Go 1.8.1 is released

2017-05-16 Thread winlin
Great work!

On Saturday, April 8, 2017 at 2:02:49 AM UTC+8, Chris Broadfoot wrote:
>
> Hi gophers,
>
> We have just released Go version 1.8.1, a minor point release.
>
> This release includes fixes to the compiler, runtime, documentation, go 
> command, and the
> crypto/tls, encoding/xml, image/png, net, net/http, reflect, 
> text/template, and time packages. 
> https://golang.org/doc/devel/release.html#go1.8.minor
>
> You can download binary and source distributions from the Go web site:
> https://golang.org/dl/
>
> To compile from source using a Git clone, update to the release with "git 
> checkout go1.8.1" and build as usual.
>
> Thanks to everyone who contributed to the release.
>
> Chris
>



[go-nuts] Re: Delve v1.0.0-rc.1 release

2017-05-16 Thread kbfastcat
:clap:

On Monday, May 8, 2017 at 11:42:43 AM UTC-7, Derek Parker wrote:
>
> Hey all,
>
> Just wanted to make some noise about the latest Delve release, v1.0.0-rc.1.
>
> This is a particularly big release for us, and includes a bunch of fixes, 
> improvements and new features. I'll break down the new features shortly, 
> but just wanted to call out this is an exciting milestone for the project, 
> and we're excited to be driving towards a 1.0.0 release. What does that 
> mean to you as a user? Well, not much should change, the project has been 
> pretty stable, we have had API compatibility guarantees for a while now, 
> etc, so this release is mostly symbolic. It does not mean the project is 
> feature complete, we will be working hard to continue to add new features, 
> support for more systems, and overall improvements as usual.
>
> For a full list of changes, please check out the changelog, but I wanted to 
> highlight some interesting features:
>
> Ability to swap low-level back ends. This means you can select either the 
> native Delve back end, gdbserver, lldb-server, or Mozilla RR. The most 
> exciting of these, in my opinion, is the ability to use the Mozilla RR (
> http://rr-project.org/) project as a back end. This allows for record & 
> replay deterministic debugging, and allows you to combine the power of 
> Delve and RR into a very useful debugging tool for Go.
>
> Lastly, I would just like to say thanks to the community overall for the 
> support of the project and for all the patches, bug reports submitted, editor 
> integrations, etc., and to co-maintainer Alessandro for all the help, fixes, 
> improvements, reviews and so on.
>
> Please check out this release, file bugs, and look out for the v1.0.0 
> release within the coming weeks!
>



[go-nuts] Re: Stop HTTP Server with Context cancel

2017-05-16 Thread winlin
Will this be OK?

package main

import (
    "context"
    "fmt"
    "net/http"
    "sync"
)

// ListenAndServe runs srv until ctx is cancelled or the server stops on its own.
func ListenAndServe(ctx context.Context, srv *http.Server) error {
    ctx, cancel := context.WithCancel(ctx)
    defer cancel()

    wg := sync.WaitGroup{}
    defer wg.Wait()

    wg.Add(1)
    go func() {
        defer wg.Done()
        defer cancel() // unblocks the wait below if the server exits on its own
        err := srv.ListenAndServe()
        fmt.Println("Server err is", err)
    }()

    <-ctx.Done()
    srv.Close()

    return ctx.Err()
}

When the HTTP server fails and exits, the deferred cancel unblocks the wait on 
ctx.Done().
When ctx is cancelled, srv.Close() is called to unblock the server.
Does it work?
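
For completeness, a hypothetical way to drive it (this assumes the helper above
lives in the same package, plus "os" and "os/signal" added to the imports; the
address is just a placeholder):

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    go func() {
        sig := make(chan os.Signal, 1)
        signal.Notify(sig, os.Interrupt)
        <-sig // Ctrl-C cancels the context, which shuts the server down
        cancel()
    }()

    srv := &http.Server{Addr: ":8080"}
    fmt.Println("stopped:", ListenAndServe(ctx, srv))
}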

On Wednesday, April 5, 2017 at 2:02:16 AM UTC+8, Pierre Durand wrote:
>
> Hello
>
> I wrote a small helper to stop an HTTP Server when a Context is canceled.
> https://play.golang.org/p/Gl8APynVdh
>
> What do you think ?
> Is it OK to use context cancellation for stopping long running functions ?
>



[go-nuts] humble file watcher

2017-05-16 Thread Igor Kim
Anyone here can do this... But if you really need a simple file watcher.

$ watchfile index.html diff index.html index_old.html


This example command watches the file index.html for changes. Once a change is 
detected, it runs the command diff index.html index_old.html. The command to 
run can be anything.

It is barebones.
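
For anyone curious how something like this can work, here is a rough polling
sketch (my own illustration with made-up behaviour, not the actual watchfile
implementation, which may well use inotify/fsnotify instead of polling):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    if len(os.Args) < 3 {
        fmt.Println("usage: watchfile <file> <cmd> [args...]")
        os.Exit(1)
    }
    file, cmd, args := os.Args[1], os.Args[2], os.Args[3:]

    var lastMod time.Time
    var lastSize int64
    for {
        // Poll the file's size and mtime; when either changes, run the command.
        // (The very first iteration also fires, since the zero values differ.)
        if fi, err := os.Stat(file); err == nil {
            if !fi.ModTime().Equal(lastMod) || fi.Size() != lastSize {
                lastMod, lastSize = fi.ModTime(), fi.Size()
                out, _ := exec.Command(cmd, args...).CombinedOutput()
                fmt.Print(string(out))
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
}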

github 






Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread leventov . ru
It's not clear why using "a set of per-thread caches" means you "lose the 
advantages of a bump allocator". At any point in time, a single goroutine is 
executing on a thread. The points when a goroutine gains and loses the execution 
context of a thread, and when it is transferred from one thread to another, are 
known to the runtime. At those points a goroutine could cache (e.g. in a register) 
the current thread's bump allocation address and use it for very fast bump 
allocation during execution.



Re: [go-nuts] Why golang garbage-collector not implement Generational and Compact gc?

2017-05-16 Thread Ian Lance Taylor
On Tue, May 16, 2017 at 8:27 PM,   wrote:
>
> It's not clear why using "a set of per-thread caches" means you "lose the 
> advantages of a bump allocator". At any point in time, a single goroutine is 
> executing on a thread. The points when a goroutine gains and loses the 
> execution context of a thread, and when it is transferred from one thread to 
> another, are known to the runtime. At those points a goroutine could cache 
> (e.g. in a register) the current thread's bump allocation address and use it 
> for very fast bump allocation during execution.

Fair enough, although it's considerably more complicated, as you have
to allocate a chunk of address space for each thread, you have to
replenish those chunks, you go back to worrying about fragmentation,
etc.

Ian
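
To make the terminology a bit more concrete, here is a toy per-thread bump
allocator (purely illustrative; this is not how the Go runtime's allocator is
implemented):

package bumpalloc // toy illustration only

// chunk is one thread-local slab. The fast path is a single cursor bump; when
// the chunk runs out, the thread has to fetch a fresh chunk from some central
// pool, which is where the replenishment bookkeeping (and the fragmentation
// Ian mentions) comes in.
type chunk struct {
    buf  []byte
    next int
}

func (c *chunk) alloc(n int) ([]byte, bool) {
    if c.next+n > len(c.buf) {
        return nil, false // exhausted: the caller must replenish the chunk
    }
    p := c.buf[c.next : c.next+n]
    c.next += n
    return p, true
}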



Re: [go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Ian Lance Taylor
On Tue, May 16, 2017 at 8:04 PM, Manish Rai Jain  wrote:
>
> Ideally, the disk reads could be happening via libaio, causing the OS
> threads to not block, so all goroutines can make progress, increasing the
> number of read requests that can be made concurrently. This would then also
> ensure that one doesn't need to set GOMAXPROCS to a value greater than
> number of cores to achieve higher throughput.

libaio sounds good on paper, but at least on GNU/Linux it's all in
user space.  In effect it does exactly what the Go runtime does
already: it hands file I/O operations off to separate threads.  The Go
runtime would gain nothing at all by switching to using libaio.

Ian



Re: [go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Ian Lance Taylor
On Tue, May 16, 2017 at 9:26 PM, Manish Rai Jain  wrote:
>> The runtime will spawn a new thread to replace the one that is blocked.
>
> Realized that after writing my last mail. And that actually explains some of
> the other crashes we saw, about "too many threads", if we run tens of
> thousands of goroutines to do these reads, one goroutine per read.
>
> It is obviously lot more expensive to spawn a new OS thread. It seems like
> this exact same problem was already solved for network via netpoller
> (https://morsmachine.dk/netpoller). Blocking OS threads for disk reads made
> sense for HDDs, which could only do 200 IOPS; for SSDs we'd need a solution
> based on async I/O.

Note that in the upcoming Go 1.9 release we now use the netpoller for
the os package as well.  However, it's not as effective as one would
hope, because on GNU/Linux you can't use epoll for disk files.  It
mainly helps with pipes.

Ian



Re: [go-nuts] Realizing SSD random read IOPS

2017-05-16 Thread Dave Cheney
Rather than guessing what is going on, I think it's time to break out the
profiling tools, Manish.

On Wed, 17 May 2017, 15:23 David Klempner  wrote:

>
> On May 16, 2017 22:03, "Ian Lance Taylor"  wrote:
>
> On Tue, May 16, 2017 at 9:26 PM, Manish Rai Jain 
> wrote:
> >> The runtime will spawn a new thread to replace the one that is blocked.
> >
> > Realized that after writing my last mail. And that actually explains
> some of
> > the other crashes we saw, about "too many threads", if we run tens of
> > thousands of goroutines to do these reads, one goroutine per read.
> >
> > It is obviously lot more expensive to spawn a new OS thread. It seems
> like
> > this exact same problem was already solved for network via netpoller
> > (https://morsmachine.dk/netpoller). Blocking OS threads for disk reads
> made
> > sense for HDDs, which could only do 200 IOPS; for SSDs we'd need a
> solution
> > based on async I/O.
>
> Note that in the upcoming Go 1.9 release we now use the netpoller for
> the os package as well.  However, it's not as effective as one would
> hope, because on GNU/Linux you can't use epoll for disk files.
>
>
> There's a not very well documented API to make AIO completions kick an
> eventfd.
>
> It
> mainly helps with pipes.
>
>
> Ian
>
>
>
