[go-nuts] Re: profiling webserver with pprof and router middleware

2017-11-29 Thread skaldendudler
Does no one have an idea? :(

On Monday, 27 November 2017 at 13:37:43 UTC+1, basti skalendudler wrote:
>
> The go tool pprof command is interactive, so I thought it would be enough 
> to type 'png' to get the image after the benchmark has run.
>
> I have now tried starting go tool pprof both during and after the benchmark 
> -> nothing changes.
>
> On Monday, 27 November 2017 at 04:37:48 UTC+1, Karan Chaudhary wrote:
>>
>> Off the top of my head, shouldn't the benchmark be run while traffic is 
>> being sent to the server, not before it is sent?
>>
>> On Sunday, 26 November 2017 00:11:40 UTC+5:30, basti skalendudler wrote:
>>>
>>> Hey guys, I posted a Stack Overflow question two days ago, but so far 
>>> nobody has been able to help me!
>>>
>>> I am trying to profile a web server I wrote, but my pprof profile does 
>>> not contain any data about the handler func.
>>> I am using the httprouter package by julienschmidt, and I want to simply 
>>> benchmark one of my handlers and see the pprof profile for that. For the 
>>> benchmarking, I am using go-wrk.
>>>
>>> I set up my web server and pprof like this:
>>>
>>>
>>> // Configure the server
>>> server := &http.Server{
>>>     Addr:    ":4000",
>>>     Handler: router,
>>> }
>>>
>>> // Serve the pprof endpoints on a separate port
>>> go func() {
>>>     log.Println(http.ListenAndServe(":6060", nil))
>>> }()
>>>
>>> // Start the server
>>> err = server.ListenAndServe()
>>> if err != nil {
>>>     panic(err)
>>> }
>>>
>>>
>>> The router is initialized like this:
>>>
>>>
>>> // Create the httprouter
>>> router := httprouter.New()
>>> // Register all handlers
>>> router.GET("/entities/:type/map", h.UseHandler(&h.ApiGetEntitiesMapRequest{}, p))
>>>
>>>
>>> And my handler looks like this:
>>>
>>>
>>> func (req ApiGetEntitiesMapRequest) Handle(r *http.Request, hrp httprouter.Params, p Params) (interface{}, error) {
>>>     test := make([]string, 0)
>>>     for i := 0; i < 1000; i++ {
>>>         test = append(test, "1")
>>>         test = append(test, "2")
>>>         // Ensure pprof has some time to collect its data
>>>         // (note: a bare 10 here is 10 nanoseconds)
>>>         time.Sleep(10)
>>>     }
>>>     return test, nil
>>> }
>>>
>>> This handler is just a test in which I dynamically append a lot of 
>>> elements to a slice. The reason is that I wanted to test whether these 
>>> dynamic allocations show up in the heap profile of pprof.
>>>
>>> Now, what I did was:
>>>
>>>  - Start my server
>>>  - Execute **go tool pprof http://localhost:6060/debug/pprof/heap** in my terminal
>>>  - Then benchmark my handler by executing **go-wrk -no-c -d 5 http://localhost:4000/entities/object/map**
>>>
>>> The request works and my benchmark also reports everything correctly. 
>>> However, when I type **png** in the pprof terminal, I get this graph:
>>>
>>> [image: pprof graph]
>>>
>>>
>>> The graph does not contain any information about my handler or the 
>>> costly heap allocations it performs. What am I doing wrong?
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[go-nuts] best practice for listening multiple linux TCP port?

2017-11-29 Thread smallnest
I have a requirement that our application needs to listen on multiple TCP 
ports (maybe some hundreds). It is not important why we have such a 
strange requirement. We may start listening on more ports and stop 
listening on some ports at runtime.

The application runs on Linux and listens over TCP.

My basic thought is to start multiple TCPListeners: one TCPListener per 
port and one goroutine for each TCPListener.

But what I want to know is whether there are better solutions than this, 
for example using epoll / the underlying file descriptors?

Thanks.



[go-nuts] How to check if db connection active after database server restarts ?

2017-11-29 Thread nupur8121991
Hi All,

I have built a Golang application based on REST services that connects to 
different databases and fetches results.
When a URL is hit, a connection is created if one does not already exist; 
otherwise the existing connection is reused. The connection is stored in a 
global variable that is passed among packages.

*I want to know how I can check whether my connection still works once my 
db goes down and is then restarted. In that case the session in my global 
variable will not be empty, but it will be broken.*
*How do I check whether the session variable has an active connection or not?*



[go-nuts] Sort a huge slice of data around 2GB

2017-11-29 Thread Subramanian K
Hi

I am using the native sort provided as part of the standard package, and 
processing 48MB of slice data takes ~45 sec.
For 2GB of data it takes a really long time, so I am trying to split the 
data into buckets, sort them concurrently, and finally collate the results 
of all these small sorted buckets.

Do we have any sort package which can sort huge data swiftly?

Regards,
Subu. K



Re: [go-nuts] Sort a huge slice of data around 2GB

2017-11-29 Thread Jan Mercl
On Wed, Nov 29, 2017 at 3:19 PM Subramanian K  wrote:

> To run 2GB of data it takes really long time, I am trying to split these
to buckets and make it run concurrently, finally need to collate results of
all these small sorted buckets.

Have you measured and detected where the bottleneck is? If it's in any of
the sort.Interface methods, concurrency might not be the best approach to
consider.



-- 

-j



Re: [go-nuts] Sort a huge slice of data around 2GB

2017-11-29 Thread Subramanian Karunanithi
Hi,

Yes, I am measuring it now and shall get back. On a side note, do we have
any package for parsing big data files, either as part of Go or as a library?
The basic sort.Sort() I have is taking 44 sec for 48MB of data, and I have
to parse 2GB of data.


Regards,
Subu K

On Wed, Nov 29, 2017 at 8:03 PM, Jan Mercl <0xj...@gmail.com> wrote:




Re: [go-nuts] How to check if db connection active after database server restarts ?

2017-11-29 Thread Shawn Milochik
Ping() lets you verify a connection. It should also be used when the
initial connection is made.

https://golang.org/pkg/database/sql/#DB.Ping



Re: [go-nuts] Sort a huge slice of data around 2GB

2017-11-29 Thread 'Axel Wagner' via golang-nuts
I'm not sure that the sort package is your problem here.
First, the actual size (in bytes) of the data set matters, but is not
*that* important; the number of elements matters much more. I wrote a
naive program to measure how long it takes to sort different in-memory
data sets and got:

48MB of uint64 (~6M elements): ~1s
48MB of uint8 (~50M elements): ~4s
2GB of uint64 (~270M elements): ~1m13s
2GB of uint8 (~2B elements): ~2m54s

None of these seems particularly bad to me. For example, in this post a C++
implementation of quick sort sorts 10M integers in 2.5s; my time is ~4s for
50M elements. Yes, the hardware is probably very different, but it doesn't
suggest a huge problem with the speed of the sort package to me.

I would assume the issue here is more that your comparison function is
slow, that the way you are reading the values is inefficient (e.g. do you
leave them on-disk?) or that your expectations about the time it takes to
sort hundreds of millions of elements are unrealistic.

On Wed, Nov 29, 2017 at 3:42 PM, Subramanian Karunanithi  wrote:




Re: [go-nuts] best practice for listening multiple linux TCP port?

2017-11-29 Thread Ian Lance Taylor
On Tue, Nov 28, 2017 at 10:01 PM,   wrote:

The Go runtime will use epoll internally.  Your Go code may as well
use multiple TCPListeners.

Ian



[go-nuts] golang library linking queries

2017-11-29 Thread jaysharma391
Hello All,

I am new to golang. I have a few queries:

I am developing an SDK which has 4-5 files; the SDK has a dependency on 
protocol buffers.

*1.  If I run the below command in the SDK folder: *
$ go build
   - It builds all the SDK files and creates an "sdk.a" file in the pkg 
folder.

*   Query*: sdk.a is a static library. If I want to release only the 
static library [sdk.a], what is the best way to do that, such that people 
can link the static library into an application and call the SDK APIs?

*2. To make a shared library I used the following commands: *
 $ go build -buildmode=shared
 $ go install -buildmode=shared std 
 $ go install -buildmode=shared -linkshared 

 With the above commands I got a shared library [libsdk.so] in the pkg folder.
 
 *Query*: If I want to distribute only the shared file [libsdk.so], what is 
the best way to do that, such that people can link the shared library into 
an application and call the SDK APIs?


*Thanks in advance.*



Re: [go-nuts] Sort a huge slice of data around 2GB

2017-11-29 Thread Michael Jones
Using a personal sort to avoid the standard sort's abstraction...

//   50331648 bytes,    6291456 8-byte elements,  0.687317 seconds (48MB of uint64)
//   50331648 bytes,   50331648 1-byte elements,  1.605258 seconds (48MB of uint8)
// 2147483648 bytes,  268435456 8-byte elements, 35.262593 seconds (2GB of uint64)
// 2147483648 bytes, 2147483648 1-byte elements, 68.793937 seconds (2GB of uint8)

...is good for another 2-3x. Running in parallel mode would get you another
3x-5x, but I expect 5x this rate is as good as you will get (at least with
battery, laptop, pyjamas, and reclining on a sofa :-)
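For illustration, a direct (non-interface) sort along those lines might look like the sketch below; this is my own code, not the implementation behind the timings above:

```go
package main

import "fmt"

// qsortUint64 sorts a in place without going through sort.Interface,
// avoiding the per-comparison method-call overhead of the standard sort.
func qsortUint64(a []uint64) {
	for len(a) > 12 {
		// Median-of-three pivot selection.
		m := len(a) / 2
		if a[0] > a[m] {
			a[0], a[m] = a[m], a[0]
		}
		if a[0] > a[len(a)-1] {
			a[0], a[len(a)-1] = a[len(a)-1], a[0]
		}
		if a[m] > a[len(a)-1] {
			a[m], a[len(a)-1] = a[len(a)-1], a[m]
		}
		pivot := a[m]

		// Hoare-style partition.
		i, j := 0, len(a)-1
		for i <= j {
			for a[i] < pivot {
				i++
			}
			for a[j] > pivot {
				j--
			}
			if i <= j {
				a[i], a[j] = a[j], a[i]
				i++
				j--
			}
		}
		// Recurse into the smaller half, loop on the larger one.
		if j < len(a)-i {
			qsortUint64(a[:j+1])
			a = a[i:]
		} else {
			qsortUint64(a[i:])
			a = a[:j+1]
		}
	}
	// Insertion sort for short runs.
	for i := 1; i < len(a); i++ {
		for j := i; j > 0 && a[j] < a[j-1]; j-- {
			a[j], a[j-1] = a[j-1], a[j]
		}
	}
}

func main() {
	a := []uint64{5, 3, 8, 1, 9, 2}
	qsortUint64(a)
	fmt.Println(a) // prints [1 2 3 5 8 9]
}
```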
-- 
Michael T. Jones
michael.jo...@gmail.com



Re: [go-nuts] Sort a huge slice of data around 2GB

2017-11-29 Thread Michael Jones
Here's a little more detail on system, direct-type, and parallel
direct-type sorting:

celeste:double mtj$ go test -run=NONE  -bench Sort
goos: darwin
goarch: amd64
pkg: double
BenchmarkSSort250-8       100000     11285 ns/op
BenchmarkSSort500-8        50000     31001 ns/op
BenchmarkSSort1000-8       20000     93048 ns/op
BenchmarkSSort2000-8       10000    218677 ns/op
BenchmarkSSort4000-8        3000    473944 ns/op
BenchmarkSSort8000-8        2000   1033446 ns/op
BenchmarkSSort16000-8       1000   2208783 ns/op
BenchmarkSSort32000-8        300   4599316 ns/op
BenchmarkSSort64000-8        200   9571278 ns/op
BenchmarkSSort128000-8       100  20524194 ns/op
BenchmarkSSort256000-8        30  42263166 ns/op
BenchmarkSSort512000-8        20  89330628 ns/op
BenchmarkSSort1024000-8       10 190409724 ns/op
BenchmarkQSort250-8       500000      2978 ns/op
BenchmarkQSort500-8       200000     10271 ns/op
BenchmarkQSort1000-8       30000     36474 ns/op
BenchmarkQSort2000-8       20000     93520 ns/op
BenchmarkQSort4000-8       10000    217781 ns/op
BenchmarkQSort8000-8        3000    470050 ns/op
BenchmarkQSort16000-8       2000   1020604 ns/op
BenchmarkQSort32000-8       1000   2184270 ns/op
BenchmarkQSort64000-8        300   4750204 ns/op
BenchmarkQSort128000-8       200   9841153 ns/op
BenchmarkQSort256000-8       100  21644805 ns/op
BenchmarkQSort512000-8        30  46220023 ns/op
BenchmarkQSort1024000-8       20  89012565 ns/op
BenchmarkPSort250-8       500000      3009 ns/op
BenchmarkPSort500-8       200000     10284 ns/op
BenchmarkPSort1000-8       50000     36435 ns/op
BenchmarkPSort2000-8       30000     48991 ns/op
BenchmarkPSort4000-8       20000     87163 ns/op
BenchmarkPSort8000-8       10000    156933 ns/op
BenchmarkPSort16000-8       5000    289609 ns/op
BenchmarkPSort32000-8       2000    616701 ns/op
BenchmarkPSort64000-8       1000   1183778 ns/op
BenchmarkPSort128000-8       500   2602444 ns/op
BenchmarkPSort256000-8       300   5492333 ns/op
BenchmarkPSort512000-8       100  10974195 ns/op
BenchmarkPSort1024000-8       50  22786689 ns/op
PASS
ok  double 104.041s

...which has parallel sorting an additional 2x faster on average. More CPUs
help, but not anywhere close to linearly. The real issue here is cache
effects, memory bandwidth, and bounds checking.

On Wed, Nov 29, 2017 at 10:47 AM, Michael Jones 
wrote:




-- 
Michael T. Jones
michael.jo...@gmail.com



Re: [go-nuts] Sort a huge slice of data around 2GB

2017-11-29 Thread 'Axel Wagner' via golang-nuts
BTW, depending on what problem we are actually talking about, even
the numbers from the initial post might be fine. With those numbers,
sorting the 2GB should take roughly half an hour. A significant amount of
time, yes, but if you only need to do it once (or once a day or week or
whatever) it might not be worth spending any significant amount of your
time making it go faster. It seems counter-intuitive, I know, but CPU time
is very cheap and human time is very expensive :)

On Wed, Nov 29, 2017 at 8:08 PM, Michael Jones 
wrote:




Re: [go-nuts] Sort a huge slice of data around 2GB

2017-11-29 Thread Michael Jones
agree!

On Wed, Nov 29, 2017 at 11:37 AM, 'Axel Wagner' via golang-nuts <
golang-nuts@googlegroups.com> wrote:




-- 
Michael T. Jones
michael.jo...@gmail.com



[go-nuts] Is there a way to omit the additional allocation when passing []byte to a C function?

2017-11-29 Thread Владислав Митов
Hello, 

I'm writing a Go wrapper around a C library and I'm trying to avoid all 
possible allocations.

Here is my working example 
- 
https://github.com/milagro-crypto/milagro-crypto-c/commit/1f070f24d83c76c5e7e5c6c548a9715438abc758#diff-a3c034fc3f075297e9e3a7cca9ace62eR65.
There are 3 allocations there, at lines 65, 73 and 82. 

Passing `(*C.char)(unsafe.Pointer(&msg[0]))` at line 65 panics with `runtime 
error: cgo argument has Go pointer to Go pointer`. 

Any suggestions? 

Thanks



Re: [go-nuts] Is there a way to omit the additional allocation when passing []byte to a C function?

2017-11-29 Thread Ian Lance Taylor
On Wed, Nov 29, 2017 at 1:00 PM, Владислав Митов
 wrote:

You may be running into https://golang.org/issue/14210.

Ian



Re: [go-nuts] Is there a way to omit the additional allocation when passing []byte to a C function?

2017-11-29 Thread Владислав Митов
Hey, thanks for the swift answer.
I got to that thread, but the solutions didn't work for me. I'm still hitting 
the cgo pointer check. It works if I disable it, though.



[go-nuts] Is there a way to omit the additional allocation when passing []byte to a C function?

2017-11-29 Thread Tamás Gulácsi
The cgo checker is right: you pass in a pointer with a pointer in it - mOct is 
a pointer, and mOct.val is a pointer, too.
You have to malloc it.



[go-nuts] Is there a way to omit the additional allocation when passing []byte to a C function?

2017-11-29 Thread Владислав Митов
So no way around 4 allocations for 2 values? 



[go-nuts] Re: Is there a way to omit the additional allocation when passing []byte to a C function?

2017-11-29 Thread xingtao zhao
Make your C function accept an octet parameter instead of an *octet parameter? 
Then there will be no extra allocations.

On Wednesday, November 29, 2017 at 2:39:38 PM UTC-8, Владислав Митов wrote:



[go-nuts] Re: Is there a way to omit the additional allocation when passing []byte to a C function?

2017-11-29 Thread xingtao zhao
/* C code:

struct PKCS15_Ret {
    int error_code;
    int len;
};

struct PKCS15_Ret PKCS15_wrap(int hash_type, octet message, octet receive) {
    int error_code = PKCS15(hash_type, &message, &receive);
    return (struct PKCS15_Ret){ error_code, receive.len };
}

*/

func PKCS15_PLAIN(hashType, RFS int, msg []byte) ([]byte, error) {
    // input
    mOct := C.octet{
        C.int(len(msg)),
        C.int(len(msg)),
        (*C.char)(unsafe.Pointer(&msg[0])),
    }

    // output
    r := make([]byte, RFS)
    cOct := C.octet{
        C.int(0),
        C.int(RFS),
        (*C.char)(unsafe.Pointer(&r[0])),
    }

    rtn := C.PKCS15_wrap(C.int(hashType), mOct, cOct)

    if rtn.error_code != 1 {
        return nil, &Error{code: int(rtn.error_code)}
    }

    return r[:rtn.len], nil
}


On Wednesday, November 29, 2017 at 3:12:31 PM UTC-8, xingtao zhao wrote:



[go-nuts] Building a select friendly condition type

2017-11-29 Thread Dave Cheney
Hello,

Anyone for a round of code golf?

I'm working on a piece of code for a streaming gRPC server and have found 
myself in the position that I need to wait for a notification event (in 
this case that a cache has been updated and I need to stream the results to 
the client) or a context.Done event. There may be multiple gRPC clients, 
and they may come and go without warning. Thus I'm in need of a sync.Cond 
type that works with select. 

I've coded this up, which appears to do the job

https://play.golang.org/p/pMZHwA1AD-

But I'm wondering if others have found themselves in the same position, and 
if so, what your solutions were. Am I missing something, will my cond lose 
notifications, or can it be simplified?
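For comparison, a channel-based broadcast in the same spirit (a generic sketch, not the code behind the playground link): Wait hands out a channel that the next Broadcast closes, so it composes with ctx.Done() in a select.

```go
package main

import (
	"fmt"
	"sync"
)

// Cond is a select-friendly, broadcast-only condition: Wait returns a
// channel that is closed by the next Broadcast.
type Cond struct {
	mu sync.Mutex
	ch chan struct{}
}

func NewCond() *Cond {
	return &Cond{ch: make(chan struct{})}
}

// Wait returns the channel to select on; grab it *before* checking the
// guarded state so a Broadcast in between is not lost.
func (c *Cond) Wait() <-chan struct{} {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.ch
}

// Broadcast wakes every goroutine currently selecting on a Wait channel.
func (c *Cond) Broadcast() {
	c.mu.Lock()
	defer c.mu.Unlock()
	close(c.ch)
	c.ch = make(chan struct{})
}

func main() {
	c := NewCond()
	done := c.Wait()
	c.Broadcast()
	<-done // returns immediately: the channel was closed
	fmt.Println("notified")
}
```

The usual caveat applies as with sync.Cond: this only signals "something changed"; waiters must re-check the cache state after waking, which also covers the window between checking state and selecting.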

Thanks

Dave
