[go-nuts] Go Path & Github Sync

2017-04-05 Thread Mukund 8kmiles
Hi, is there a best practice or recommended folder structure for maintaining a GitHub repo inside GOPATH? I keep my Go source in the default GOPATH, which is /home//go:

go
--- bin
--- pkg
--- src
    -- flmain
    -- flowlogs
    ---

I would like to maintain the same path for GitHub
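
A minimal sketch of the convention the thread is asking about: under GOPATH, a GitHub-hosted package lives at src/github.com/<user>/<repo>, so the import path mirrors the repository URL. The user and repo names below (youruser/flowlogs) are hypothetical placeholders.

// Print where a GitHub-hosted repository would conventionally live under GOPATH.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	gopath := os.Getenv("GOPATH")
	if gopath == "" {
		gopath = filepath.Join(os.Getenv("HOME"), "go") // default GOPATH since Go 1.8
	}
	// Conventional layout: "go get github.com/youruser/flowlogs" places the source here.
	fmt.Println(filepath.Join(gopath, "src", "github.com", "youruser", "flowlogs"))
}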

[go-nuts] Re: Random Number Generation - Golang -- Error/Bug

2017-04-05 Thread Mukund 8kmiles
Thanks a lot, Uli & Peter. @Uli: the goroutine-safe rand.Int63 resolved the problem. Regards, Mukund. On Wed, Apr 5, 2017 at 12:51 AM, Uli Kunitz wrote: > Hi Mukund, > > Please recognize that the Source object returned by rand.NewSource is not > safe for concurrent use by multiple goroutines.
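
A minimal sketch of the fix mentioned in the reply: the package-level rand.Int63 is safe for concurrent use, unlike a rand.Rand built on rand.NewSource, which must not be shared across goroutines without synchronization.

package main

import (
	"fmt"
	"math/rand"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Safe: the global source behind rand.Int63 is internally locked.
			fmt.Printf("goroutine %d: %d\n", id, rand.Int63())
		}(i)
	}
	wg.Wait()
}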

[go-nuts] Random Number Generation - Golang -- Error/Bug

2017-04-04 Thread Mukund 8kmiles
Hi, it is a basic index out of range, but inside math/rand and not in objects that I create. Could that be a bug? I have given my source below. Any help or pointers are appreciated.

panic: runtime error: index out of range
goroutine 32 [running]:
panic(0x7d5780, 0xc420010140)
/opt/go/src/runt
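
A minimal sketch of one workaround, assuming (as the reply later in the thread suggests) that the panic came from sharing a single rand.Rand built with rand.NewSource across goroutines: give each goroutine its own Source so no unsynchronized concurrent access to the same Source can occur.

package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Each goroutine owns its rand.Rand, so the non-thread-safe
			// Source is never touched concurrently.
			r := rand.New(rand.NewSource(time.Now().UnixNano() + int64(id)))
			fmt.Printf("goroutine %d: %d\n", id, r.Int63())
		}(i)
	}
	wg.Wait()
}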

[go-nuts] Re: Compressing 2.5 GB data trims files

2017-02-16 Thread Mukund 8kmiles
Hey all, thanks, io.Copy resolved the issue. Thanks & Regards, Mukund. On Thu, Feb 16, 2017 at 2:51 AM, Dave Cheney wrote: > Or use https://godoc.org/io/ioutil#ReadFile > > But really you don't need to buffer all the data in memory, io.Copy will do > that for you > > in, err := os.Open(inpu
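
A minimal sketch of the streaming approach suggested in the thread: io.Copy pipes the file through a gzip.Writer without buffering the whole input in memory, and closing the gzip.Writer flushes any remaining data. The file names are hypothetical placeholders.

package main

import (
	"compress/gzip"
	"io"
	"log"
	"os"
)

func compress(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	zw := gzip.NewWriter(out)
	// io.Copy streams the data in small chunks, so a file larger than
	// 2.5 GB is handled the same way as a small one.
	if _, err := io.Copy(zw, in); err != nil {
		return err
	}
	// Closing the gzip.Writer flushes buffered, unwritten data; skipping
	// this is a classic cause of truncated archives.
	return zw.Close()
}

func main() {
	if err := compress("input.csv", "input.csv.gz"); err != nil {
		log.Fatal(err)
	}
}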

[go-nuts] Compressing 2.5 GB data trims files

2017-02-15 Thread Mukund 8kmiles
Hi, has anyone tried compressing more than ~2.5 GB of data using Go? The following function compresses files. Files less than ~2.5 GB are compressed successfully without any data loss. Files greater than ~2.5 GB get compressed, but a section of the data at the end is trimmed. Any inputs are welcome! func
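
Since the original function is cut off here, a minimal sketch of a sanity check for this kind of report (file names are hypothetical): decompress the archive while counting bytes and compare against the original file size, which makes silent truncation visible without loading anything into memory.

package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"log"
	"os"
)

func decompressedSize(path string) (int64, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	zr, err := gzip.NewReader(f)
	if err != nil {
		return 0, err
	}
	defer zr.Close()

	// Streaming into io.Discard returns the uncompressed byte count
	// without holding the data in memory.
	return io.Copy(io.Discard, zr)
}

func main() {
	orig, err := os.Stat("input.csv")
	if err != nil {
		log.Fatal(err)
	}
	got, err := decompressedSize("input.csv.gz")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("original %d bytes, decompressed %d bytes\n", orig.Size(), got)
}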

[go-nuts] Go Optimizations for IO Intensive programs

2017-02-10 Thread mukund . 8kmiles
Hello, I have written a Go program which downloads a 5 GB compressed CSV from Amazon S3, decompresses it, and uploads the decompressed CSV (20 GB) back to Amazon S3. Amazon S3 provides a default concurrent uploader/downloader, and I am using a multithreaded approach to download files in parallel, decompre
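
A minimal sketch of one way to keep this kind of workload IO-bound rather than disk- or memory-bound: stream the decompression through an io.Pipe so the consumer (the uploader) reads while the producer decompresses, instead of materializing the 20 GB result first. The S3 download and upload are replaced here by local files as stand-ins; the real program would use the AWS SDK, which is assumed and not shown.

package main

import (
	"compress/gzip"
	"io"
	"log"
	"os"
)

// decompressTo streams the gzip-compressed input into dst without ever
// buffering the full decompressed output in memory.
func decompressTo(dst io.Writer, src io.Reader) error {
	zr, err := gzip.NewReader(src)
	if err != nil {
		return err
	}
	defer zr.Close()
	_, err = io.Copy(dst, zr)
	return err
}

func main() {
	// Stand-in for the S3 download stream.
	download, err := os.Open("big.csv.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer download.Close()

	pr, pw := io.Pipe()

	// Producer: decompress into the pipe as compressed bytes arrive.
	go func() {
		pw.CloseWithError(decompressTo(pw, download))
	}()

	// Consumer: the uploader reads from pr; here it is just a local file.
	upload, err := os.Create("big.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer upload.Close()
	if _, err := io.Copy(upload, pr); err != nil {
		log.Fatal(err)
	}
}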