1. *Go Version*
   - go version go1.6.3 linux/amd64
2. *OS*
   - CentOS Linux release 7.1.1503 (Core)
3. *Description of Problem*
   - *Goal:* handle files (which can be arbitrarily large) on my server and then upload them to Google Cloud Storage.
   - With the code below I have no trouble uploading 40+ MB files; however, I have a requirement to handle uploads larger than 2 GB.
   - I have tried a number of things and encountered three distinct errors. I was initially using the following code to handle the file upload.
4. *I am very new to Go*
5. *This is my first Groups post ever*

*SERVER - works for smaller files (until I run out of memory)*
func upload(w http.ResponseWriter, r *http.Request) {
    // Just to see where we are in our code from the server; should probably delete this
    fmt.Println("handling file upload", r.Method)
    if r.Method == "POST" {
        // debugging only
        fmt.Println("We've got a POST")

        if err := r.ParseMultipartForm(32 << 20); err != nil {
            fmt.Println("ParseMultipartForm: ", err)
            return
        }
        file, handler, err := r.FormFile("uploadfile")
        if err != nil {
            fmt.Println("FormFile: ", err)
            return
        }
        defer file.Close()

        // Meat and potatoes - the only reason this method exists
        fileURL, err := storage.UploadFileToBucket(file, handler, bucketID)
        if err != nil {
            fmt.Println("UploadFileToBucket: ", err)
            return
        }

        // I think clients would appreciate it if we told them that we created something
        w.WriteHeader(http.StatusCreated)
        json.NewEncoder(w).Encode(map[string]string{"url": fileURL})
    } else {
        // You can't expect me to create a resource - Do you GET me?
        err := fmt.Errorf("method not supported: %s", r.Method)
        fmt.Println(err)

        w.WriteHeader(http.StatusMethodNotAllowed)
        json.NewEncoder(w).Encode(map[string]string{"error": err.Error()})
    }
}

Again, the above works fine for files even in excess of 40 MB. However, if I attempt to handle an uploaded file of 2 GB, I receive a `no such file` error from the call to `r.FormFile`.

I have tried increasing the default memory size on `ParseMultipartForm` to 256 MB and see the same exact symptom. If I increase it to greater than 2 GB, I get an OOM error and the connection gets reset. OK, fine - I get that.

Let's pivot! So I tried using `multipart.Reader` instead to stream the data. This, however, gives me a different error: regardless of file size, I get an `unexpected EOF` in `io.Copy()`.

*UPDATED SERVER - doesn't work for files of any size*
func uploadLarge(w http.ResponseWriter, r *http.Request) {
    // Just to see where we are in our code from the server; should probably delete this
    fmt.Println("handling large file upload", r.Method)
    if r.Method == "POST" {
        // debugging only
        fmt.Println("We've got a POST")

        // we have a huge file, so we should try to stream it
        mr, err := r.MultipartReader()
        if err != nil {
            // log.Fatal here would kill the whole server; report and bail instead
            fmt.Println("MultipartReader: ", err)
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        fileURL, err := storage.UploadLargeFile(mr, bucketID)
        if err != nil {
            fmt.Println("UploadLargeFile: ", err)
            return
        }

        // I think clients would appreciate it if we told them that we created something
        w.WriteHeader(http.StatusCreated)
        json.NewEncoder(w).Encode(map[string]string{"url": fileURL})
    } else {
        // You can't expect me to create a resource - Do you GET me?
        err := fmt.Errorf("method not supported: %s", r.Method)
        fmt.Println(err)

        w.WriteHeader(http.StatusMethodNotAllowed)
        json.NewEncoder(w).Encode(map[string]string{"error": err.Error()})
    }
}

*STORAGE Package - UploadLargeFile*
func UploadLargeFile(fr *multipart.Reader, bucketID string) (url string, err error) {
    if client == nil {
        client = createStorageClient()
    }

    bucket, ok := buckets[bucketID]
    if !ok {
        bucket = client.Bucket(bucketID)
        buckets[bucketID] = bucket
    }

    var uploadFile *multipart.Part
    for {
        p, err := fr.NextPart()
        if err == io.EOF {
            break
        }
        if err != nil {
            return "", err
        }
        if p.FormName() == "uploadfile" {
            uploadFile = p
            fmt.Println("UploadFile: ", uploadFile)
        }
    }
    if uploadFile == nil {
        return "", fmt.Errorf("no uploadfile part in request")
    }

    // Create the object writer and send the file to GCS
    ctx := context.Background()
    w := bucket.Object(uploadFile.FileName()).NewWriter(ctx)
    w.ACL = []storage.ACLRule{{Entity: storage.AllUsers, Role: storage.RoleReader}}
    w.CacheControl = "public, max-age=86400"

    // Copy the file data to the writer
    if _, err := io.Copy(w, uploadFile); err != nil {
        fmt.Println("Copy: ", err)
        return "", err
    }
    if err := w.Close(); err != nil {
        return "", err
    }

    const publicURL = "https://storage.googleapis.com/%s/%s"
    return fmt.Sprintf(publicURL, bucketID, uploadFile.FileName()), nil
}

Now, again, the error happens in `io.Copy` here - `unexpected EOF` - for a file upload of any size, and I can't quite figure out why. Perhaps it has to do with the client I am using to test it:

*CLIENT*
func uploadFileToService(path string, fileName string, paramName string) string {
    // Open the file that we plan on uploading
    file, err := os.Open(path)
    if err != nil {
        log.Fatal("File Open:", err)
    }
    defer file.Close()

    rPipe, wPipe := io.Pipe()
    req, err := http.NewRequest("POST", serviceHost+"/upload", rPipe)
    if err != nil {
        log.Fatal("NewRequest:", err)
    }

    w := multipart.NewWriter(wPipe)
    req.Header.Set("Content-Type", w.FormDataContentType())
    // We need to read and write in parallel, so spawn a goroutine to write
    // to the pipe while it is being read from
    go func() {
        defer wPipe.Close()
        part, err := w.CreateFormFile(paramName, fileName)
        if err != nil {
            log.Fatal("CreateFormFile:", err)
        }

        if _, err := io.Copy(part, file); err != nil {
            log.Fatal("CopyFile:", err)
        }

        if err = w.Close(); err != nil {
            log.Fatal("multipart.Writer.Close:", err)
        }
    }()

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal("Do Request:", err)
    }
    defer resp.Body.Close()

    ret, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Fatal("ReadAll:", err)
    }
    return string(ret)
}

I am not sure what the issue is. Do any of you Go experts out there happen to see what could be going wrong here? (Let's neglect the fact that I run out of memory - I get why that is happening.) But the `no such file` and `unexpected EOF` errors are boggling me.

Thanks,
-Seth

-- 
You received this message because you are subscribed to the Google Groups "golang-nuts" group.