[
https://issues.apache.org/jira/browse/HDFS-17268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
katty he resolved HDFS-17268.
-----------------------------
Resolution: Resolved
> When a SocketTimeoutException occurs, overwrite mode can delete old data and
> leave the file empty
> -------------------------------------------------------------------------------------------
>
> Key: HDFS-17268
> URL: https://issues.apache.org/jira/browse/HDFS-17268
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.2.2
> Reporter: katty he
> Priority: Major
>
> Recently, I used fs.create(path, true /* createOrOverwrite */) to write data
> into a parquet file a, but a SocketTimeoutException occurred: "
> org.apache.hadoop.io.retry.RetryInvocationHandler [] -
> java.net.SocketTimeoutException: Call From xxx to namenodexxx:8888 failed on
> socket timeout exception: java.net.SocketTimeoutException: 60000 millis
> timeout while waiting for channel to be ready for read. ch :
> java.nio.channels.SocketChannel[connected local=/node:33416
> remote=namenode:8888]; For more details see:
> http://wiki.apache.org/hadoop/SocketTimeout, while invoking
> ClientNamenodeProtocolTranslatorPB.create over namenode:8888. Trying to
> failover immediately." Afterwards the size of file a was zero, and reading it
> failed with "file a is not a parquet file". The HDFS audit log showed two
> create calls from two different routers, so the retried create appears to
> have truncated the file the first call had already created. Overwrite is
> therefore not safe in this situation.
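A common mitigation for this class of bug is to never create directly over the
target: write the new data to a temporary path, then atomically rename it over
the destination, so a timed-out or retried create can never leave the target
empty. Below is a minimal, illustrative sketch of that pattern using local
java.nio.file APIs only (not HDFS APIs; the class and method names are
hypothetical, and on HDFS the analogous approach would be writing to a temp
path and calling rename):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeOverwrite {
    // Write the new contents to a sibling temp file first, then atomically
    // rename it over the target. The old data survives until the new file is
    // fully written, so a failure mid-write cannot leave the target empty.
    public static void writeAtomically(Path target, byte[] data) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, data);
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE,
                   StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path target = Files.createTempDirectory("demo").resolve("a.parquet");
        Files.write(target, "old".getBytes(StandardCharsets.UTF_8));
        writeAtomically(target, "new".getBytes(StandardCharsets.UTF_8));
        System.out.println(Files.readString(target)); // prints "new"
    }
}
```

With this pattern a retry of the write step only re-creates the temp file; the
destination is replaced in a single rename, never truncated in place.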
--
This message was sent by Atlassian Jira
(v8.20.10#820010)