Hi Harsh,
I did mean 0.18 - sorry about the typo.
I read through the BlockSender.sendChunks method once again and noticed
that I wasn't reading the checksum byte array correctly in my code.
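For anyone who hits the same thing, here is a rough sketch of how I now read a data-transfer packet. The constants and field names are my own guesses at the 1.x layout (check them against BlockSender.sendChunks before relying on this); the key point is that all checksum words come back to back before the chunk data, not interleaved with it:

```java
import java.io.DataInputStream;
import java.io.IOException;

// Sketch of reading one data-transfer packet, assuming the 1.x layout:
// header fields, then ALL checksum words, then ALL chunk data.
public class PacketSketch {
    static final int BYTES_PER_CHECKSUM = 512; // default io.bytes.per.checksum
    static final int CHECKSUM_SIZE = 4;        // one CRC32 word per chunk

    // Returns the packet's data bytes; checksums are read but not verified.
    public static byte[] readPacket(DataInputStream in) throws IOException {
        int packetLen = in.readInt();            // length of rest of packet
        long offsetInBlock = in.readLong();
        long seqno = in.readLong();
        boolean lastPacketInBlock = in.readBoolean();
        int dataLen = in.readInt();

        int numChunks = (dataLen + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
        byte[] checksums = new byte[numChunks * CHECKSUM_SIZE];
        in.readFully(checksums);   // the part I was misreading: checksums
                                   // precede the data bytes
        byte[] data = new byte[dataLen];
        in.readFully(data);
        return data;
    }
}
```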
Thanks for the help,
Dhaivat Pandya
On Sun, Apr 6, 2014 at 8:59 PM, Harsh J wrote:
> Ther[...]
[...]looked through the DataXceiver, BlockSender, DFSClient
(RemoteBlockReader) classes, but I still can't quite grasp how this data
transfer is conducted.
Any help would be appreciated,
Dhaivat Pandya
Another question: is there some kind of documentation (other than the code)
that specifies the kind of serialization algorithm Writables use? I think I
may have to port the concept over to another language.
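As far as I can tell there is no separate spec: the format is whatever each class's write(DataOutput) emits, with WritableUtils supplying the variable-length integer encoding (writeVInt delegates to writeVLong). For porting purposes, here is a sketch of that encoding as I read the 1.x source; worth diffing against WritableUtils itself before trusting it:

```java
import java.io.DataOutput;
import java.io.IOException;

// Sketch of WritableUtils.writeVLong as I read the 1.x source: values in
// [-112, 127] take one byte; otherwise a length byte (also encoding the
// sign) is followed by the value's big-endian bytes.
public class VLongSketch {
    public static void writeVLong(DataOutput out, long i) throws IOException {
        if (i >= -112 && i <= 127) {
            out.writeByte((byte) i);   // small values are stored directly
            return;
        }
        int len = -112;
        if (i < 0) {
            i ^= -1L;                  // one's complement for negatives
            len = -120;
        }
        long tmp = i;
        while (tmp != 0) {             // count how many value bytes we need
            tmp >>= 8;
            len--;
        }
        out.writeByte((byte) len);     // header byte: sign + byte count
        int numBytes = (len < -120) ? -(len + 120) : -(len + 112);
        for (int idx = numBytes; idx != 0; idx--) {
            out.writeByte((byte) ((i >> ((idx - 1) * 8)) & 0xFF));
        }
    }
}
```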
Thanks,
Dhaivat
On Sun, Mar 23, 2014 at 9:11 AM, Dhaivat Pandya wrote:
> Thank[...]
> [...].2/src/hdfs/org/apache/hadoop/hdfs/protocol/DatanodeID.java
>
> Best,
> Andrew
>
>
> On Sat, Mar 22, 2014 at 1:16 PM, Dhaivat Pandya wrote:
>
> > Hi everyone,
> >
> > I'm currently working on an application that requires some important
> > details[...]
[...]ing a
single packet which tells the NameNode where the DataNode is "located"
(i.e. host and port).
However, in this packet, I'm not clear as to what scheme is used to
serialize the DataNode information. I am running Hadoop 1.2.1. Any
information will be appreciated.
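In case it clarifies the question: what I'm picturing is a Writable-style record that serializes field by field through DataOutput. The class below is entirely hypothetical (the real field order and string encoding are whatever DatanodeID.write() does, so that is the place to check), but it shows the shape of the scheme:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical host/port record, illustrating the field-by-field
// DataOutput serialization a DatanodeID-like Writable uses.
public class EndpointRecord {
    String host;   // e.g. "127.0.0.1"
    int port;      // e.g. 50010

    public void write(DataOutput out) throws IOException {
        out.writeUTF(host);   // 2-byte length + modified UTF-8 bytes
        out.writeInt(port);   // 4 bytes, big-endian
    }

    public void readFields(DataInput in) throws IOException {
        host = in.readUTF();
        port = in.readInt();
    }
}
```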
Thank you,
Dhaivat Pandya
> [...]sub-block rather than file-level caching,
> which is nice for apps that only read regions of a file.
>
> Best,
> Andrew
>
>
> On Mon, Dec 23, 2013 at 9:57 PM, Dhaivat Pandya wrote:
>
> > Hi Harsh,
> >
> > Thanks a lot for the response. As it turns out, I fi[...]
Actually, we can relegate this as a non-issue; I have found a different
source of error in the system.
On Sun, Dec 29, 2013 at 3:03 PM, Dhaivat Pandya wrote:
> Anyone?
>
>
> On Sat, Dec 28, 2013 at 1:06 PM, Dhaivat Pandya
> wrote:
>
>> Hi,
>>
>> I've be[...]
Anyone?
On Sat, Dec 28, 2013 at 1:06 PM, Dhaivat Pandya wrote:
> Hi,
>
> I've been working a lot with the Hadoop NameNode IPC protocol (while
> building a cache layer on top of Hadoop). I've noticed that for request
> packets coming from the default DFS client that
Hi,
I've been working a lot with the Hadoop NameNode IPC protocol (while
building a cache layer on top of Hadoop). I've noticed that for request
packets coming from the default DFS client that do not have a method name,
the length field is often *completely* off.
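For concreteness, this is how I'm currently framing each request off the wire (after the initial connection header), assuming a 4-byte frame length that covers everything following it, including the 4-byte call id. The names are mine, so I may well be misreading the layout, which would explain the discrepancy:

```java
import java.io.DataInputStream;
import java.io.IOException;

// Sketch of reading one 1.x IPC request frame, as I currently understand
// it: 4-byte frame length, 4-byte call id, then the serialized call.
public class IpcFrameSketch {
    // Returns the call payload (everything after the call id).
    public static byte[] readCall(DataInputStream in) throws IOException {
        int frameLen = in.readInt();   // bytes that follow, incl. call id
        int callId = in.readInt();     // read and discarded in this sketch
        byte[] payload = new byte[frameLen - 4];
        in.readFully(payload);
        return payload;
    }
}
```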
For example, I've been looking at [...]
[...] response time.
Any thoughts on the cache layer would be greatly appreciated.
Thanks,
Dhaivat
On Mon, Dec 23, 2013 at 11:46 PM, Harsh J wrote:
> Hi,
>
> On Mon, Dec 23, 2013 at 9:41 AM, Dhaivat Pandya
> wrote:
> > Hi,
> >
> > I'm currently trying to build a cach[...]
Any other ideas?
On Sun, Dec 22, 2013 at 10:38 PM, Dhaivat Pandya wrote:
> I understand that that is how the port is *later* retrieved, but how does
> the namenode know the port in the first place? i.e. if the datanode sends a
> packet to the namenode, how does the namenode know what port[...]
>
>
> On Mon, Dec 23, 2013 at 9:41 AM, Dhaivat Pandya
> wrote:
> > Hi,
> >
> > I'm currently trying to build a cache layer that should sit "on top" of
> > the
> > datanode. Essentially, the namenode should know the port number of the
> > [...]
Hi,
I'm currently trying to build a cache layer that should sit "on top" of the
datanode. Essentially, the namenode should know the port number of the
cache layer instead of that of the datanode (since the namenode then relays
this information to the default HDFS client). All of the communication [...]
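Mechanically, what I have in mind for the cache layer is a TCP relay sitting on the advertised port and shuttling bytes to and from the real datanode, inspecting traffic as it passes. This is only a generic sketch with placeholder host/port values, not anything HDFS-specific:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal TCP relay: accepts one client connection and shuttles bytes
// to and from a backend (e.g. the real datanode). A cache layer would
// inspect or answer traffic in the pump loop before forwarding.
public class RelaySketch implements Runnable {
    private final ServerSocket server;
    private final String backendHost;
    private final int backendPort;

    public RelaySketch(int listenPort, String backendHost, int backendPort)
            throws IOException {
        this.server = new ServerSocket(listenPort);
        this.backendHost = backendHost;
        this.backendPort = backendPort;
    }

    public int port() { return server.getLocalPort(); }

    @Override
    public void run() {
        try (Socket client = server.accept();
             Socket backend = new Socket(backendHost, backendPort)) {
            Thread up = pump(client.getInputStream(), backend.getOutputStream());
            Thread down = pump(backend.getInputStream(), client.getOutputStream());
            up.join();
            down.join();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private static Thread pump(InputStream in, OutputStream out) {
        Thread t = new Thread(() -> {
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n); // cache inspection would go here
                    out.flush();
                }
            } catch (IOException ignored) {
                // connection closed; let the thread exit
            }
        });
        t.start();
        return t;
    }
}
```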