[ 
https://issues.apache.org/jira/browse/HDFS-17480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu resolved HDFS-17480.
-----------------------------
    Resolution: Fixed

> [FGL] Solutions for GetListing RPC 
> -----------------------------------
>
>                 Key: HDFS-17480
>                 URL: https://issues.apache.org/jira/browse/HDFS-17480
>             Project: Hadoop HDFS
>          Issue Type: Task
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>
> GetListing is a very commonly used RPC by end-users, so we should consider 
> how GetListing can support FGL.
> For example, suppose the directory /a/b/c contains some children, such as d1, 
> d2, d3, f1, f2, f3.
> Normally, we would hold the write lock on iNode c while listing /a/b/c to make 
> sure that no other threads are updating the children of iNode c. But if the 
> listing path is /, the entire directory tree would be locked, which would 
> have a great impact.
>  
> There are two solutions to fix this problem:
> Solution 1:
>  * Hold the read lock of iNode c
>  * Loop through all children
>  ** Hold the read lock of each child and return its file status
> The result may contain some stale file statuses, because the looped children 
> may be updated by other threads before the result of getListing is returned 
> to the client.
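> A minimal sketch of solution 1, assuming a hypothetical INode structure that 
> exposes a per-inode ReentrantReadWriteLock (illustrative only, not the actual 
> FGL lock API in the namenode):
>
>   import java.util.ArrayList;
>   import java.util.List;
>   import java.util.concurrent.locks.ReentrantReadWriteLock;
>
>   // Hypothetical inode with its own read/write lock, for illustration only.
>   class INode {
>     final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
>     final List<INode> children = new ArrayList<>();
>     String name;
>     String toFileStatus() { return name; }  // stand-in for a real file status
>   }
>
>   class Solution1 {
>     // Read lock on the parent, then a short read lock per child.
>     static List<String> getListing(INode parent) {
>       List<String> result = new ArrayList<>();
>       parent.lock.readLock().lock();            // read lock on iNode c only
>       try {
>         for (INode child : parent.children) {
>           child.lock.readLock().lock();         // read lock on each child
>           try {
>             result.add(child.toFileStatus());   // may be stale once returned
>           } finally {
>             child.lock.readLock().unlock();
>           }
>         }
>       } finally {
>         parent.lock.readLock().unlock();
>       }
>       return result;
>     }
>   }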
>  
> Solution 2:
>  * Hold the write lock of both the parent and the current node when updating 
> the current node
>  ** For example, hold the write locks of iNode c and d1 when updating d1
>  * Hold the read lock of iNode c
>  * Loop through all children
> This solution increases the scope of locking, since the parent's write lock 
> is usually not required.
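> A matching sketch of solution 2, reusing the same hypothetical INode structure 
> from above: every child update also takes the parent's write lock, so a 
> listing only needs the parent's read lock:
>
>   class Solution2 {
>     // Write path: lock the parent (iNode c) and the child (d1) for an update.
>     static void updateChild(INode parent, INode child, String newName) {
>       parent.lock.writeLock().lock();
>       child.lock.writeLock().lock();
>       try {
>         child.name = newName;                   // the actual mutation
>       } finally {
>         child.lock.writeLock().unlock();
>         parent.lock.writeLock().unlock();
>       }
>     }
>
>     // Listing only needs the parent's read lock; no child can change
>     // concurrently, because every child update must also acquire the
>     // parent's write lock.
>     static List<String> getListing(INode parent) {
>       parent.lock.readLock().lock();
>       try {
>         List<String> result = new ArrayList<>();
>         for (INode child : parent.children) {
>           result.add(child.toFileStatus());
>         }
>         return result;
>       } finally {
>         parent.lock.readLock().unlock();
>       }
>     }
>   }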
>  
> I prefer the first solution, since the namenode always returns results in 
> batches, and changes may have occurred between batches anyway.
> By the way, GetContentSummary will use solution one.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
