Re: [DISCUSS] Hive Support

2025-01-07 Thread Denys Kuzmenko
Hi Peter,

Re 
"Hive would provide a HMS client jar which only contains java code which is 
needed to connect and communicate using Thrift with a HMS instance (no internal 
HMS server code etc). We could use it as a dependency for our 
iceberg-hive-metastore module. Either setting a minimal version, or using a 
shaded embedded version."

In Hive 4.x, `HiveMetaStoreClient` is shipped within the
`hive-standalone-metastore-common` jar, which contains the client code and
security classes:
https://mvnrepository.com/artifact/org.apache.hive/hive-standalone-metastore-common/4.0.1
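
For anyone who wants to sanity-check that jar, here is a minimal sketch of
connecting through it (hms-host is a placeholder, and this assumes the
Hive 4 Configuration-based constructor):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
    import org.apache.hadoop.hive.metastore.IMetaStoreClient;
    import org.apache.hadoop.hive.metastore.conf.MetastoreConf;

    public class HmsClientSmokeTest {
      public static void main(String[] args) throws Exception {
        // Talk to a remote HMS over Thrift using only the client jar.
        Configuration conf = MetastoreConf.newMetastoreConf();
        MetastoreConf.setVar(conf, MetastoreConf.ConfVars.THRIFT_URIS,
            "thrift://hms-host:9083"); // placeholder host/port
        IMetaStoreClient client = new HiveMetaStoreClient(conf);
        System.out.println(client.getAllDatabases());
        client.close();
      }
    }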

Regards,
Denys


Re: [Discuss] Replace Hadoop Catalog Examples with JDBC Catalog in Documentation

2025-01-07 Thread Kevin Liu
Hey folks,

Happy new year! I want to bump this thread with the refreshed PR #11845.
I've applied the recommendations from this thread.
The PR replaces the Hadoop catalog examples in the Getting Started pages
with the JDBC catalog, along with an added example of configuring the REST
catalog.
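
To give a flavor of what the PR moves towards, a file-backed SQLite
configuration could look roughly like this in a Spark session (a sketch
only: the catalog name, paths, and sample DDL are illustrative, and the
SQLite JDBC driver must be on the classpath):

    import org.apache.spark.sql.SparkSession;

    public class JdbcCatalogQuickstart {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("iceberg-jdbc-quickstart")
            .master("local[*]")
            // Register an Iceberg catalog backed by a local SQLite file.
            .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
            .config("spark.sql.catalog.local.type", "jdbc")
            .config("spark.sql.catalog.local.uri", "jdbc:sqlite:/tmp/iceberg_catalog.db")
            .config("spark.sql.catalog.local.warehouse", "/tmp/warehouse")
            .getOrCreate();
        spark.sql("CREATE TABLE IF NOT EXISTS local.db.demo (id BIGINT) USING iceberg");
      }
    }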

Please take a look and let me know what you think.

Best,
Kevin Liu

On Thu, Oct 17, 2024 at 6:10 AM Marc Cenac 
wrote:

> Hey Kevin,
>
> This approach sounds good to me and thanks for your work to improve
> the getting started docs! I would consider using the file-based SQLite
> rather than in-memory since I've seen some users surprised when they
> realize their tables disappear from the catalog upon restart, but
> either way is a welcome change from the Hadoop catalog.
>
> Thanks!
> -Marc
>
> On Wed, Oct 16, 2024 at 1:42 PM Kevin Liu  wrote:
>
>> Hey folks,
>>
>>
>> Thanks for the discussions.
>>
>>
>> It seems everyone is in favor of replacing the Hadoop catalog example,
>> and the question now is whether to replace it with the JDBC catalog or the
>> REST catalog.
>>
>>
>> I originally proposed the JDBC catalog as a replacement primarily due to
>> its ease of use. Users can quickly set up a JDBC catalog backed by an
>> in-memory or file-based datastore without needing additional
>> infrastructure. It also aligns with the quick-start ethos of "it just
>> works." That said, I agree that an example of setting up the REST catalog
>> should be part of the getting-started guide since it’s the catalog the
>> community has aligned on.
>>
>>
>> Here's what I propose as a middle ground.
>>
>>    1. We replace the Hadoop catalog example with a JDBC catalog backed
>>    by an in-memory datastore. This allows users to get started without
>>    needing additional infrastructure, which was one of the main benefits
>>    of the Hadoop catalog.
>>    2. We add a new section describing the REST catalog, its benefits,
>>    and how to set one up. We can use the REST catalog adapter [1], with
>>    the adapter using the JDBC catalog as its internal catalog.
>>
>>
>> This approach gives users a way to quickly prototype while also guiding
>> them toward the REST catalog for production use cases.
>>
>>
>> Looking forward to hearing more from you all.
>>
>>
>> Best,
>>
>> Kevin Liu
>>
>>
>> [1] https://lists.apache.org/thread/xl1cwq7vmnh6zgfd2vck2nq7dfd33ncq
>>
>>
>>
>> On Thu, Oct 10, 2024 at 3:44 AM Eduard Tudenhöfner <
>> etudenhoef...@apache.org> wrote:
>>
>>> I would prefer to advocate for the REST catalog in those examples/docs
>>> (similar to how the Spark quickstart example uses the REST catalog).
>>> The docs could then refer to the quickstart example to indicate what's
>>> required in terms of services to be started before a user can spawn a
>>> Spark shell.
>>>
>>> On Thu, Oct 10, 2024 at 12:15 PM Jean-Baptiste Onofré 
>>> wrote:
>>>
 Hi

 As we are talking about "documentation" (quick start/readme), I would
 rather propose to use the REST catalog here instead of JDBC.

 As it's the catalog we "promote", I think it would be valuable for
 users to start with the "right thing".

 JDBC Catalog is interesting for a quick test/getting-started guide, but
 we know how it goes: it will be heavily used (see what happened with the
 HadoopCatalog, used in production whereas it should not be :) ).

 Regards
 JB

 On Tue, Oct 8, 2024 at 12:18 PM Kevin Liu 
 wrote:
 >
 > Hi all,
 >
 > I wanted to bring up a suggestion regarding our current
 documentation. The existing examples for Iceberg often use the Hadoop
 catalog, as seen in:
 >
 > Adding a Catalog - Spark Quickstart [1]
 > Adding Catalogs - Spark Getting Started [2]
 >
 > Since we generally advise against using Hadoop catalogs in production
 environments, I believe it would be beneficial to replace these examples
 with ones that use the JDBC catalog. The JDBC catalog, configured with a
 local SQLite database file, offers similar convenience but aligns better
 with production best practices.
 >
 > I've created an issue [3] and a PR [4] to address this. Please take a
 look, and I'd love to hear your thoughts on whether this is a direction we
 want to pursue.
 >
 > Best,
 > Kevin Liu
 >
 > [1] https://iceberg.apache.org/spark-quickstart/#adding-a-catalog
 > [2]
 https://iceberg.apache.org/docs/nightly/spark-getting-started/#adding-catalogs
 > [3] https://github.com/apache/iceberg/issues/11284
 > [4] https://github.com/apache/iceberg/pull/11285
 >

>>>


Re: [ANN] Apache Iceberg Summit 2025, dates, venue and CFP

2025-01-07 Thread Kevin Liu
Thanks for putting this together everyone. Looking forward to the event and
meeting in person!

Best,
Kevin Liu

On Tue, Jan 7, 2025 at 2:15 AM Jean-Baptiste Onofré  wrote:

> Hi everyone,
>
> With this new year comes a new announcement: Apache Iceberg Summit 2025 !
>
> Iceberg Summit 2025 is a hybrid event sanctioned by The Apache
> Software Foundation and organized by Dremio, Snowflake, and Microsoft.
> The summit aims to promote Apache Iceberg education and
> knowledge-sharing among data engineers, developers, architects and
> contributors.
>
> The event will take place at the Hyatt Regency SOMA in San Francisco,
> USA, in person on April 8 and virtually on April 9 via the Bizzabo event
> platform. It will feature real-world talks from data practitioners and
> developers leveraging Apache Iceberg as their table format.
>
> The CFP is now open, so please, submit your talks here:
> https://sessionize.com/iceberg-summit-2025/
>
> The Apache Iceberg PMC has set up the Selection Committee, responsible
> for selecting the talks for the Summit.
>
> If you are interested in sponsoring the event, please reach out to Russell
> (russell.spit...@gmail.com) or myself (jbono...@apache.org). We can
> share a prospectus and introduce you to the sponsors committee.
>
> We are working on the website for the event; I will share details soon.
>
> I would like to thank again the PMC members, and especially Russell,
> for their help and approval.
>
> I'm looking forward to the event and I'm sure we will have great talks ;)
>
> Regards
> JB
>


Re: [discuss] Allow 200 responses for HEAD requests in REST API

2025-01-07 Thread Kevin Liu
Hey folks,

Thanks for the feedback on the proposal.

I believe it’s best to retain the current 204 response code for HEAD
requests in the REST API. Existing client and server implementations that
adhere to the spec expect only 204 responses. Introducing an additional 200
response code would create backward compatibility issues and require all
existing clients to be updated.

While the MDN Web Docs page on HEAD requests and S3's HeadObject
documentation suggest using 200 responses for HEAD requests, it's better
to stick with 204 since that's what most servers and clients already
support.

We can treat 200 responses as exceptions rather than the rule. Clients can
optionally handle 200 responses as a workaround while servers transition to
sending 204 responses to fully adhere to the spec.
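
To illustrate the workaround, a tolerant client-side existence check might
look like this (a rough sketch using java.net.http; the endpoint path is a
placeholder):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TableExistsCheck {
      public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest head = HttpRequest.newBuilder(
                URI.create("https://catalog.example.com/v1/namespaces/db/tables/t"))
            .method("HEAD", HttpRequest.BodyPublishers.noBody())
            .build();
        int code = client.send(head, HttpResponse.BodyHandlers.discarding())
            .statusCode();
        // The spec says 204; tolerate 200 from servers that haven't
        // transitioned yet.
        boolean exists = code == 204 || code == 200;
        System.out.println("exists: " + exists);
      }
    }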

Thanks all for the discussion!

Best,
Kevin Liu

On Wed, Dec 18, 2024 at 1:00 AM Xuanwo  wrote:

> Hi,
>
> From my initial understanding of HTTP semantics, the HEAD request should
> be treated like a GET request without a response body. Therefore, returning
> a 204 for a HEAD request does not align with the concept held by most
> developers. I support the idea of allowing a 200 response instead.
>
> For example, AWS S3 HeadObject also returns a 200 status code when the
> file exists.
>
> Ref: https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/HEAD
>
> On Wed, Dec 18, 2024, at 16:36, Fokko Driesprong wrote:
>
> Hey Kevin,
>
> I also agree with Yufei. For PyIceberg we had a long list of issues around
> the HEAD request (#1363 gives a nice overview) to check if the table is
> there (and that has also just been added to Java). Allowing 200 would
> unblock users quickly, but implementations should adhere to the spec, and
> we should be reluctant about this kind of fix.
>
> Kind regards,
> Fokko
>
> Op wo 18 dec 2024 om 07:56 schreef Eduard Tudenhöfner <
> etudenhoef...@apache.org>:
>
> I agree with Yufei's observation. Changing the return code in the spec
> from 204 to 200 will just cause additional downstream work that doesn't
> seem worth it. Returning 204 makes the API also very explicit in telling
> that the request succeeded but that there's no content in the response that
> the client needs to care about.
>
> Eduard
>
> On Tue, Dec 17, 2024 at 10:20 PM Yufei Gu  wrote:
>
> The distinction between 200 and 204 is subtle enough that I'm comfortable
> using them interchangeably in this context. My main concern is that, if we
> make this change, all clients except for PyIceberg will need to be updated
> to support both 200 and 204, since a server could return either status
> code. It might not be worth it.
>
> Yufei
>
>
> On Tue, Dec 17, 2024 at 12:52 PM Kevin Liu  wrote:
>
> Hey folks,
>
> I’d like to propose adding status code 200 as a valid response for HEAD
> requests in the Catalog REST API. Currently, the following HEAD requests
> return status code 204 for a successful response:
> * namespaceExists
> * tableExists
> * viewExists
>
> In PyIceberg, support for status code 200 has already been implemented for
> table_exists and namespace_exists.
> The motivation for this change is to enable more intuitive and
> user-friendly integrations with catalogs, as Fokko highlighted here.
> Standardizing this behavior in the Catalog REST spec would promote
> consistency across implementations and make interactions easier for users
> and client developers.
> Would love to hear your thoughts on this proposal!
> Best,
> Kevin Liu
>
> Xuanwo
>
> https://xuanwo.io/
>
>


Re: [DISCUSS] REST: Way to query if metadata pointer is the latest

2025-01-07 Thread Taeyun Kim
Hi,

If the Table interface had been defined as immutable, sharing BaseTable objects 
could have been a viable option. However, since that’s not the case, changing 
the current design to share BaseTable objects may lead to compatibility issues 
with the existing API.

Regarding the current() and refresh() methods of TableOperations, these methods 
are defined to return or modify the state of the associated TableOperations 
object. If multiple threads in an application (e.g., a query engine) call these 
methods on the same TableOperations object, it should be the application's 
responsibility to handle the resulting state.

From a brief look at the source code, it seems that the refresh() method of 
RESTTableOperations is not thread-safe. If an application wants to avoid 
potential threading issues, it can either synchronize calls to these methods or 
use separate TableOperations instances for different threads.
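
For example, a minimal sketch of the synchronization option, assuming ops
is the TableOperations instance shared across threads:

    // Serialize refresh() on the shared TableOperations instance.
    TableMetadata latest;
    synchronized (ops) {
      latest = ops.refresh(); // refresh() returns the freshly loaded metadata
    }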

As for converting TableMetadata into a BaseTable object, the source code 
suggests that the TableOperations object owns the TableMetadata directly, 
making this a lightweight operation. Additionally, this conversion process 
seems to apply to catalogs other than the REST catalog as well, so it doesn’t 
appear to be a REST-specific issue. Therefore, I believe it’s unnecessary to 
include this aspect within the scope of the current proposal.

Best regards,
Taeyun





-Original Message-
From: "Gabor Kaszab" 
To: ;
Cc:
Sent: 2025-01-07 (Tue) 22:10:32 (UTC+09:00)
Subject: Re: [DISCUSS] REST: Way to query if metadata pointer is the latest


Hi,


Thanks for offering help, JB! I think the REST spec related part of the 
proposal is quite simple, but since this is the first time I touch the spec, 
let me reach out if I have any questions.


Thanks for the response, Taeyun!

Re "The decision on whether to make a REST call is handled implicitly
within the API":
No, this decision is not part of the functionality in the Iceberg lib. In
the "Client use case" section, "client" refers to a client of the Iceberg
lib, e.g. a query engine. Because of this confusion I have now rewritten
it as "query engine"; however, it's not just query engines that use the
Iceberg lib and this API. So basically it's the query engine's
responsibility to judge whether to call the loadTable() API, where it can
use a timer or anything else to make this decision. Whenever the
loadTable() API is called, it will perform the REST call and will either
re-load the table or serve the result from the cache in RESTSessionCatalog
(in case of 304).
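
To make the 304 flow concrete, here is a rough sketch of the underlying
conditional-request pattern at the HTTP level (illustrative only, not the
actual RESTSessionCatalog code):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Returns null on 304 (the cached copy is current); otherwise returns
    // the fresh response for the caller to parse and cache with its ETag.
    static HttpResponse<String> loadIfChanged(
        HttpClient http, URI tableUri, String cachedEtag) throws Exception {
      HttpRequest.Builder req = HttpRequest.newBuilder(tableUri).GET();
      if (cachedEtag != null) {
        req.header("If-None-Match", cachedEtag); // ETag from the previous load
      }
      HttpResponse<String> resp =
          http.send(req.build(), HttpResponse.BodyHandlers.ofString());
      return resp.statusCode() == 304 ? null : resp;
    }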


About storing TableMetadata vs BaseTable objects:
You made a good point. However, I'm still a bit hesitant here, because it's
possible even now to invoke the current() and refresh() functions from
different threads on the same TableOperations object, so in theory the
problematic use case is already present. Nothing prevents users from doing
that.
But currently the loadTable() API gives a new BaseTable object for each
call, so I feel that with the current proposal of re-using existing
BaseTable objects we'd change the behaviour of the API. This makes me lean
towards caching TableMetadata SoftReferences instead. My motivation for
sharing table objects was to be memory-optimal on the JVM, so that multiple
loadTable() calls for the same table would not increase memory usage by
creating different objects for the same table, and also to eliminate the
need to convert a TableMetadata response into a BaseTable object.
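
For illustration, the SoftReference-based alternative could look roughly
like this (a sketch only; the class and method names are illustrative, not
from the proposal):

    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.apache.iceberg.TableMetadata;
    import org.apache.iceberg.catalog.TableIdentifier;

    class MetadataCache {
      // SoftReferences let the GC reclaim entries under memory pressure
      // instead of pinning every loaded table's metadata.
      private final Map<TableIdentifier, SoftReference<TableMetadata>> cache =
          new ConcurrentHashMap<>();

      TableMetadata get(TableIdentifier ident) {
        SoftReference<TableMetadata> ref = cache.get(ident);
        return ref == null ? null : ref.get(); // null if absent or collected
      }

      void put(TableIdentifier ident, TableMetadata metadata) {
        cache.put(ident, new SoftReference<>(metadata));
      }
    }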


Let me give the above some further thought, though.
Since the REST spec part of the proposal seems to be agreed on, I'll create a 
PR in the upcoming days.


Regards,
Gabor




On Mon, Jan 6, 2025 at 10:56 AM Jean-Baptiste Onofré  wrote:
Hi Gabor

I did a new pass on the proposal and it looks good to me. Great work !

I'm volunteer to work with you on the spec PR according to the doc.

Thoughts ?

Regards
JB

On Thu, Dec 19, 2024 at 11:09 AM Gabor Kaszab  wrote:
>
> Hi All,
>
> Just an update that the proposal went through some iterations based on the 
> comments from Daniel Weeks. Thanks for taking a look, Daniel!
>
> In a nutshell this is what changed compared to the original proposal:
> - The Catalog API will stay intact; there is no proposed new API function 
> now. With this, the freshness-aware functionality and the ETags in 
> particular will not be exposed to the clients of the API.
> - Instead of storing the ETags in TableMetadata, we propose to store them 
> in RESTTableOperations, since the proposal only focuses on the REST 
> catalog. The very same changes can be done on other TableOperations 
> implementations if there is a need to have this for other catalogs too.
> - A SoftReference cache of (TableIdentifier -> Table object) is introduced on 
> the RESTSessionCatalog level. This can be used for providing previous ETags 
> to the HTTPClient and also to answer Catalog API calls with the latest table 
> metadata.

Re: [DISCUSS] Hive Support

2025-01-07 Thread Péter Váry
Thanks Wing Yew,


We should remove the Iceberg Hive Runtime module, but make sure that the
Iceberg Hive Metastore module tests run against the supported(?)
Hive 2.3.10/3.1.3/4.0.1 versions. Other tests could run against whatever
Hive version they prefer.

In detail:
--
Let me recap what I understand here:

   - Iceberg Hive metastore module is working with Hive 2, Hive 3 and Java
   11 - since neither the tests nor the users are complaining about it
   - Iceberg Hive runtime tests are using features from Hive which do not
   support Java 11 - as we have seen broken tests when we upgraded the Java
   version
   - Even Spark 4 uses an embedded Hive 2.3.10 - This means that the
   features used by Spark and Iceberg from Hive 2.3.10 are working with Java
   11, since neither the tests nor the users were complaining about it
   - Iceberg Hive Runtime tests are running against Hive 2.3.9 and Hive
   3.1.3
   - Iceberg Hive Metastore tests are running against Hive 2.3.9
   - Spark tests are running against Hive 2.3.10

We already decided that we would like to remove the Hive runtime support
from the Iceberg code in the 1.8.0 release.
We should decide which Hive versions we would like to support for the
Iceberg Hive Metastore module. Based on my understanding above:

   - Hive 2.3.10 should be mandatory, as Spark uses it as a default
   - Hive 3.1.3 is what probably most of our users are using
   - Hive 4.0.1 is the current Hive version

Tell me if you think otherwise.

Since the Iceberg Hive Metastore module uses very specific Hive 3 related
code (the DynMethods loader for the HMS Client proxy), I don't think we can
claim support without at least some tests running against the appropriate
Hive versions. I am not even sure that the metastore module works with
Hive 4 - maybe @Manu has more knowledge here.
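
For readers unfamiliar with that loader, a rough sketch of the pattern
using Iceberg's DynConstructors (the exact constructors Iceberg probes may
differ):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.metastore.IMetaStoreClient;
    import org.apache.iceberg.common.DynConstructors;

    // Probe for whichever HiveMetaStoreClient constructor the runtime Hive
    // exposes: Hive 2 takes a HiveConf, Hive 3+/4 take a plain Configuration.
    static IMetaStoreClient newClient(HiveConf conf) throws Exception {
      DynConstructors.Ctor<IMetaStoreClient> ctor =
          DynConstructors.builder(IMetaStoreClient.class)
              .impl("org.apache.hadoop.hive.metastore.HiveMetaStoreClient",
                  HiveConf.class)
              .impl("org.apache.hadoop.hive.metastore.HiveMetaStoreClient",
                  Configuration.class)
              .buildChecked();
      return ctor.newInstance(conf); // HiveConf extends Configuration
    }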

Thanks,
Peter

Wing Yew Poon  wrote (at 2025 Jan 7, Tue, 1:18):

> FYI --
> It looks like the built-in Hive version in the master branch of Apache
> Spark is 2.3.10 (https://issues.apache.org/jira/browse/SPARK-47018), and
> https://issues.apache.org/jira/browse/SPARK-44114 (upgrade built-in Hive
> to 3+) is an open issue.
>
>
> On Mon, Jan 6, 2025 at 1:07 PM Wing Yew Poon  wrote:
>
>> Hi Peter,
>> In Spark, you can specify the Hive version of the metastore that you want
>> to use. There is a configuration, spark.sql.hive.metastore.version, which
>> currently (as of Spark 3.5) defaults to 2.3.9, and the jars supporting this
>> default version are shipped with Spark as built-in. You can specify a
>> different version and then specify spark.sql.hive.metastore.jars=path (the
>> default is built-in) and spark.sql.hive.metastore.jars.path to point to
>> jars for the Hive metastore version you want to use. What
>> https://issues.apache.org/jira/browse/SPARK-45265 does is to allow 4.0.x
>> to be supported as a spark.sql.hive.metastore.version. I haven't been
>> following Spark 4, but I suspect that the built-in version is not changing
>> to Hive 4.0. The built-in version is also used for other things that Spark
>> may use from Hive (aside from interaction with HMS), such as Hive SerDes.
>> See
>> https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html.
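
A sketch of that wiring, using the config keys mentioned above (version
and paths are placeholders):

    import org.apache.spark.sql.SparkSession;

    SparkSession spark = SparkSession.builder()
        .master("local[*]")
        .enableHiveSupport()
        .config("spark.sql.hive.metastore.version", "3.1.3")
        .config("spark.sql.hive.metastore.jars", "path")
        .config("spark.sql.hive.metastore.jars.path", "file:///opt/hive-3.1.3/lib/*")
        .getOrCreate();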
>> - Wing Yew
>>
>>
>> On Mon, Jan 6, 2025 at 2:04 AM Péter Váry 
>> wrote:
>>
>>> Hi Manu,
>>>
>>> > Spark has only added hive 4.0 metastore support recently for Spark
>>> 4.0[1] and there will be conflicts
>>>
>>> Does this mean that Spark 4.0 will always use Hive 4 code? Or will it
>>> use Hive 2 when it is present on the classpath, and the embedded Hive 4
>>> code when older Hive versions are not on the classpath?
>>>
>>> > Firstly, upgrading from Hive 2 to Hive 4 is a huge change
>>>
>>> Is this a huge change even after we remove the Hive runtime module?
>>>
>>> After removing the Hive runtime module, we have 2 remaining Hive
>>> dependencies:
>>>
>>>    - HMS Client
>>>       - The Thrift API should not be changed between the Hive versions,
>>>       so unless we start to use specific Hive 4 features we should be
>>>       fine here - so whatever version of Hive we use, it should work
>>>       - Java API changes. We found that in Hive 2 and Hive 3 the
>>>       HMSClient classes used different constructors, so we ended up
>>>       using DynMethods to pick the appropriate constructors - if we use
>>>       a strict Hive version here, then we won't need the DynMethods
>>>       anymore
>>>       - Based on our experience, even if Hive 3 itself doesn't support
>>>       Java 11, the HMS Client for Hive 3 doesn't have any issues when
>>>       used with Java 11
>>>    - Testing infrastructure
>>>       - TestHiveMetastore creates and starts an HMS instance. This
>>>       could be highly dependent on the version of Hive we are using.
>>>       Since this is only testing code, I expect that only our tests
>>>       interact with it
>>>
>>> *@Manu*: You know more of the details here. Do we have HMSClient issue

[ANN] Apache Iceberg Summit 2025, dates, venue and CFP

2025-01-07 Thread Jean-Baptiste Onofré
Hi everyone,

With this new year comes a new announcement: Apache Iceberg Summit 2025 !

Iceberg Summit 2025 is a hybrid event sanctioned by The Apache
Software Foundation and organized by Dremio, Snowflake, and Microsoft.
The summit aims to promote Apache Iceberg education and
knowledge-sharing among data engineers, developers, architects and
contributors.

The event will take place at the Hyatt Regency SOMA in San Francisco,
USA, in person on April 8 and virtually on April 9 via the Bizzabo event
platform. It will feature real-world talks from data practitioners and
developers leveraging Apache Iceberg as their table format.

The CFP is now open, so please, submit your talks here:
https://sessionize.com/iceberg-summit-2025/

The Apache Iceberg PMC has set up the Selection Committee, responsible
for selecting the talks for the Summit.

If you are interested in sponsoring the event, please reach out to Russell
(russell.spit...@gmail.com) or myself (jbono...@apache.org). We can
share a prospectus and introduce you to the sponsors committee.

We are working on the website for the event; I will share details soon.

I would like to thank again the PMC members, and especially Russell,
for their help and approval.

I'm looking forward to the event and I'm sure we will have great talks ;)

Regards
JB


Re: [DISCUSS] Hive Support

2025-01-07 Thread Manu Zhang
Thanks Wing Yew for filling in the missing part.
>
> The built-in version is also used for other things that Spark may use from
> Hive (aside from interaction with HMS), such as Hive SerDes.

AFAIK, this is blocking Spark itself from upgrading the built-in version to
Hive 4.

Thanks Peter for the recap. The only thing to clarify is that the Hive 3
Runtime tests have never been running, though that's irrelevant now.
There were test failures [1] after upgrading the metastore module to Hive 4,
so I guess it doesn't work yet.

Moving forward, I agree we should make sure the metastore tests run
against all Hive versions in use. However, I'm not sure how to set up the
modules and dependencies given the changes in Hive 4 (thanks, Denys). I need
more experiments to explore various ideas.


1.
https://github.com/apache/iceberg/actions/runs/12339936020/job/34436774628?pr=11750

Thanks,
Manu

On Tue, Jan 7, 2025 at 8:01 PM Denys Kuzmenko  wrote:

> Hi Peter,
>
> Re
> "Hive would provide a HMS client jar which only contains java code which
> is needed to connect and communicate using Thrift with a HMS instance (no
> internal HMS server code etc). We could use it as a dependency for our
> iceberg-hive-metastore module. Either setting a minimal version, or using a
> shaded embedded version."
>
> In Hive 4.x, `HiveMetaStoreClient` is shipped within the
> `hive-standalone-metastore-common` jar, which contains the client code and
> security classes:
>
> https://mvnrepository.com/artifact/org.apache.hive/hive-standalone-metastore-common/4.0.1
>
> Regards,
> Denys
>


Re: [DISCUSS] REST Catalog bulk object lookup

2025-01-07 Thread Renjie Liu
Hi, Vladimir:

Thanks for raising this. I think your proposal mixes two things up:
1. Add an endpoint for loading a catalog object by name without knowing its
type. This is reasonable to me.
2. Make the endpoint a bulk-load operation. I'm hesitant about this option
since it makes error handling difficult. As you mentioned in the doc, if we
introduce 1, the number of requests will drop from m * n to m, where m is
the number of object names and n is the number of object types. For the
problem of request bursts and latency, will client-side caching plus
parallel fetching solve your problem?
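
To make that suggestion concrete, a rough sketch of client-side caching
plus parallel fetching (illustrative only; lookupUnknownType stands in for
a per-object catalog call and is not an existing Iceberg API):

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;

    class ParallelLookup {
      // Cache in-flight and completed lookups so a burst of metadata
      // queries resolves each name at most once.
      private final Map<String, CompletableFuture<Object>> cache =
          new ConcurrentHashMap<>();

      void resolveAll(List<String> names) {
        List<CompletableFuture<Object>> lookups = names.stream()
            .map(name -> cache.computeIfAbsent(name,
                n -> CompletableFuture.supplyAsync(() -> lookupUnknownType(n))))
            .toList();
        CompletableFuture.allOf(lookups.toArray(new CompletableFuture[0])).join();
      }

      // Placeholder for a single type-agnostic catalog call.
      Object lookupUnknownType(String name) { return null; }
    }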

On Fri, Jan 3, 2025 at 7:33 PM Vladimir Ozerov 
wrote:

> A motivational example: Trino recently had to implement parallel table
> metadata fetching (https://github.com/trinodb/trino/pull/23909) because
> otherwise metadata queries (e.g., INFORMATION_SCHEMA) were slow. Parallel
> metadata retrieval boosted metadata query performance significantly. But
> this solution is far from ideal:
>
>    1. Now catalogs will experience request bursts whenever a user or a
>    tool attempts to list Iceberg objects in Trino. This may potentially
>    induce unpredictable latency spikes, especially for large schemas
>    2. Each such request imposes a constant catalog overhead on
>    request dispatching, serde, security checks, etc. which could be easily
>    avoided with bulk metadata lookup
>    3. The aforementioned fix addresses only parallel table retrieval. But
>    then the engine will have to support the same thing for views and
>    materialized views, producing even more request bursts, with a
>    considerable number of requests returning error responses because we
>    cannot get an object's type and its metadata in one shot.
>
>
> On Tue, Dec 24, 2024 at 10:29 PM Vladimir Ozerov 
> wrote:
>
>> Hi,
>>
>> Following the discussion [1] I'd like to formally propose an extension to
>> REST catalog API that allows efficient lookup of multiple catalog objects
>> without knowing their types in advance.
>>
>> When a query is submitted, the engine needs to resolve referenced
>> objects. The current REST API requires multiple catalog calls per query,
>> because it (1) assumes prior knowledge of the object type (not the case
>> for virtually all query engines), and (2) lacks a bulk object lookup
>> operation. This leads to increased query latency and increased REST
>> catalog load.
>>
>> The proposal aims to solve the problem by introducing an optional endpoint
>> that returns information about several catalog objects, including their
>> type (table, view) and metadata.
>>
>> Note that the proposal attempts to solve two distinct issues via a single
>> endpoint:
>>
>>    1. Inability to look up an object without knowing its type
>>    2. Inability to look up multiple objects in a single request
>>
>> If the community finds the proposal too complicated, we can minimize the
>> scope to point 1 and introduce an endpoint for object lookup without
>> knowing its type. Even without bulk lookup, this can help engine
>> developers minimize SQL query planning latency.
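
Purely as an illustration of point 1, a type-carrying lookup could be
consumed like this (the response shape and all names here are
hypothetical, not from the proposal doc):

    // Hypothetical response of a type-agnostic lookup endpoint.
    record LookupResponse(String type, String metadataLocation) {}

    static void plan(LookupResponse r) {
      switch (r.type()) {
        case "table" -> planAsTable(r.metadataLocation());
        case "view" -> planAsView(r.metadataLocation());
        default -> throw new IllegalStateException("unknown type: " + r.type());
      }
    }

    static void planAsTable(String loc) { /* hypothetical engine hook */ }
    static void planAsView(String loc) { /* hypothetical engine hook */ }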
>>
>> Proposal:
>> https://docs.google.com/document/d/1KfzdQT8Q2xiV_yPNvICROCepz-Qqpm0npob7hmb40Fc/edit?usp=sharing
>>
>> [1] https://lists.apache.org/thread/g44czzpjqqhdvronqfyckw4mnxvlpn3s
>>
>> Regards,
>> --
>> *Vladimir Ozerov*
>>
>>
>
> --
> *Vladimir Ozerov*
> Founder
> querifylabs.com
>