> One is that occasionally I get a lot of non-deterministic metadata errors
> when BB_CACHE_POLICY = "cache", multiconfig, and hash equiv are enabled.
> The errors are all on recipes for which SRCREV = "${AUTOREV}". It doesn't
> always happen, but it did just now when I rebased our "zeus-modified"
> branch onto the upstream "zeus" branch, to get the changes starting with
> 7dc72fde6edeb5d6ac6b3832530998afeea67cbc.
>
> Two is that sometimes the "Initializing tasks" stage appears stuck at 44%
> for a couple of minutes. I traced it down to this code in runqueue.py
> (line 1168 on zeus):
>
>     # Iterate over the task list and call into the siggen code
>     dealtwith = set()
>     todeal = set(self.runtaskentries)
>     while len(todeal) > 0:
>         for tid in todeal.copy():
>             if len(self.runtaskentries[tid].depends - dealtwith) == 0:
>                 dealtwith.add(tid)
>                 todeal.remove(tid)
>                 self.prepare_task_hash(tid)
>
> When I instrument the loop to print out the size of "todeal", I see it
> decrease very slowly, sometimes only a couple of entries at a time. I'm
> guessing this is because prepare_task_hash is contacting the hash equiv
> server in a serial manner here. I'm over my work VPN, which makes things
> extra slow. Is there an opportunity for batching here?
>
> Batching is hard because the hashes you request might depend on hashes
> previously received from the server. It might be possible to figure out
> these dependencies and submit multiple batch requests, but this would
> require some work in the code (a rough sketch is appended below). Another
> option would be to have some multi-level caching scheme where you have a
> local database that mirrors your centralized server.
>
> If you have any ideas on how to make it faster, we would love to hear your
> opinion :)
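>
> To make the batching idea a bit more concrete, here is a very rough,
> untested sketch of grouping tasks into dependency levels and resolving
> each level with a single request. query_unihashes_batch() is a made-up
> stand-in for a batched server call (the current siggen code resolves one
> task hash at a time), and runtaskentries just mirrors the shape of the
> structure used in runqueue.py:
>
>     def batch_prepare_task_hashes(runtaskentries, query_unihashes_batch):
>         """Resolve task hashes level by level instead of one at a time.
>
>         runtaskentries maps tid -> object with a .depends set of tids.
>         query_unihashes_batch(tids) is a hypothetical batched query.
>         """
>         dealtwith = set()
>         todeal = set(runtaskentries)
>         while todeal:
>             # Every task whose dependencies are already resolved can go
>             # into the same round trip, since its hash only depends on
>             # results we already have.
>             level = {tid for tid in todeal
>                      if not (runtaskentries[tid].depends - dealtwith)}
>             if not level:
>                 raise RuntimeError("dependency cycle in the task graph")
>             # One batched request per level instead of one per task. A
>             # local cache mirroring the central server could be consulted
>             # here first, before going over the network.
>             query_unihashes_batch(sorted(level))
>             dealtwith |= level
>             todeal -= level
>
> In the worst case (one long dependency chain) this degenerates back to one
> request per task, but for a typical task graph the number of round trips
> drops to the depth of the graph rather than the number of tasks.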
Gotcha, unfortunately I don't have any ideas right now :/.

Regarding the first issue, any ideas there?

Thanks,
Chris