andreaslochbihler-da commented on code in PR #1841:
URL: https://github.com/apache/pekko/pull/1841#discussion_r2097793122


##########
stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/HubSpec.scala:
##########
@@ -561,6 +561,49 @@ class HubSpec extends StreamSpec {
       out.expectComplete()
     }
 
+    "handle unregistration concurrent with registration" in {
+
+      var sinkProbe1: TestSubscriber.Probe[Int] = null
+
+      def registerConsumerCallback(id: Long): Unit = {
+        if (id == 1) {
+          sinkProbe1.cancel()
+          Thread.sleep(10)
+        }
+      }
+
+      val in = TestPublisher.probe[Int]()
+      val hubSource = Source
+        .fromPublisher(in)
+        .runWith(Sink.fromGraph(new BroadcastHub[Int](0, 2, registerConsumerCallback)))
+
+      // Put one element into the buffer
+      in.sendNext(15)
+
+      // add a consumer to receive the first element
+      val sinkProbe0 = hubSource.runWith(TestSink.probe[Int])
+      sinkProbe0.request(1)
+      sinkProbe0.expectNext(15)
+      sinkProbe0.cancel()
+
+      // put more elements into the buffer
+      in.sendNext(16)
+      in.sendNext(17)
+      in.sendNext(18)
+
+      // Add another consumer and kill it during registration
+
+      sinkProbe1 = hubSource.runWith(TestSink.probe[Int])
+      Thread.sleep(100)

Review Comment:
   > the reliance on short sleeps will make this test brittle
   
   I don't think the test is going to be flaky. It tests normal behavior of the 
`BroadcastHub` and should therefore pass even if you remove all the sleeps; the 
delays are only there to make a certain interleaving more likely. You could 
argue that the short sleeps make the test brittle as a regression test, because 
there is a chance it would still pass even if a regression were introduced at 
some point in the future.
   
   If I wanted to make this test rigorous, I would have to add more 
instrumentation hooks to `BroadcastHub`. However, I could not find any prior 
art on how Pekko's unit tests reliably trigger race conditions in 
implementations. Do you have a pointer to how this is normally done?
   
   I have added this test primarily to demonstrate that there is a problem with 
the existing implementation. I'd also be fine with completely removing it if 
such race conditions are normally not covered by unit tests.
   
   > would it be possible to use Scalatest's eventually support to retry the 
test if it fails?
   
   I think we want the opposite: if this test fails, then there is something 
wrong with the implementation, and we do not want to hide that by retrying the 
test.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
