
New topic #2252

Merged: 27 commits into typelevel:main, Feb 8, 2021
Conversation

SystemFw (Collaborator) commented Feb 6, 2021

No description provided.

Comment on lines 33 to 34
* Topic has built-in back-pressure support implemented as a maximum
* bound (`maxQueued`) that a subscriber is allowed to enqueue.
Contributor:
Suggested change:
- * Topic has built-in back-pressure support implemented as a maximum
- * bound (`maxQueued`) that a subscriber is allowed to enqueue.
+ * Topic has built-in back-pressure support implemented as a maximum
+ * number (`maxQueued`) of requests that a subscriber is allowed to enqueue.

*
* Additionally the subscriber has possibility to terminate whenever size of enqueued elements is over certain size
* by using `subscribeSize`.
* Once that bound is hit, publishing may semantically block until
Contributor:

Suggested change:
- * Once that bound is hit, publishing may semantically block until
+ * Once that bound is reached, any publishing action will be semantically blocked,
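
For context, a minimal usage sketch of the back-pressure behaviour being documented (illustrative only, not code from the PR; it assumes the reworked fs2 `Topic` API with cats-effect `IO`, and the values, timings and startup delay are made up):

import scala.concurrent.duration._
import cats.effect.{IO, IOApp}
import fs2.Stream
import fs2.concurrent.Topic

object BackpressureSketch extends IOApp.Simple {
  def run: IO[Unit] =
    Topic[IO, Int].flatMap { topic =>
      // Slow subscriber whose internal queue is bounded at maxQueued = 1,
      // so unconsumed elements back up almost immediately.
      val slowSubscriber =
        topic
          .subscribe(maxQueued = 1)
          .evalMap(i => IO.sleep(100.millis) >> IO.println(s"got $i"))

      // Once the subscriber's bound is reached, publishing is semantically
      // blocked until that subscriber dequeues an element.
      val producer =
        Stream
          .range(0, 10)
          .through(topic.publish)
          .delayBy(50.millis) // simplification: give the subscriber time to attach

      slowSubscriber.take(10).concurrently(producer).compile.drain
    }
}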

Comment on lines +110 to 112
subs.foldLeft(F.unit) { case (op, (_, q)) =>
  op >> q.offer(a)
}
Contributor:
I may be approaching this from afar, but is this a case of traverse_?

SystemFw (Collaborator, Author):
it is traverse_, but there is no Traverse[LongMap]
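
For illustration, the two formulations side by side (a standalone sketch, not the PR's code; `Queue` here is a stand-in for the Topic's per-subscriber queue type):

import cats.effect.IO
import cats.effect.std.Queue
import cats.syntax.all._
import scala.collection.immutable.LongMap

// What the PR does: fold the publish effects together directly,
// since cats provides no Traverse[LongMap] instance.
def publishFold[A](subs: LongMap[Queue[IO, A]], a: A): IO[Unit] =
  subs.foldLeft(IO.unit) { case (op, (_, q)) => op >> q.offer(a) }

// The traverse_ equivalent, at the cost of materialising the values first.
def publishTraverse[A](subs: LongMap[Queue[IO, A]], a: A): IO[Unit] =
  subs.values.toList.traverse_(_.offer(a))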

Comment on lines 51 to 54
* If any of the subscribers is over the `maxQueued` limit, this
* will wait to complete until that subscriber processes enough of
* its elements such that `a` is enqueued.
*
Contributor:
Suggested change:
- * If any of the subscribers is over the `maxQueued` limit, this
- * will wait to complete until that subscriber processes enough of
- * its elements such that `a` is enqueued.
- *
+ * This `F` operation does not complete until after the
+ * given element has been enqueued on all subscribers.
+ * If enqueuing on any subscriber is semantically blocked
+ * (because that subscriber is at its `maxQueued` limit),
+ * then this operation will also be blocked, waiting on that
+ * subscriber to dequeue an element.

One further clarification: does the whole Topic ensure that there is at most one call to publish1 at any given time?

SystemFw (Collaborator, Author):
no, you can have multiple producers on a Topic, although I imagine it to be rarer

diesalbla (Contributor) commented Feb 6, 2021:
In that case, is there a risk that different subscribers may receive messages from different producers in a different order? Curious to learn what the guarantees are, from a "message delivery" PoV.

SystemFw (Collaborator, Author):
Yeah, there is (and there always has been). I think fixing it would imply either enforcing a single producer, if you still want full broadcast, or a KeyedTopic to do sharding à la Kafka.

SystemFw (Collaborator, Author):
@mpilquist wdyt about this? Should we enforce a single producer, or just document it and leave it up to the user?
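
To picture the scenario under discussion (a standalone sketch, not code from the PR; values, subscriber names and the startup delay are made up): two producers publish concurrently to one Topic, every subscriber receives every element, and each producer's own elements arrive in order, but the interleaving of the two producers can differ from one subscriber to another.

import scala.concurrent.duration._
import cats.effect.IO
import fs2.Stream
import fs2.concurrent.Topic

val twoProducers: IO[Unit] =
  Topic[IO, String].flatMap { topic =>
    val producerA = Stream("a1", "a2", "a3").covary[IO].through(topic.publish)
    val producerB = Stream("b1", "b2", "b3").covary[IO].through(topic.publish)

    // Each subscriber prints the six elements it receives; the relative order
    // of the "a" and "b" elements may differ between sub-1 and sub-2.
    def subscriber(name: String) =
      topic
        .subscribe(maxQueued = 10)
        .take(6)
        .evalMap(msg => IO.println(s"$name received $msg"))

    Stream(subscriber("sub-1"), subscriber("sub-2"))
      .covary[IO]
      .parJoinUnbounded
      .concurrently(
        producerA.merge(producerB).delayBy(50.millis) // simplification: let both subscribers attach first
      )
      .compile
      .drain
  }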

.map { pubSub =>
/** Constructs a Topic */
def apply[F[_], A](implicit F: Concurrent[F]): F[Topic[F, A]] =
F.ref(LongMap.empty[Q[F, A]] -> 1L)
SystemFw (Collaborator, Author):
@mpilquist I don't think we will have issues here, Long is very big (even though we only use half of its range), but I'd welcome a second opinion.

Member:
No concerns from me
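
A rough sketch of the id-allocation scheme being discussed (names and the queue type are illustrative, not the PR's exact internals): the state Ref pairs the LongMap of subscriber queues with the next id, and each new subscriber takes the current id and bumps the counter.

import cats.effect.{Concurrent, Ref}
import cats.syntax.all._
import scala.collection.immutable.LongMap

// Q is a stand-in for whatever per-subscriber queue the Topic keeps.
def addSubscriber[F[_]: Concurrent, Q](
    state: Ref[F, (LongMap[Q], Long)],
    mkQueue: F[Q]
): F[Long] =
  mkQueue.flatMap { q =>
    state.modify { case (subs, nextId) =>
      ((subs.updated(nextId, q), nextId + 1), nextId)
    }
  }

// Even allocating a billion ids per second, exhausting half of Long's range
// (2^62, roughly 4.6e18) would take about 146 years, hence "Long is very big".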

mpilquist (Member) commented Feb 7, 2021 via email

SystemFw merged commit eec468f into typelevel:main on Feb 8, 2021