shift requirements to dev #309

Merged: 2 commits into main, Oct 16, 2024

Conversation

@prjemian (Contributor)

Both pyarrow and pandas were added as package requirements in commit 672f579 as part of PR #301. Neither of these is imported by this package. Shift both of them to requirements-dev.txt.

Motivation and Context

We ran into an installation dependency problem in prjemian/model_instrument#5 (comment). On inspection, these do not appear to be real requirements of this package.
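
A quick way to verify the claim above (a sketch, assuming it is run from the repository root; the scan logic is illustrative, not part of this PR):

    # Sketch: scan the package source for pyarrow/pandas imports.
    # If the claim above holds, this prints nothing.
    import pathlib
    import re

    pattern = re.compile(r"^\s*(?:import|from)\s+(?:pyarrow|pandas)\b", re.M)
    for path in pathlib.Path("bluesky_queueserver").rglob("*.py"):
        if pattern.search(path.read_text()):
            print(path)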

@prjemian prjemian requested review from tacaswell and dmgav October 14, 2024 20:05
@prjemian prjemian self-assigned this Oct 14, 2024
@tacaswell (Collaborator)

#308 has a fix for the doc failure.

@tacaswell (Collaborator)

I suspect a few more can go (like mpl, skimage, and databroker).

@prjemian prjemian changed the title shift 2 requirements to dev shift requirements to dev Oct 14, 2024
@prjemian (Contributor, Author)

databroker is used by the worker:

    (test) prjemian@arf:~/.../Bluesky/bluesky-queueserver$ git grep databroker | grep import
    bluesky_queueserver/manager/tests/test_gen_lists.py:from databroker.v0 import Broker
    bluesky_queueserver/manager/tests/test_profile_ops.py:from databroker.v0 import Broker
    bluesky_queueserver/manager/tests/test_profile_ops.py:from databroker.v0 import Broker
    bluesky_queueserver/manager/worker.py:                            from databroker import Broker
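
Note that the worker.py hit is a deferred import: databroker is pulled in only when the configuration asks for it (see the code quoted later in this thread). A minimal sketch of that pattern, with a hypothetical helper name rather than the actual worker code:

    # Sketch of the deferred-import pattern (hypothetical helper, not
    # the actual worker code): databroker is imported only on demand.
    def subscribe_databroker(run_engine, config_dict):
        config_name = config_dict.get("databroker", {}).get("config")
        if config_name is None:
            return None
        from databroker import Broker  # deferred import

        db = Broker.named(config_name)
        run_engine.subscribe(db.insert)
        return db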

@prjemian (Contributor, Author)

Let's get these changes through; that is probably enough for the PR that triggered this. Any other candidates can be noted in a new issue/branch/PR.

@dmgav (Contributor) commented Oct 16, 2024

For the future: the following code was added at the very start of the project, serves no practical purpose, and should be removed, along with the respective parameters. The RE instance is always created in the startup code, as are all subscriptions. Built-in features like these only confuse beginners who look at the examples.

    else:
        # Instantiate a new Run Engine and Data Broker (if needed)
        md = {}
        if self._config_dict["use_persistent_metadata"]:
            # This code is temporarily copied from 'nslsii' before better solution for keeping
            #   continuous sequence Run ID is found. TODO: continuous sequence of Run IDs.
            directory = os.path.expanduser("~/.config/bluesky/md")
            os.makedirs(directory, exist_ok=True)
            md = PersistentDict(directory)

        self._RE = RunEngine(md)
        self._re_namespace["RE"] = self._RE

        def factory(name, doc):
            # Documents from each run are routed to an independent
            #   instance of BestEffortCallback
            bec = BestEffortCallback()
            if not self._use_ipython_kernel or not ipython_matplotlib:
                bec.disable_plots()
            return [bec], []

        # Subscribe to Best Effort Callback in the way that works with multi-run plans.
        rr = RunRouter([factory])
        self._RE.subscribe(rr)

        # Subscribe RE to databroker if config file name is provided
        self._db = None
        if "databroker" in self._config_dict:
            config_name = self._config_dict["databroker"].get("config", None)
            if config_name:
                logger.info("Subscribing RE to Data Broker using configuration '%s'.", config_name)
                from databroker import Broker

                self._db = Broker.named(config_name)
                self._re_namespace["db"] = self._db
                self._RE.subscribe(self._db.insert)

        if "kafka" in self._config_dict:
            logger.info(
                "Subscribing to Kafka: topic '%s', servers '%s'",
                self._config_dict["kafka"]["topic"],
                self._config_dict["kafka"]["bootstrap"],
            )
            kafka_publisher = kafkaPublisher(
                topic=self._config_dict["kafka"]["topic"],
                bootstrap_servers=self._config_dict["kafka"]["bootstrap"],
                key="kafka-unit-test-key",
                # work with a single broker
                producer_config={"acks": 1, "enable.idempotence": False, "request.timeout.ms": 5000},
                serializer=partial(msgpack.dumps, default=mpn.encode),
            )
            self._RE.subscribe(kafka_publisher)

        if "zmq_data_proxy_addr" in self._config_dict:
            from bluesky.callbacks.zmq import Publisher

            publisher = Publisher(self._config_dict["zmq_data_proxy_addr"])
            self._RE.subscribe(publisher)
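
For context, a minimal sketch of the startup-code equivalent described above (an illustration, not taken from this repository): the Run Engine and its subscriptions are created directly in the user's startup script.

    from bluesky import RunEngine
    from bluesky.callbacks.best_effort import BestEffortCallback

    RE = RunEngine({})           # Run Engine created explicitly in startup code
    bec = BestEffortCallback()   # live table/plot feedback for each run
    RE.subscribe(bec)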

@prjemian (Contributor, Author)

@dmgav, @tacaswell -- Thanks!

@prjemian prjemian merged commit af0b4e7 into main Oct 16, 2024
63 of 70 checks passed
@prjemian prjemian deleted the dev-requirements-pyarrow branch October 16, 2024 03:10