
Rename bibdata-staging.princeton.edu to bibdata-staging.lib.princeton.edu (#2485)

Bibdata staging is on adc-dev and uses the .lib domain.
christinach authored Sep 18, 2024
1 parent 8ab0d87 commit 9a2a985
Showing 8 changed files with 12 additions and 12 deletions.
2 changes: 1 addition & 1 deletion config/environments/staging.rb
@@ -87,5 +87,5 @@
# Do not dump schema after migrations.
config.active_record.dump_schema_after_migration = false

- # config.action_mailer.default_url_options = { host: ENV["APPLICATION_URL"] || "bibdata-staging.princeton.edu", protocol: "https" }
+ # config.action_mailer.default_url_options = { host: ENV["APPLICATION_URL"] || "bibdata-staging.lib.princeton.edu", protocol: "https" }
end
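The only change in this hunk is inside a commented-out line, but it records the intended mailer host. A sketch of how that ENV fallback resolves, with illustrative values only:

```ruby
# Illustration of the fallback pattern in the commented-out config line above.
host = ENV["APPLICATION_URL"] || "bibdata-staging.lib.princeton.edu"
# With APPLICATION_URL unset, mailer URL helpers would generate links like:
#   https://bibdata-staging.lib.princeton.edu/events
```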
4 changes: 2 additions & 2 deletions docs/alma.md
@@ -19,8 +19,8 @@ Use your netid and password to login and access [Alma Development instance](http
4. Click the ellipsis button.
5. Click Run.
This will trigger an incremental job in the alma sandbox. It takes around 45-60 minutes to complete.
- If there are updated records then in [bibdata staging events](https://bibdata-staging.princeton.edu/events) a new event will be created with the 'dump type': 'Changed Records'. This event holds a dump file from the incremental dump that was triggered in the alma sandbox. [Example](https://bibdata-staging.princeton.edu/dumps/1124.json) with two dump_files.
- In [bibdata staging sidekiq](https://bibdata-staging.princeton.edu/sidekiq) you can see the indexing progress. Keep in mind that it is fast and you might not notice the indexing job in the dashboard.
+ If there are updated records then in [bibdata staging events](https://bibdata-staging.lib.princeton.edu/events) a new event will be created with the 'dump type': 'Changed Records'. This event holds a dump file from the incremental dump that was triggered in the alma sandbox. [Example](https://bibdata-staging.lib.princeton.edu/dumps/1124.json) with two dump_files.
+ In [bibdata staging sidekiq](https://bibdata-staging.lib.princeton.edu/sidekiq) you can see the indexing progress. Keep in mind that it is fast and you might not notice the indexing job in the dashboard.
The indexing process uses the value of the env SOLR_URL that you can see if you ssh in bibdata-staging1.
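The SOLR_URL mentioned above can also be checked from a Rails console on the VM; a minimal sketch, assuming the deploy layout `/opt/bibdata/current` used elsewhere in these docs:

```ruby
# Sketch: confirm which Solr collection staging indexes into.
# From deploy@bibdata-staging1, in /opt/bibdata/current, via `bundle exec rails c`.
ENV['SOLR_URL']
# Expected value (per docs/test_indexing.md, not confirmed here):
# => "http://lib-solr8d-staging.princeton.edu:8983/solr/catalog-staging"
```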

## Accessing the Alma Production instance
2 changes: 1 addition & 1 deletion docs/augment_the_subject.md
@@ -22,4 +22,4 @@
* Run the rspec tests tagged "indexing" `bundle exec rspec --tag indexing`
* If there are failing tests, work to get them passing
* On a branch, commit the changes resulting from these steps and open a pull request
- 1. Deploy the branch to bibdata-staging, and test according to the practices in the test_indexing.md file.
+ 1. Deploy the branch to the Bibdata staging environment, and test according to the practices in the test_indexing.md file.
4 changes: 2 additions & 2 deletions docs/database_load_data.md
@@ -58,7 +58,7 @@ e.g.:
`ssh pulsys@bibdata-staging1`
`sudo service nginx start`

- 7. Go to `https://bibdata-staging.princeton.edu/events` and make sure the application is working as expected and lists all the events that the production site has.
+ 7. Go to `https://bibdata-staging.lib.princeton.edu/events` and make sure the application is working as expected and lists all the events that the production site has.

8. The files that are connected to these events exist in bibdata production `/data/bibdata_files`.
For example, you want to test an issue on staging using the event with ID 6248:
@@ -68,7 +68,7 @@

2. scp the file into one of the bibdata staging VMs:
- scp the file to your local and then to `deploy@bibdata-staging1:/data/bibdata_files`
- - Visit https://bibdata-staging.princeton.edu/dumps/6248.json. The webpage should not error. You can also confirm that the file is attached to this event by searching the bibdata staging DB.
+ - Visit https://bibdata-staging.lib.princeton.edu/dumps/6248.json. The webpage should not error. You can also confirm that the file is attached to this event by searching the bibdata staging DB.
- `deploy@bibdata-staging1:/opt/bibdata/current$ bundle exec rails c`

- `DumpFile.where(dump_id: 6248)`: in the `path` attribute you should see `incremental_36489280620006421_20240423_130418[009]_new.tar.gz`.
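A minimal console sketch of that check, assuming `DumpFile` exposes the `path` column referenced above:

```ruby
# Sketch: verify the dump file is attached to event/dump 6248.
# From deploy@bibdata-staging1, in /opt/bibdata/current, via `bundle exec rails c`.
DumpFile.where(dump_id: 6248).pluck(:path)
# Expected to include the copied file, e.g.:
# => ["/data/bibdata_files/incremental_36489280620006421_20240423_130418[009]_new.tar.gz"]
```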
4 changes: 2 additions & 2 deletions docs/location_changes.md
@@ -28,15 +28,15 @@
2. Stop the workers (this step is optional in staging):
- cd in your local princeton_ansible directory → pipenv shell → `ansible bibdata_staging -u pulsys -m shell -a "sudo service bibdata-workers stop"`. (Ignore the console error for the bibdata staging web servers. They don't run the worker service.)

- 3. Connect in one of the [bibdata-staging workers](https://github.com/pulibrary/princeton_ansible/blob/main/inventory/all_projects/bibdata#L9C1-L10):
+ 3. Connect in one of the [bibdata_staging workers](https://github.com/pulibrary/princeton_ansible/blob/main/inventory/all_projects/bibdata#L9C1-L10):

- `ssh deploy@bibdata-worker-staging1`
- `cd /opt/bibdata/current`

4. Run the following rake task to delete and repopulate the locations in the bibdata staging database:
`RAILS_ENV=production bundle exec rake bibdata:delete_and_repopulate_locations`

- 5. Review the location changes in [Bibdata staging](https://bibdata-staging.princeton.edu/).
+ 5. Review the location changes in [Bibdata staging](https://bibdata-staging.lib.princeton.edu/).

6. If in step 2 you stopped the workers then start the workers:
- cd in your local princeton_ansible directory → pipenv shell → `ansible bibdata_staging -u pulsys -m shell -a "sudo service bibdata-workers start"`. (Ignore the console error for the bibdata staging web servers. They don't run the worker service.)
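After the workers are back up, one way to sanity-check the repopulated data is a pair of counts from the Rails console. A sketch only; the `Locations::` model names are assumptions about the app's schema, not something this diff confirms:

```ruby
# Sketch: spot-check that locations were repopulated (model names are assumed).
Locations::Library.count          # expect a non-zero count after the rake task
Locations::HoldingLocation.count  # expect the usual number of holding locations
```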
2 changes: 1 addition & 1 deletion docs/missing_events.md
@@ -48,7 +48,7 @@ You can adjust the value in the UI but also remember to adjust it in the codebase
- If you trigger a Publishing job in the Princeton Alma Sandbox and you are missing the event/dump with the dumpFile then follow the same steps as above using the following:

- [Bibdata staging VMs](https://github.com/pulibrary/princeton_ansible/blob/main/inventory/all_projects/bibdata#L6-L10)
- - [Bibdata staging events page](https://bibdata-staging.princeton.edu/events)
+ - [Bibdata staging events page](https://bibdata-staging.lib.princeton.edu/events)
- [Princeton Alma Sandbox](https://princeton-psb.alma.exlibrisgroup.com/)
- [alma-webhook-monitor-staging](https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#logsV2:log-groups/log-group/$252Faws$252Flambda$252Falma-webhook-monitor-staging-WebhookReceiver-za2o0tUQ0XNM)
- [AWS SQS AlmaBibExportStaging](https://us-east-1.console.aws.amazon.com/sqs/v3/home?region=us-east-1#/queues/https%3A%2F%2Fsqs.us-east-1.amazonaws.com%2F080265008837%2FAlmaBibExportStaging.fifo)
4 changes: 2 additions & 2 deletions docs/test_indexing.md
@@ -30,7 +30,7 @@ Follow: [Incremental job in Alma Sandbox](https://github.com/pulibrary/bibdata/b
4. Assuming that the env SOLR_URL=http://lib-solr8d-staging.princeton.edu:8983/solr/catalog-staging find the index_manager that is currently used. `index_mgr=IndexManager.all.where(solr_collection: "http://lib-solr8d-staging.princeton.edu:8983/solr/catalog-staging").first`
5. Make sure that `index_mgr.dump_in_progress_id=nil` and `index_mgr.in_progress = false`. If not set them and save.
6. Find the previous event_id (which equals the dump_id) from the event you want to test reindexing and that has dump type 'Changed Records' and set it. For example if the previous dump that was indexed has id 1123 then `index_mgr.last_dump_completed_id = 1123` and `index_mgr.save`.
- 7. Check [Bibdata staging sidekiq](https://bibdata-staging.princeton.edu/sidekiq/busy) and click 'Live Poll'. The indexing process is fast.
- 8. Run `index_mgr.index_remaining!`. In [Bibdata staging sidekiq](https://bibdata-staging.princeton.edu/sidekiq/busy) you will see a new job.
+ 7. Check [Bibdata staging sidekiq](https://bibdata-staging.lib.princeton.edu/sidekiq/busy) and click 'Live Poll'. The indexing process is fast.
+ 8. Run `index_mgr.index_remaining!`. In [Bibdata staging sidekiq](https://bibdata-staging.lib.princeton.edu/sidekiq/busy) you will see a new job.
9. One way to test that the dump was indexed is to run `index_mgr.reload`. You should see that `last_dump_completed_id` is the event/dump id you wanted to test reindexing. `in_progress` should be `false`.
10. Another way would be to download the dump_file and manually check the timestamp of some of the mmsids in catalog-staging or in solr.
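Steps 4 through 9 can be read as a single console session. A consolidated sketch under the same SOLR_URL assumption, with 1123 standing in for your previous dump id:

```ruby
# Consolidated sketch of steps 4-9 (same assumptions as the numbered steps above).
index_mgr = IndexManager.where(solr_collection: ENV['SOLR_URL']).first

# Step 5: clear any stale in-progress state.
index_mgr.update(dump_in_progress_id: nil, in_progress: false)

# Step 6: point last_dump_completed_id at the dump *before* the one under test.
index_mgr.update(last_dump_completed_id: 1123)

# Step 8: enqueue indexing of everything after last_dump_completed_id.
index_mgr.index_remaining!

# Step 9: once the Sidekiq job finishes, confirm the run completed.
index_mgr.reload
index_mgr.last_dump_completed_id # should now be the dump id you tested
index_mgr.in_progress            # => false
```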
2 changes: 1 addition & 1 deletion lib/tasks/orangeindex.rake
@@ -10,7 +10,7 @@ require_relative '../../marc_to_solr/lib/cache_manager'
require_relative '../../marc_to_solr/lib/cache_map'
require_relative '../../marc_to_solr/lib/composite_cache_map'

- default_bibdata_url = 'https://bibdata-staging.princeton.edu'
+ default_bibdata_url = 'https://bibdata-staging.lib.princeton.edu'
bibdata_url = ENV['BIBDATA_URL'] || default_bibdata_url

default_solr_url = 'http://localhost:8983/solr/blacklight-core-development'
