Once the file paths to each instrument's archived data have been identified, and the individual instrument R script and .qmd file have been updated to reflect them, it is possible to render the individual instrument webpage on the local computer. But when multiple instruments contribute data to the dashboard home page and individual instrument pages, we need a way to ensure the data from each instrument is transferred to GitHub, allowing Quarto to render the updated version. This can be done either manually or in an automated fashion. We will explore both options.

Manual Transfer

We will first navigate to the InstrumentQC folder and locate the Examples_staff.qmd file. We will duplicate it, creating a new copy simply called staff.qmd. This name is set within the .gitignore file to be ignored, allowing local copies with different file paths to be present on each instrument computer without causing issues in version control.
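If you prefer working from the R console, the duplication is a one-liner (file names as in the repository):

```r
# Duplicate the template; staff.qmd is listed in .gitignore, so this copy
# stays local to the instrument computer.
file.copy(from = "Examples_staff.qmd", to = "staff.qmd")
```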

Once we open the duplicated and renamed staff.qmd file, we find a single code chunk with several lines of code. This chunk is meant to be executed by staff after morning QC has been carried out on the instrument, by clicking the green "Run Current Chunk" play button at the upper-right of the code chunk. Running it finds the file path to that individual instrument's .R script and sources it. In the process, the script pulls from the GitHub repository, processes the new data, and then pushes the updated data back to the GitHub repository.
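As a rough sketch, the chunk amounts to sourcing the instrument script; the script name and path below are hypothetical and will differ on each computer:

```r
# Hypothetical example: point at this instrument's R script and run it.
# Sourcing it pulls from GitHub, processes the new QC data, and pushes
# the updates back to the repository.
TheScript <- file.path("C:", "Users", "Instrument", "Documents",
                       "InstrumentQC", "Aurora3L.R")
source(TheScript)
```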

Once the source command has finished executing, you have the option to create a Flag.csv file. If automated transfer is set up, this flag prevents it from running again later that day (more below).
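A minimal sketch of creating the flag, assuming it lives in the repository folder:

```r
# Hypothetical: record today's date in Flag.csv so the scheduled run
# knows the manual transfer has already happened.
write.csv(data.frame(Date = Sys.Date()), file = "Flag.csv",
          row.names = FALSE)
```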

Sending the data manually has advantages: you are already at the instrument during the QC process, and the extra minute needed to run the code chunk adds little inconvenience. The drawbacks are that you (and anyone else doing QC) must be comfortable interacting with RStudio, and must remember to send the data before a user gets on the instrument.

Automating Transfer

The second option is to set up an automated transfer of the QC data from each instrument to the GitHub repository. This is possible using the Windows Task Scheduler, the same system that schedules your computer's updates and other background tasks. The time at which the automated transfer is carried out can be designated by the user, and setup is carried out through the R package taskscheduleR.

In the context of our institution, QC on all instruments is generally done by 10:30 AM. Consequently, we schedule the automated transfer of the QC data to begin at 10:30 AM. When it works, the data is transferred from all instruments at the designated time daily without any extra effort, making updating of the dashboard simple.

Unfortunately, Task Scheduler opens a black terminal window while the task is actively running. This does not affect acquisition on the cytometer, but it can startle anyone who does not expect it. In our experience, when a user is on the instrument at the time, they will either panic and immediately close the black terminal window, or grow annoyed after ten seconds and close it down. At that point, no data will be automatically transferred to the GitHub repository for that day, barring manual staff intervention. Emails to users and a warning note on the instrument computer help reduce how often this scenario occurs, but user community awareness and buy-in are still required. Alternatively, you can schedule the transfer during non-operation hours and accept that the dashboard data will lag behind real time.

To begin, open the TaskSchedules.qmd file within InstrumentQC. First, call library(taskscheduleR). Then verify that the file path to the instrument R script is correctly updated. Next, modify the first code chunk to set the desired task name and time, and run that chunk. Then check that the task was registered using the listing command. If you want to remove a scheduled task, run the third set of commands with the previous task name; once confirmation is received, the previously scheduled task is removed. A sketch of these commands follows.
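The following is a hedged sketch of those three steps using taskscheduleR; the task name and script path are illustrative:

```r
library(taskscheduleR)

# Hypothetical path to this instrument's R script.
TheScript <- file.path("C:", "Users", "Instrument", "Documents",
                       "InstrumentQC", "Aurora3L.R")

# 1. Schedule the script to run daily at 10:30 AM.
taskscheduler_create(taskname = "InstrumentQC_Aurora3L",
                     rscript = TheScript,
                     schedule = "DAILY",
                     starttime = "10:30")

# 2. Verify the task was registered.
Tasks <- taskscheduler_ls()
Tasks[grepl("InstrumentQC", Tasks$TaskName), ]

# 3. Remove the scheduled task when no longer needed.
taskscheduler_delete(taskname = "InstrumentQC_Aurora3L")
```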

Hybrid

In our setting, automated transfer is set to run at 10:30 AM, staggered by two minutes for each instrument to avoid version-control conflicts when the scripts pull and push data from the GitHub repository. We also keep the staff.qmd file so that the data can be sent manually if the SRL staff finish QC early or a user shuts down the scheduled run. To avoid both happening on the same day, the staff.qmd file generates a Flag.csv after manual completion. Within the individual R script, the first condition checks for the presence of this file and, if it is found, does not proceed with the automated QC run at the designated time (see the sketch below).
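A minimal sketch of that guard, assuming Flag.csv sits in the repository folder:

```r
# Hypothetical guard at the top of the instrument R script: skip the
# automated run if staff already sent the data manually today.
if (file.exists("Flag.csv")) {
  message("Manual transfer already completed today; skipping automated run.")
} else {
  # ... pull from GitHub, process the new QC data, push the updates ...
}
```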

We then set another scheduled task for later that same day to remove the Flag.csv file, resetting the process so that automated QC can proceed the next morning. To do this, go into Examples_Flag.qmd and copy the example into a new .R file that we call FlagRemoval.R (also covered by the .gitignore file to avoid file-path conflicts across instruments). We then add a new scheduled task to run this R script later in the afternoon.
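FlagRemoval.R can be as simple as the following; the path is hypothetical and should match your instrument computer:

```r
# Hypothetical FlagRemoval.R: delete the flag so tomorrow's automated
# run can proceed. A full path is used because the scheduled task may
# not start in the repository folder.
FlagPath <- file.path("C:", "Users", "Instrument", "Documents",
                      "InstrumentQC", "Flag.csv")
if (file.exists(FlagPath)) file.remove(FlagPath)
```

Registering it follows the same pattern as before; the afternoon time is an example:

```r
library(taskscheduleR)
taskscheduler_create(taskname = "InstrumentQC_FlagRemoval",
                     rscript = file.path("C:", "Users", "Instrument",
                                         "Documents", "InstrumentQC",
                                         "FlagRemoval.R"),
                     schedule = "DAILY",
                     starttime = "16:00")
```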

Finally, to reduce the chance that the morning automated processing is interrupted by a user, we set a scheduled task to pull from the GitHub repository at 6 AM. This brings in the updates from the other instruments and the dashboard website ahead of time, reducing how long the black terminal window stays open during the scheduled 10:30 round.
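One possible sketch of such a pull script, assuming git is available on the system PATH:

```r
# Hypothetical 6 AM pull: bring in overnight changes from the other
# instruments and the rendered dashboard before the 10:30 round.
RepoPath <- file.path("C:", "Users", "Instrument", "Documents",
                      "InstrumentQC")
system(paste("git -C", shQuote(RepoPath), "pull"))
```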

Rendering the Dashboard

Regardless of whether you implement the manual, automated, or hybrid approach, you will end up with the updated QC data from each instrument present on the GitHub repository. These changes are tracked by git as part of version control. We previously discussed the individual instrument pages designated by their .qmd files, but we still need to discuss the home/summary index.qmd page that combines all the data sources.

Briefly, it shows indicator blocks for the individual instruments (color-coded pass, caution, fail). The second column shows the results of that day's QC (with tabsets for each instrument). Finally, six months of QC data for all instruments are visible in the third column.

Additionally, the navbar and footer elements are specified within the _quarto.yaml file, allowing them to be customized, rearranged, and renamed as desired. In our dashboard there are additional tabs: a Data tab, from which individual users can retrieve the data as .csv or .pdf files, and a Help page with general information about the dashboard.

To assemble the dashboard with the updated data, one option is to pull the data to a local computer and then render the project, assembling the website and individual dashboard pages into a single unit. All the changes to the files are then committed and pushed to the GitHub repository. (The alternative, rendering via GitHub Actions, is discussed under Future Directions below.)
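A hedged sketch of that local cycle, using the quarto R package; the repository path and commit message are illustrative:

```r
library(quarto)

# Hypothetical local render-and-push cycle.
setwd(file.path("C:", "Users", "Staff", "Documents", "InstrumentQC"))
system("git pull")    # bring in the latest QC data from all instruments
quarto_render()       # render the Quarto project (here, into the docs folder)
system("git add -A")
system('git commit -m "Update dashboard"')
system("git push")
```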

On the GitHub repository side, one would configure the repository to use GitHub Pages, set to display the contents of the docs folder (containing the rendered .html pages). Upon pushing the updates, GitHub then processes them and displays them as a website.

Future Directions - GitHub Actions

Rendering locally takes computer time and disk space. Additionally, it means each instrument brings in edited versions of the website locally every morning with its initial pull. We are currently working to set up a GitHub Action that would keep the website portion entirely within the GitHub repository (not rendered or copied locally to each instrument, where it takes up additional storage) and would also render at a given time, removing the need to manually render and push the updated dashboard. This is possible thanks to the cloud runners available to public GitHub repositories, each of which is allocated a certain amount of free server time.

The reason this is not currently available is that it is more complicated technically and I have been busy writing. More broadly, the cloud instance needs to install R and all the required R packages on each and every run. And the Luciernaga package is still megabytes-heavy, since it is in its testing phase and carries many .fcs files in its extdata folder. Consequently, this is an area with promise that is not yet implemented; stay tuned.