Published: July 2013
Accurate identification of newborns with metabolic disease can significantly improve patient outcomes. Conversely, a missed diagnosis can result in significant morbidity and may even result in death. While a false-positive diagnosis does not carry the burden of increased morbidity or mortality, there are social and psychological costs that may generate significant harm. The Region 4 Stork Collaborative was developed to improve detection of true positive cases of metabolic disease and improve accurate diagnosis. The R4S project uses Mayo-developed software that provides postanalytical interpretation of complex metabolic profiles. The R4S project offers physicians worldwide the opportunity to utilize this software to analyze their patients’ test results, and compare them with other locations’ results.
R4S Collaborative Project Part 4 discusses high-throughput data entry portals. To improve download time, Part 4 has been separated into 2 subjects: 4A focuses on the Tool Runner tool, while 4B focuses on the All Conditions tool.
Presenter: Piero Rinaldo, MD, PhD
Thank you for the introduction. This presentation is the first portion (part A) of the fourth segment of the series describing the products and clinical tools of a newborn screening quality improvement project called Region 4 Stork, or R4S. The title of this presentation is “High-throughput data entry portals.” During editing, this topic was divided into two portions to remain within the required time limits for this type of presentation.
I have a disclosure to make: a provisional patent application related to some of the content of this presentation has been submitted by Mayo Clinic. The title of the application is “Computer-Based Dynamic Data Analysis.”
This presentation continues the overview of the second generation of R4S tools, this time focusing on how to use them effectively in a daily laboratory practice where convenient uploading and rapid, large scale data analysis are highly desirable.
In the previous segment, part III of this series, the tool builder and the two most commonly used tools, the one condition tool and the dual scatter plot, were introduced.
Although the previous segment described how these tools are produced using the tool builder, it provided no information about their use in a laboratory setting. This was done deliberately, to underscore their clinical utility to a user rather than a “producer” of laboratory results. Indeed, the one condition tool and the dual scatter plot can be described as the CLINICIAN TOOLS, tools that can be clinically useful literally at the bedside of the patient after laboratory results have been interpreted and reported. In this segment, the focus is instead on the LABORATORY TOOLS, the tool runner and the all conditions tool.
We begin with the tool runner. This tool allows the simultaneous evaluation of all conditions with an active tool for multiple patients in any number from a few to thousands, typically one 96-well plate at a time.
What is the tool runner? The tool runner is a portal for uploading whole batches of raw data to the R4S website after conversion to a .csv (comma-separated value) file. This type of file can be generated routinely, and automatically, by virtually any operating system of commercially available tandem mass spectrometers, and also by most other types of laboratory instrumentation. The tool automatically calculates every possible score for each case in a batch using all released one condition tools, or a smaller, user-selected panel. The tool runner generates a report of all instances with a score greater than the 1st percentile of the scores of all cases with a given condition. In other words, a score within the range of values obtained for the population of true positive cases.
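The reporting rule just described can be sketched in a few lines of code. This is a hypothetical illustration only: the per-condition thresholds, condition names, and data structures below are placeholders, not the actual R4S implementation or its real score distributions.

```python
# Hypothetical sketch of the tool runner's reporting rule: a score is
# reported only if it reaches the 1st percentile of the scores observed
# in true positive cases of that condition. All values are illustrative.

FIRST_PERCENTILE = {   # per-condition 1st percentile of true positive scores
    "VLCAD": 30,
    "LCHAD_TFP": 5,
    "MCAD": 25,
}

def informative_scores(case_scores):
    """Return only the (condition, score) pairs worth reporting."""
    return {
        condition: score
        for condition, score in case_scores.items()
        if score >= FIRST_PERCENTILE.get(condition, float("inf"))
    }

case = {"VLCAD": 45, "LCHAD_TFP": 2, "MCAD": 0}
print(informative_scores(case))  # only VLCAD clears its threshold
```

Applied to every case in a batch, a filter like this reduces thousands of calculated scores to the handful that actually merit review.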
This slide summarizes the process from raw data to an actionable report: the instrument software generates a .csv file, the file is uploaded to the tool runner on the R4S website, all one condition tools are run simultaneously, and a summary report of informative scores is produced. To emphasize the user-friendly nature of this process, we call it “click, click, done.”
The apparent simplicity of this process from the viewpoint of an average user is quite a contrast to the underlying complexity and magnitude of the data being analyzed. A typical 96-well plate includes approximately 90 patient samples, depending on the number of controls added to each plate (in this case 5: the first 3 and the last 2 wells, shown as blue circles). The total number of analyte results and calculated ratios per plate is also variable, but in most cases it exceeds 100. All things considered, a single plate analyzed for amino acids and acylcarnitines by tandem mass spectrometry routinely generates more than 10,000 results. The magnification of a small portion of this spreadsheet allows a better understanding of the raw data structure.
In this instrument-generated spreadsheet, one patient per row, the following elements are included: the overall file name of the batch (please note that the .WIFF file type is the format of one type of commercial instrument in use in our laboratory; other common types are, for example, “.RAW” and “.D”); the sequence number (here called the sample index); the specimen ID number assigned by the main laboratory information system; the last name of the patient, not shown here for obvious reasons; the sample type (“unknown” indicates a routine first specimen); and finally the analyte abbreviated names and actual results.
Two of these elements, the specimen ID and the patient name, are considered protected health information, or PHI, and as such are not suitable for uploading to a web-based application, even one that is password protected like R4S.
For this reason, instrument software can be programmed to generate a different type of file using the .csv file format. As a reminder, a comma-separated value file stores data in plain-text form: the file is a sequence of characters in which values are separated by a literal comma or a tab. Because ratios are calculated by the R4S tools, only analyte values need to be included, reducing the cumulative number of results to less than half of the original file. Again, magnification of a small portion of this spreadsheet allows a better understanding of the raw data structure.
The data elements of a .csv file are the following: the analyte abbreviated names; the sequence number; the analyte results; and a very important new element, the LOINC codes. LOINC stands for Logical Observation Identifiers Names and Codes, a universal code system for identifying laboratory and clinical observations. As stated on the homepage of the website maintained by the Regenstrief Institute at Indiana University, LOINC provides standardized terms for all kinds of observations and measurements that enable the exchange and aggregation of electronic health data from many independent systems. A LOINC code is unique to a combination of component, system (sample type), scale, and unit of measurement. More information on the application of LOINC codes in newborn screening can be found in this publication.
Going back to the .csv file, the final output is void of any PHI: individual cases are identified only by their sequence number in the batch, information that cannot be traced back to a specific patient.
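The de-identification step just described can be sketched as follows. This is a hypothetical illustration, not the actual instrument export routine: the column names, sample values, and the decision to drop exactly these two fields are assumptions for the example, and the LOINC code columns of the real file are omitted for brevity.

```python
import csv
import io

# Hypothetical sketch of converting an instrument export into the
# PHI-free .csv described above: the specimen ID and patient name
# columns are dropped, leaving only the sequence number and the
# analyte results. Column names and values are placeholders.

PHI_COLUMNS = {"specimen_id", "patient_name"}

def strip_phi(instrument_rows):
    """Yield rows containing only the non-PHI fields."""
    for row in instrument_rows:
        yield {k: v for k, v in row.items() if k not in PHI_COLUMNS}

rows = [
    {"sample_index": "1", "specimen_id": "2013-001",
     "patient_name": "DOE", "C16": "0.85", "C18": "0.42"},
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["sample_index", "C16", "C18"])
writer.writeheader()
for row in strip_phi(rows):
    writer.writerow(row)
print(out.getvalue())  # sequence number and analyte values only
```

The resulting text contains nothing that can be traced back to a specific patient; the batch sequence number is the only case identifier that survives.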
When ready, the .csv file needs to be uploaded to the data entry portal of the tool runner. A user needs to log in to the R4S website, access the post-analytical tools page, and click the tool runner link.
The selection page of the tool runner consists of three sections: they are, from top to bottom, select a tool, tools selected to run, and select data file.
Not all functionalities are accessible to all users. For example, the option to select a not-yet-released tool for testing and validation purposes is limited to those users who have been granted access to the tool builder. The remaining options in this section are relevant to a situation where a user is interested in running a subset of tools instead of the default action, which is to run all of them together. If, for example, an MN user wants to run a single tool created to identify MCAD carriers, he or she would first select fatty acid oxidation as the condition type, then select the desired condition, tool type, and tool version from the drop-down menus. Clicking the “Add” icon activates the chosen tool in the next section.
In the next section, Tools selected to run, the opposite action can be taken, which is to remove one or more of the previously added tools. To continue using the customized subset, the user must select “Run selected tools” instead of the default “Run all single tools”. Two additional functionalities are available: one is to include in the report all scores greater than zero but still below the selected threshold of clinical significance; the other is to convert the standard report into an unprotected Excel file ready to be saved on the user’s computer.
In the third and final section, Select data file, users have the option to select either the file format compatible with derivatized profiles, shown as D within square brackets, or the alternative format that works with underivatized profiles, shown as U within square brackets. The correct format should be selected and displayed automatically if a user with read & write access from the same site has answered the series of questions in the data submission/participant profile menu. If a choice was made in this window, the format is adjusted accordingly in the tool runner. If an answer has not been provided yet, the default format is D. A user of an underivatized method from a site with an incomplete profile would not be prevented from using the tool runner, but an error message would appear explaining that incorrect LOINC codes were submitted for some analytes.
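A format mismatch check of the kind just described could be sketched as follows. This is purely illustrative: the code strings below are invented placeholders, not real LOINC identifiers, and the per-format lookup tables are assumptions, not the actual R4S validation logic.

```python
# Hypothetical sketch of the derivatized [D] vs. underivatized [U]
# format check: if the site profile indicates one method but the
# uploaded file carries codes from the other format's code set, the
# mismatched analytes are flagged. Codes below are NOT real LOINC
# identifiers; they are placeholders for illustration only.

EXPECTED_CODES = {
    "D": {"C16": "CODE-D-016", "C18": "CODE-D-018"},
    "U": {"C16": "CODE-U-016", "C18": "CODE-U-018"},
}

def mismatched_analytes(file_codes, site_format):
    """Return analytes whose code does not match the site's format."""
    expected = EXPECTED_CODES[site_format]
    return sorted(
        analyte for analyte, code in file_codes.items()
        if expected.get(analyte) not in (None, code)
    )

codes = {"C16": "CODE-D-016", "C18": "CODE-D-018"}
print(mismatched_analytes(codes, "U"))  # both analytes carry [D] codes
```

A check along these lines explains the behavior described above: the upload is not blocked, but the user is told exactly which analytes were submitted with codes from the wrong format.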
When the display shows the “no file chosen” message, the uploading of a batch file is quick and easy once users have created on their computer desktop a shortcut to the server location where the instrument-generated files are stored. A click on the link brings up subfolders by year, by month, and by day. Each subfolder corresponds to a 96-well plate and includes the .csv file ready to be uploaded to the tool runner.
Another seldom-needed feature is the option to exclude from the calculation of scores additional wells of the 96-well plate, beyond the first three and the last two, which are used for quality control purposes. Again, many of these features are included to provide the greatest possible flexibility and meet the diverse needs of a wide range of users, but in reality the processing of a batch should take no more than a few seconds: access the tool runner, select a file, and click “Run Tools”.
After clicking the run tool icon, a report is generated in just a few seconds. The header shows how many profiles were processed (91) and how many scores were calculated (3731). The .csv file name and the batch ID of the original instrument-generated file are also included to facilitate tracking. Before discussing the report in more detail, it is important to explain how the system is designed to react to an incomplete data set, in order to prevent the incorrect assumption that a condition tool was not informative when one or more results were actually missing. In this instance, everything was in place, so no alert message was displayed to the user. On the other hand, the uploading of the same file after a single value had been removed randomly, in this example just the numerical result of the acylcarnitine C5:1 in row 23,
triggers a report with a clear indication of the underlying problem. The header now includes a new descriptive element, the “Not run count”, and a table listing the specific tools that were inactive. If the C5:1 value is removed from all rows, the error message is phrased accordingly, with the addition of a note reminding the user that any inactive tool can be modified to be site-specific and not inclusive of the missing analyte. As a reminder, tool customization will be the main topic of the next presentation.
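The “not run” logic described above can be sketched as a simple completeness check. The tool-to-analyte mapping below is a hypothetical example, not the actual R4S tool configuration: which analytes a given tool requires is defined by each tool's builder.

```python
# Hypothetical sketch of the "Not run count" check: a one condition
# tool is marked inactive for the batch if any analyte it requires is
# missing from the uploaded file. The analyte requirements below are
# illustrative placeholders, not real R4S tool definitions.

TOOL_ANALYTES = {
    "IVA": {"C5", "C5:1"},        # assumed requirements for the example
    "VLCAD": {"C14", "C14:1"},
}

def tools_not_run(available_analytes):
    """Return the tools that cannot be scored with the analytes on hand."""
    available = set(available_analytes)
    return sorted(
        tool for tool, required in TOOL_ANALYTES.items()
        if not required <= available
    )

print(tools_not_run(["C5", "C14", "C14:1"]))  # C5:1 missing, so IVA is inactive
```

Flagging the inactive tools explicitly, rather than silently reporting a zero score, is what prevents a missing value from being mistaken for a negative result.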
Going back to the report, when the system is set up properly, truly all it takes to generate a report is three clicks of the mouse. The report shows that out of the initial 91 cases there were only three with informative scores. The first one, sequence ID 26, triggered a score for the VLCAD carrier tool, but not for true VLCAD. For this reason, this finding could be discarded without further evaluation. The VLCAD carrier tool is site specific; only MN users can see it. It is not available to all users because it generates a score frequently, and could have the opposite effect of the real goal of the tool runner, which is to limit the number of unnecessary referrals. The second case, sequence ID 61, indicates multiple amino acid elevations that fit the pattern of total parenteral nutrition. In our experience, this result is sufficient to classify it for what it is: a nutritional artifact of no clinical significance that requires no referral to follow up, and not even the collection of a repeat sample. In this batch, there is a SINGLE case that requires closer evaluation, the patient with sequence ID number 84.
The tool runner report shows an informative score for three conditions: VLCAD deficiency, long-chain 3-hydroxyacyl-CoA dehydrogenase deficiency combined with trifunctional protein deficiency (shown as LCHAD / TFP; from this point forward these conditions will be described simply as LCHAD), and again VLCAD carrier status. Selection of the blue link for VLCAD brings up the data entry window with all the result fields already populated. Clicking the Calculate icon at the bottom of the window produces the one condition tool for VLCAD deficiency.
This slide shows the connection between the content of the tool runner report and the elements included in the score report of the one condition tool. Three elements cross over: the actual score, calculated against the entire population of true positive cases; the percentile rank among those cases; and, from the interpretation guidelines, the interpretation group (score 30 to 50) that includes the calculated score (45).
It is worthwhile to show a quick reminder of the methodology used to define the interpretation groups in a consistent and objective manner. In the tool builder, the verification of a new tool before release into production is based on two steps called IMPACT and RUN. In the first one, the testing of different rules generates a report called Condition Score Ranges. The green background signifies that the rules did have a positive impact on the scoring. The interpretation guidelines of all R4S tools are based on three percentile values: the 1st, the 10th, and the 25th percentiles of the condition score range. For consistency, all threshold values are rounded to the nearest multiple of 5. A score below the 1st percentile, in this case a score less than 30, is considered not informative and therefore will not be included in the tool runner report unless the user has activated the option to see not informative scores. A score between the 1st and the 10th percentile, or between 28 and 51, rounded to 30 and 50, respectively, is considered informative, with a textual representation of being POSSIBLY indicative of VLCAD deficiency. As the score goes higher, between the 10th and the 25th percentile, it is represented as LIKELY indicative of VLCAD deficiency. Finally, a score greater than the 25th percentile is VERY LIKELY to be consistent with a biochemical diagnosis of VLCAD deficiency. No additional categories are deemed necessary, as scores above the 25th percentile should be increasingly self-evident even when using the most conservative cutoff values. The tool builder allows the inclusion of a header that can be used for disclaimers. In the newborn screening tools, this option is used to warn users that R4S tools have been validated only for neonatal blood spots collected before 10 days of age.
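The mapping from percentile thresholds to interpretation groups can be sketched as follows. The percentile values in the example match the VLCAD numbers quoted above (1st percentile 28, 10th percentile 51); the 25th percentile value and the function itself are illustrative, not the actual R4S code.

```python
# Hypothetical sketch of the interpretation guidelines described
# above: thresholds are the 1st, 10th, and 25th percentiles of the
# condition score range, each rounded to the nearest multiple of 5.

def round5(x):
    """Round to the nearest multiple of 5."""
    return int(5 * round(x / 5))

def interpret(score, p1, p10, p25):
    """Map a score to its textual interpretation group."""
    t1, t10, t25 = round5(p1), round5(p10), round5(p25)
    if score < t1:
        return "not informative"
    if score < t10:
        return "possibly indicative"
    if score < t25:
        return "likely indicative"
    return "very likely indicative"

# VLCAD example from the slide: 1st pct 28 -> 30, 10th pct 51 -> 50;
# the 25th percentile (75 here) is an assumed value for illustration.
print(interpret(45, 28, 51, 75))  # score 45 falls in the 30-50 group
```

Rounding every threshold to a multiple of 5 is what keeps the published interpretation groups (for example, 30 to 50) consistent and easy to communicate across tools.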
Going back to case number 84, this slide shows the three tools with informative scores. This outcome is far from indicating that further action is required. Instead, it should be framed as a differential diagnosis between VLCAD deficiency and VLCAD carrier status, and a verification that the score for LCHAD deficiency, barely above the 1st percentile, is indeed sufficiently significant to warrant a referral to follow up.
As discussed extensively in previous segments of this series, the degree of overlap between reference and disease ranges is the conceptual foundation of the post-analytical tools. In LCHAD deficiency, most of the informative markers, namely C16 and C18 hydroxy long chain species and related ratios, are present under normal circumstances at very low levels. Many participating sites have listed a zero value for their 1st and 10th cumulative reference percentiles, a situation that translates into the peculiar behavior represented in the plot by condition, especially at the low end. Using these skewed reference ranges, this case yields a score barely above the threshold.
Not all sites are affected by this problem, and indeed Minnesota is not one of them. In a case with a borderline score for LCHAD deficiency, MN users are trained to switch to our own percentiles, a change achieved by a single click in the data entry window.
The impact of this customization is shown here. The overlap between the reference and disease ranges of these analytes is reduced significantly. The calculated score (6) is now well below the 1st percentile of the LCHAD score range, and therefore can be interpreted with confidence to be a not informative (in other words, normal) result.
This simple process allowed the exclusion of LCHAD deficiency, leaving unresolved only the differential diagnosis between VLCAD deficiency and VLCAD carrier status. As shown in the previous segment, this can be promptly achieved by switching to the dual scatter plot.
The plot convincingly places this case within the VLCAD carrier cluster. In summary, the post-analytical interpretation of the results of a 96-well plate, 91 patients, can be finalized in less than one minute of work by a user trained to navigate the R4S website and the tools within it. This process is rapid, paperless, and most importantly it provides a mechanism to maintain consistency among different users who may have vastly different expertise, from just a few months to years, if not decades, of experience.
To recapitulate the clinical utility of this functionality, the tool runner allows the simultaneous analysis of large batches of NBS data. While testing one plate at a time is the logical approach in a routine situation, a .csv file in which the data of 20 96-well plates were merged for testing purposes has been analyzed successfully, and a database of approximately 60,000 cases has been processed after the selection of a single one condition tool. As a reminder, the tool runner by default calculates a score with every available one condition tool and generates a summary report of all informative scores; alternatively, it can be applied to a single tool or to any desired combination of general and/or site-specific tools. In our experience, it is not uncommon to see reports like this one, where an entire plate could be resulted as negative because not a single one of the 4095 calculated scores was informative. With the understanding that some users might not be completely comfortable with this type of outcome, it is possible to expand the report to include all scores that were above zero but still below the threshold of clinical significance. Finally, the report of the tool runner can also be exported to Excel for documentation, research, quality assurance, and system evaluation purposes. Since the launch into production of the tool runner, a frequently asked question has been “Is anybody (besides MN) using it?” That is a legitimate question that we are glad to answer.
The tool runner became available to R4S users in November 2011, after being presented at the final R4S user meeting in San Diego. This activity was made possible by the HRSA Regional Collaborative grant that ended its second cycle in May 2012. The tool was utilized 79 times in December 2011 by MN and two other programs. In 2012, however, the tool runner was deployed more than 6,000 times: 700,000 profiles were tested and almost 17 million scores were calculated. In the first two months of 2013, more than 5 million scores have been calculated, an 80% increase when expressed as average utilization per month. To date, the tool runner is used regularly by 21 programs, the majority of them located outside of the United States.
This is the conclusion of the first portion of the fourth presentation of the R4S series. In the second portion, part B, the second type of high throughput data portal, the all conditions tool, will be presented.
Please do not hesitate to contact us if you have any questions or requests related to the content of this presentation. Thank you very much for your attention.