

In the resulting data, 755 of the 905 rows are EVID=0, and the remaining 150 rows are EVID=1. The input data did not contain any IDs that were not used.
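Counts like these can be read directly off the combined data. A minimal sketch, assuming the combined data set is stored in an object called `res` as returned by NMscanData:

```r
## `res` is assumed to be the object returned by NMscanData for this model.
table(res$EVID)   ## counts of observations (EVID=0) and doses (EVID=1)
nrow(res)         ## total number of rows in the combined data
```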

Like the rest of NMdata, this functionality assumes as little as possible about how you work. It assumes nothing about the Nonmem model itself and as little as possible about the organization of data and file paths/names. This makes it powerful for meta analyses and for reading a model developed by someone else - or one written by ourselves back when we did things slightly differently. Default argument values can be configured to match your setup (data standards, directory structure and other preferences). The implementation in NMdata works for the vast majority of models, aims at preventing and checking for as many caveats as possible, and it is fast too.
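Such defaults are set with NMdataConf. The sketch below only shows the `as.fun` option (the class of returned data objects); treat it as an illustration rather than a recommended configuration.

```r
library(NMdata)

## Configure package-wide defaults once, e.g. at the top of a script.
## Here, NMdata functions are asked to return data.table objects
## instead of the default data.frame.
NMdataConf(as.fun = "data.table")
```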
In brevity, the most important steps are:

- Based on the Nonmem list file (.lst), identify input and output table files
- Read and combine the output table files
- If wanted, read input data and restore variables that were not output from the Nonmem model
- If wanted, also restore rows from input data that were disregarded in Nonmem (e.g. observations or subjects that are not part of the analysis)

An additional complication is the potential renaming of input data column names in the Nonmem $INPUT section. NMscanData by default (but optionally) follows the column names as read by Nonmem. This way of reading the output and input data is fully compatible with most of the other great R packages for reading data from Nonmem. In most cases, the steps above are not too hard to do by hand, but with the large degree of flexibility Nonmem offers, the code will likely have to be adjusted between models.
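A minimal sketch of letting NMscanData carry out these steps in one call. The path is hypothetical, and the argument names `use.input` and `recover.rows` are assumptions based on the features described above; check `?NMscanData` for the exact interface in your installed version.

```r
library(NMdata)

## Read and combine output and input data for one model, based on the
## Nonmem list file. Path and argument names are assumptions.
res <- NMscanData("path/to/model.lst",
                  use.input    = TRUE,  ## restore input columns not in output tables
                  recover.rows = TRUE)  ## also restore rows disregarded by Nonmem
```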

After scanning the Nonmem list file and/or control stream for file and column names, the data files are read and combined.
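If you prefer to inspect the pieces separately, the scanning and reading steps are also available individually. The sketch below assumes the helper functions `NMscanTables` (output tables listed in the control stream) and `NMscanInput` (input data as Nonmem reads it, with $INPUT naming applied) and a hypothetical path; verify names and arguments against the NMdata documentation.

```r
library(NMdata)

## Output tables found via the list file/control stream (assumed helper).
tabs <- NMscanTables("path/to/model.lst")

## Input data as Nonmem reads it, with $INPUT column names applied
## (assumed helper).
inp <- NMscanInput("path/to/model.lst")
```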
This vignette focuses on how to use NMdata to automate what needs to be trivial: get one dataset out of a Nonmem run, combining all output tables and including additional columns and rows from the input data. In particular, it covers:

- Using NMscanData to read and combine all output and input data based only on (the path to) the Nonmem list file, and understanding how NMscanData prioritizes output and input data in case of redundancy (a basic call is sketched after this list)
- Configuring NMdata to return the data class of your preference (say data.table or tbl) instead of data.frame, which is the default
- Switching between combining output and input data by mimicking the Nonmem data filters (IGNORE/ACCEPT) and merging by a row identifier
- Using automatically generated meta data to look up information on input and output tables, how they were combined, and the results of checks performed by NMscanData
- Combining such data sets for multiple models
- Including input data rows that were not processed by Nonmem (ACCEPT and IGNORE)
- If available, using an rds file to represent the input data in order to preserve all data properties (e.g. factor levels) from data set preparation
- After having checked the rare exceptions, feeling confident that NMscanData should work on all your Nonmem models
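As a starting point, the whole combined data set is obtained from just the path to the list file, and the attached meta data can be inspected afterwards. A minimal sketch; the path is hypothetical, and `NMinfo` as the accessor for the meta data is an assumption to verify against the package documentation.

```r
library(NMdata)

## One call, based only on the path to the Nonmem list file (hypothetical path).
res1 <- NMscanData("path/to/model.lst")

## Look up the automatically generated meta data: input/output tables
## found, how they were combined, and results of the checks performed.
NMinfo(res1)
```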
