Data quality checking is currently spread across a number of different features in Displayr, each of which has to be run individually depending on the type of check required. It would be good to have a central location for data quality checks that brings together all of the "standard" scripts and features and automates data quality as much as possible.

Consider a single feature with an object inspector that lets the user specify which variable(s) to use for speeding checks, which questions to include in flatlining or patterning checks, which tests to run over open-ended data to identify potential bot responses, and how to identify duplicate cases or inconsistent data, all with user-specified tolerances. The feature should produce a filter variable for each test, plus an additional filter variable that identifies cases exceeding a user-specified number of tolerable "errors" (sketched below).

Finally, include a function that automatically deletes from the file any case matching that "final" filter and updates accordingly when data is added to the source file (or when the settings change).
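To make the per-test and combined filters concrete, here is a minimal sketch of the flagging logic. It is written in Python/pandas purely for illustration rather than against Displayr's own scripting environment, and the column name duration_seconds, the grid_cols parameter, and the tolerance defaults are all hypothetical placeholders.

```python
import pandas as pd

def run_quality_checks(df, grid_cols, min_duration=120, max_errors=1):
    """Return one boolean filter column per check plus a combined flag."""
    flags = pd.DataFrame(index=df.index)

    # Speeding check: flag cases that finished faster than the tolerance.
    flags["speeder"] = df["duration_seconds"] < min_duration

    # Flatlining check: flag cases that gave one identical answer across
    # an entire grid of questions.
    flags["flatliner"] = df[grid_cols].nunique(axis=1) == 1

    # Duplicate check: flag exact repeats of an earlier response pattern.
    flags["duplicate"] = df.duplicated(keep="first")

    # Final filter: flag cases whose total number of failed checks
    # exceeds the user-specified tolerance.
    flags["exclude"] = flags.sum(axis=1) > max_errors
    return flags
```

Each column of the returned frame corresponds to one of the proposed filter variables, and "exclude" plays the role of the final, tolerance-based filter.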
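The automatic-deletion step could then be a thin wrapper that recomputes the flags and drops the excluded cases, so re-running it after the source file gains data, or after a tolerance changes, keeps the cleaned file in sync. The function name apply_final_filter is likewise hypothetical.

```python
def apply_final_filter(df, grid_cols, **tolerances):
    # Recompute all flags against the current data and settings, then
    # drop every case caught by the combined "exclude" filter.
    flags = run_quality_checks(df, grid_cols, **tolerances)
    return df.loc[~flags["exclude"]]

# Example: re-run after new cases arrive or a tolerance changes.
# cleaned = apply_final_filter(df, grid_cols=["q1_a", "q1_b"], max_errors=0)
```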