Health & Medical Question

The attached Central Monitoring Plan and Central Monitoring Analytics Excel files are both examples of what central monitoring plans and outputs would look like. These would go to the CRA and the CRA lead for that study; they are the "givens" produced by the central monitoring effort. They are listed here to help you see what an actual central monitoring plan looks like. The second Excel sheet is an example of an ongoing "output" from the central monitoring team, giving you a view of the types of data that would come directly to you as a CRA or CRA lead. Are these elements useful from the perspective of the study team?

These elements help you to craft your position paper “to the committee” for this assignment on the benefits of central monitoring.


Paper Scenario

You are a pharmaceutical project manager associated with a compound that is planned to have 4-5 co-occurring/overlapping trials. The success of these trials is extremely important, and you are on the executive committee whose goal is to oversee and verify that these trials are set up with a focus on quality. Because of the tight timelines to get the compound ready for FDA submission, some of the required trials will have to overlap (although minimally). As such, the monitoring of the trials is critical to the success of the compound overall. You propose instituting central monitoring (off-site monitoring that will feed its findings to the on-site monitoring team). You are proposing monthly calls with the central monitor so you can see the central monitoring team's analytical information (negative deviational trend information with pre-set parameters such as data entry, protocol deviations, re-querying rate, and SAE/AE rates per subject). Your job at this time is to propose this central monitoring (CM) approach to the other executive committee members.

Background
Sponsors need infrastructure and processes to protect the safety of research subjects and
to ensure trial data integrity. In 2011, both the Food and Drug Administration (FDA) and the
European Medicines Agency (EMA) released draft guidance on the conduct of risk-based
monitoring to assist sponsors in better meeting their regulatory obligations.
While these guidelines provided the regulatory perspective and rationale for risk-based
monitoring strategies, they did not mandate specific methodologies for implementation.
Sponsors must implement risk-based monitoring solutions in a way that meets these
requirements, works in practice, and is documented. As we are learning in this course,
traditionally sponsors manage this aspect of the trials with different software systems and
paper processes to oversee monitoring/reporting, safety and pharmacovigilance reporting,
overall document tracking, Trial Master File maintenance, and electronic data capture and
query management. Information is tracked within these separate systems and made available
to users through disconnected, preprogrammed status reports. There is often a heavy reliance
on manual tracking to analyze disparate data.
Central monitoring and analytics software and expertise are starting to emerge, especially
through the TransCelerate "model". This new platform appears to be accelerating the gathering
and understanding of clinical trial data. Sponsors and their global project teams need a
comprehensive and compliant solution, one that allows trial oversight through real-time,
proactive risk assessment. Key here is the idea that the "availability of real-time, continuously
analyzed data and configured workflows greatly reduces, or even eliminates,
the potential for individual bias in issue management and decision-making."
A disproportionately high level of monitoring oversight can be expended on relatively low-risk
situations. The FDA believes that targeted risk-based approaches that
focus on the most critical data elements will result in more effective monitoring and
help to overcome many of the limitations of on-site monitoring.
Draft Guidance on Risk-based Monitoring
The European Medicines Agency (EMA)
The EMA suggests that sponsors take a quantitative approach to the issue and
assign numeric values to specific risks identified in the protocol (both high and low
risks, with a range of "acceptable tolerance"; see the analytics for more details). When
acceptable tolerance limits are exceeded, the escalation previously detailed during study
planning is triggered (for example, depending on the nature of the issue, this could trigger a
phone call from the on-site monitor or an additional visit by the on-site monitor). However, if a
deviation falls within the set tolerance range, then it may be considered an "expected
deviation" per the monitoring plan for the protocol.
The EMA states that tolerances/range limits should be defined early and clearly documented
in a monitoring plan. For those variables that are important to the trial objectives,
the plan could include more emphasis on central monitoring, quality assurance and
targeted SDV. The EMA guidance exists within the framework of the Clinical Trials
Directive and accommodates a range of risk-adapted approaches that will simplify
clinical trial processes.
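To make the tolerance-range idea concrete, here is a minimal sketch in R; the metric, the limits, and the escalation labels are invented for illustration and are not taken from the EMA reflection paper.

# Hypothetical pre-set tolerance for a site-level metric, e.g., protocol
# deviations per enrolled subject (the values are placeholders).
upper_tolerance <- 0.5

classify_site <- function(deviation_rate) {
  if (deviation_rate <= upper_tolerance) {
    "within tolerance - expected deviation, document per monitoring plan"
  } else if (deviation_rate <= 2 * upper_tolerance) {
    "escalation - phone call from the on-site monitor"
  } else {
    "escalation - additional on-site monitoring visit"
  }
}

site_rates <- c(site_01 = 0.2, site_02 = 0.7, site_03 = 1.4)  # example data
sapply(site_rates, classify_site)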
Food and Drug Administration (FDA) Perspective
The FDA draft guidance shares many of the central tenets of the EMA's reflection
paper, including the requirement for sponsors to use a variety of approaches to fulfill their
monitoring responsibilities; to oversee investigator conduct and the progress of investigational
drug and device exemption studies; to conduct a risk assessment that identifies and evaluates
risks critical to study data and processes; and to design a monitoring plan tailored to address the
important and likely risks identified during the risk assessment (including remote, targeted, and
reduced SDV). The guidance highlights the importance of documenting the monitoring plan
after assessing the project risks and needs. It also recommends that sponsors analyze
ongoing data to continuously assess and adjust the monitoring strategy. This is
a vastly different approach from the traditional method of prospectively planning
monitoring visits at regular intervals, regardless of therapeutic area, trial phase
or trial complexity.
Both the FDA and EMA encourage sponsors to adopt strategies that reflect a
risk-based monitoring approach using a combination of monitoring strategies and
activities. The approach should include a greater reliance on centralized monitoring,
a sharp focus on critical study parameters (such as those specific to the safety and
protection of human subjects) and a plan to address data integrity risks.
Moving forward
Industry sponsors must proactively plan risk-based monitoring, following guidance from
the regulatory agencies and incorporating lessons learned from traditional methods of monitoring.
Ultimately, they must consider how they will use this approach and incorporate strategies that can
actually oversee all of the data and systems used in the study (this is the challenge).
"Reflection Paper on Risk-Based Quality Management in Clinical Trials," European Medicines Agency, August 2011.
"Guidance for Industry Oversight of Clinical Investigations: A Risk-Based Approach to Monitoring (Draft Guidance)," U.S. Food and Drug Administration, August 2011, http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM269919.pdf.
"Risk-Adapted Approaches to the Management of Clinical Trials of Investigational Medicinal Products," Medicines and Healthcare Products Regulatory Agency, October 10, 2011.
"Monitoring the Quality of Conduct of Clinical Trials: A Survey of Current Practices," Clinical Trials, Vol. 8, No. 3, pp. 342-349, June 2011.
Central Monitoring Plan
1.1 Introduction
The plan describes the tasks performed by the Central Monitor and the approximate frequency with which they are performed. In addition, this guideline defines the minimum criteria for central monitoring and does not replace an understanding of, or adherence to, the requirements contained in the approved protocol and/or applicable procedures.
1.2 Roles and Responsibilities
This plan establishes the procedures that will be followed by the Central Monitor for the XXX study. The Central Monitor is responsible for
review of data throughout the study to assess site performance and potential risk to the study.
1.2.1 Central Monitor (CM)
The CM will be responsible for consultation in the development and review of analytics used for the conduct of central monitoring. The CM will
hold routine meetings with the CTM and will be responsible for communication of issues to site monitoring representatives. The CM will enter
issues identified through the central monitoring into an issues tracking system. Site monitors will be notified of open issues and will take
appropriate action to resolve these issues. These actions will be documented in the issues tracking system. The central monitor will remain
blinded until unblinding is approved by appropriate study team personnel.
2. Central Monitoring Tasks
This section includes the tasks and reviews to be completed by the responsible party (as noted in the table below) to monitor the progress of the
trial. As a result of completing these tasks, areas requiring further investigation may be identified. Any issues detected through the investigations
should be communicated to the study team.
Each central monitoring task is recorded with the following fields: Issue ID; Category; Sub-category; Requirement; Risk Indicator(s); Threshold; Description of Task; Approximate Frequency.

Issue ID: IP
Category: Investigational Product
Sub-category: Investigational Product (IP)
Requirement: Confirm subject received IP.
Risk Indicator(s): Subject(s) did not receive study treatment.
Threshold: 1 or more subject(s) with IP eCRF data inconsistent with IP IWRS data (e.g., not recorded in the eCRF, or recorded in the eCRF but not consistent with IP data).
Description of Task: Review IP reports to confirm that all patients assigned to a treatment are accounted for in eDC data and that patients receiving treatment per eDC are listed on IP reports. CRF and IP data will be used to assess this requirement.
Approximate Frequency: Monthly

Issue ID: Issue by Site
Category: Issues Management
Sub-category: Issues and Deviation Management
Requirement: Review/control issues for the study based on the predefined risk indicators and thresholds. Follow up with appropriate action as per the issues management SOP.
Risk Indicator(s): Late or lack of sufficient resolution of issues at a site, or an excessive number of issues at a given site (significant or non-significant) in comparison to other sites.
Threshold: A site with 100% more unresolved issues than the median number of unresolved issues at all sites.
Description of Task: Identify sites with 100% more unresolved issues than the median number of unresolved issues at all sites. (Analytic Name: Issues List)
Approximate Frequency: Monthly

Issue ID: Issue & Dev Mgmt
Category: Issues Management
Sub-category: Issues and Deviation Management
Requirement: Review/control issues for the study based on the predefined risk indicators and thresholds. Follow up with appropriate action as per the issues management SOP.
Risk Indicator(s): Late or lack of sufficient resolution of issues at a site, or an excessive number of issues at a given site (significant or non-significant) in comparison to other sites.
Threshold: A site with 1 or more issues that have remained unresolved for more than 60 days.
Description of Task: Identify sites with non-significant issues unresolved for more than 60 days. (Analytic Name: Issues List)
Approximate Frequency: Monthly

Issue ID: SAE
Category: Safety
Sub-category: Serious Adverse Events (SAE)
Requirement: Perform data review of SAEs leading to discontinuation of treatment or discontinuation from the study.
Risk Indicator(s): Increased number of subjects with SAEs leading to discontinuation at a site.
Threshold: 2 or more subjects at a site who suspended or discontinued study drug due to an SAE.
Description of Task: Identify sites with 2 or more subjects who suspended or discontinued study drug due to an SAE.
Approximate Frequency: Monthly

Issue ID: Nonserious AEs
Category: Safety
Sub-category: Non-Serious Adverse Events (AE)
Requirement: Perform data review of AEs.
Risk Indicator(s): Inaccurate or under/over reporting of AEs.
Threshold: Identification of a site where the average number of AEs per subject is more than 2 SD higher than or more than 2 SD lower than the average number of AEs per subject across all sites.
Description of Task: Identify sites where the average number of AEs per subject per site is greater than or less than 2 standard deviations (SD) from the average number of AEs per subject per site across all sites. CRF data will be used to assess this requirement.
Approximate Frequency: Monthly

Issue ID: Av Query Response Time
Category: Data Quality
Sub-category: Central Data Review
Requirement: Monitor CRF completion.
Risk Indicator(s): Persistent late entry of data into the eDC tool at a site. Failure of an investigator to sign CRFs in a timely manner.
Threshold: Identification of a subject for whom the average time from visit date to entry of data into eDC for all visits exceeds 10 days.
Description of Task: Identify subjects for whom the average time from visit date to entry of data into eDC for all visits exceeds 10 days.
Approximate Frequency: Monthly

Issue ID: Overdue data entry by Site
Category: Data Quality
Sub-category: Central Data Review
Requirement: Monitor CRF completion.
Risk Indicator(s): Persistent late entry of data into the eDC tool at a site. Failure of an investigator to sign CRFs in a timely manner.
Threshold: Identification of sites where subjects have discontinued or completed the study and the CRFs have not been signed off within 30 days.
Description of Task: Identify sites where subjects have discontinued or completed the study and the CRFs have not been signed off within 30 days. (Analytic Name: CRF sign-off)
Approximate Frequency: Monthly

Issue ID: Overdue queries by data point & site
Category: Data Quality
Sub-category: Central Data Review
Requirement: Monitor CRF query management.
Risk Indicator(s): Late or inadequate resolution of queries.
Threshold: Identification of a site with more than 5 overdue queries. Overdue is defined as more than 30 days elapsed from initial query without resolution.
Description of Task: Identify sites with more than 5 overdue queries (more than 30 days elapsed from initial query without resolution).
Approximate Frequency: Monthly

Issue ID: Overdue queries by data point & site
Category: Data Quality
Sub-category: Central Data Review
Requirement: Monitor CRF query management.
Risk Indicator(s): Late or inadequate resolution of queries.
Threshold: Identification of a site where the average number of queries per completed field is more than 2 SD greater than the average number of queries per completed field across all sites.
Description of Task: Identify sites where the average number of queries per completed field is more than 2 SD greater than the average across all sites.
Approximate Frequency: Monthly

Issue ID: Re-query Rate
Category: Data Quality
Sub-category: Central Data Review
Requirement: Monitor CRF query management.
Risk Indicator(s): Late or inadequate resolution of queries.
Threshold: Identification of a site whose average re-query rate per subject is more than 2 SD greater than the average re-query rate across all sites.
Description of Task: Identify sites whose average re-query rate per subject is more than 2 SD greater than the average re-query rate across all sites.
Approximate Frequency: Monthly

Issue ID: Queries per Subject Visit
Category: Data Quality
Sub-category: Central Data Review
Requirement: Monitor CRF query management.
Risk Indicator(s): Late or inadequate resolution of queries.
Threshold: Identification of a site whose average number of queries per subject visit is more than 2 SD higher than the average number of queries per subject visit across all sites.
Description of Task: Identify sites whose average number of queries per subject visit is more than 2 SD higher than the average across all sites.
Approximate Frequency: Monthly

Issue ID: Overdue Queries by site
Category: Data Quality
Sub-category: Central Data Review
Requirement: Monitor CRF query management.
Risk Indicator(s): Late or inadequate resolution of queries.
Threshold: Identification of a site whose overdue queries (as a percent of all queries at that site) are more than 2 SD higher than the overdue query percentage across all sites.
Description of Task: Identify sites whose overdue queries (as a percent of all queries at that site) are more than 2 SD higher than the overdue query percentage across all sites.
Approximate Frequency: Monthly

Issue ID: Not visualized in analytics
Category: Study Management
Sub-category: Subject Recruitment
Requirement: Evaluate subject recruitment by site, including subjects enrolled and screen failures.
Risk Indicator(s): An increase or decrease in the rate of subjects failing screening at a site, or deviation from the planned enrollment rate.
Threshold: Identification of a site whose ratio of screen failures to enrolled subjects is greater than 2 SD from the ratio of screen failures to enrolled subjects across all sites.
Description of Task: Identify sites whose ratio of screen failures to enrolled subjects is greater than 2 SD from the ratio across all sites.
Approximate Frequency: Monthly

Issue ID: Not visualized in analytics
Category: Study Management
Sub-category: Subject Recruitment
Requirement: Evaluate subject recruitment by site, including subjects enrolled and screen failures.
Risk Indicator(s): An increase or decrease in the rate of subjects failing screening at a site, or deviation from the planned enrollment rate.
Threshold: Identification of a site whose average number of enrolled subjects per month is less than 50% of the expected average enrolled subjects per month (expected enrollment rate is 1 patient/site/month).
Description of Task: Identify sites whose average number of enrolled subjects per month is less than 50% of the expected average (expected enrollment rate is 1 patient/site/month).
Approximate Frequency: Monthly

Issue ID: Not visualized in analytics
Category: Study Management
Sub-category: Subject Recruitment
Requirement: Evaluate subject recruitment by site, including subjects enrolled and screen failures.
Risk Indicator(s): An increase or decrease in the rate of subjects failing screening at a site, or deviation from the planned enrollment rate.
Threshold: Identification of sites with a time since the last subject enrolled greater than 60 days while enrollment is still ongoing.
Description of Task: Identify sites with a time since the last subject enrolled greater than 60 days while enrollment is still ongoing.
Approximate Frequency: Monthly

Issue ID: Not visualized in analytics
Category: Study Management
Sub-category: Subject Discontinuation
Requirement: Evaluate subject disposition by site, including subject continuation, discontinuations from the study, discontinuations from study treatment, drop-outs, lost to follow-up, and completions.
Risk Indicator(s): Increase in early discontinuations of study treatment at a site, or an imbalance in the reasons for discontinuation at a site.
Threshold: Identification of a site whose rate of early discontinuations (number of subjects discontinuing study treatment prior to completion of the expected course of treatment divided by the total number of subjects enrolled) is more than 2 SD higher than the rate of early discontinuations across all sites.
Description of Task: Identify sites whose rate of early discontinuations is more than 2 SD higher than the rate of early discontinuations across all sites.
Approximate Frequency: Monthly

Issue ID: Not visualized in analytics
Category: Study Management
Sub-category: Subject Discontinuation
Requirement: Evaluate subject disposition by site, including subject continuation, discontinuations from the study, discontinuations from study treatment, drop-outs, lost to follow-up, and completions.
Risk Indicator(s): Increase in early discontinuations of study treatment at a site, or an imbalance in the reasons for discontinuation at a site.
Threshold: Identification of a site where the proportion of subjects discontinuing for a given reason is more than 2 SD from the proportion discontinuing for that reason across all sites.
Description of Task: Review all reasons for discontinuation to identify any trends in discontinuations (review discontinuations due to physician decision or subject decision as compared to other sites). CRF data will be used to assess this requirement.
Approximate Frequency: Monthly
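The thresholds above reduce to simple calculations once subject- and site-level data are pulled from the EDC and issue-tracking systems. Below is a minimal R sketch of two of them, the non-serious AE indicator and the unresolved-issues indicator; the data frames and column names are assumptions for illustration, not part of the plan.

# Hypothetical subject-level AE counts: one row per subject with a site ID.
ae <- data.frame(site    = c("S01", "S01", "S02", "S02", "S03", "S03"),
                 subject = 1:6,
                 n_ae    = c(2, 3, 1, 2, 9, 11))

# Non-serious AE indicator: flag sites whose mean AEs per subject is more
# than 2 SD above or below the mean of the per-site means across all sites.
site_means <- tapply(ae$n_ae, ae$site, mean)
flag_ae <- abs(site_means - mean(site_means)) > 2 * sd(site_means)

# Issues indicator: flag sites with 100% more (i.e., at least double the)
# unresolved issues than the median across all sites.
issues <- data.frame(site = c("S01", "S02", "S03"), unresolved = c(2, 3, 8))
flag_issues <- issues$unresolved > 2 * median(issues$unresolved)

data.frame(site = issues$site, flag_issues, flag_ae = flag_ae[issues$site])

In practice these reports would be generated monthly and any flagged site would be entered into the issues tracking system for follow-up by the site monitors.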
Sponsored Special Section by TransCelerate BioPharma: Original Research
Statistical Monitoring in Clinical Trials: Best Practices for Detecting Data Anomalies Suggestive of Fabrication or Misconduct
Therapeutic Innovation & Regulatory Science, 2016, Vol. 50(2), 144-154. © The Author(s) 2016. DOI: 10.1177/2168479016630576. tirs.sagepub.com
David Knepper, MS, MBA (1); Anne S. Lindblad, PhD (2); Gaurav Sharma, PhD (2); Gary R. Gensler, MS (2); Zorayr Manukyan, PhD (3); Abigail G. Matthews, PhD (2); and Yodit Seifu, PhD (4)
Affiliations: (1) Drug Development Operations, Allergan, Jersey City, NJ, USA; (2) The Emmes Corporation, Rockville, MD, USA; (3) Biostatistics, Pfizer Global Research and Development, Cambridge, MA, USA; (4) Statistical Science and Programming, Allergan Inc, Bridgewater, NJ, USA
Submitted 17-Nov-2015; accepted 08-Jan-2016. Corresponding Author: David Knepper, MS, MBA, Drug Development Operations, Allergan, 1900 Plaza 5, Jersey City, NJ 07311, USA. Email: David.Knepper@actavis.com
Abstract
Background: Traditional site-monitoring techniques are not optimal in finding data fabrication and other nonrandom data distributions with the greatest potential for jeopardizing the validity of study results. TransCelerate BioPharma conducted an
experiment testing the utility of statistical methods for detecting implanted fabricated data and other signals of noncompliance.
Methods: TransCelerate tested statistical monitoring on a data set from a chronic obstructive pulmonary disease (COPD) clinical
study with 178 sites and 1554 subjects. Fabricated data were selectively implanted in 7 sites and 43 subjects by expert clinicians in
COPD. The data set was partitioned to simulate studies of different sizes. Analyses of vital signs, spirometry, visit dates, and
adverse events included distributions of standard deviations, correlations, repeated values, digit preference, and outlier/inlier
detection. An interpretation team, including clinicians, statisticians, site monitoring, and data management, reviewed the results
and created an algorithm to flag sites for fabricated data. Results: The algorithm identified 11 sites (19%), 19 sites (31%), 28 sites
(16%), and 45 sites (25%) as having potentially fabricated data for studies 2A, 2, 1A, and 1, respectively. For study 2A, 3 of 7 sites
with fabricated data were detected, 5 of 7 were detected for studies 2 and 1A, and 6 of 7 for study 1. Except for study 2A, the
algorithm had good sensitivity and specificity (>70%) for identifying sites with fabricated data. Conclusions: We recommend a cross-functional, collaborative approach to statistical monitoring that can adapt to study design and data source and use a combination
of statistical screening techniques and confirmatory graphics.
Keywords
statistical monitoring, central monitoring, risk-based monitoring, fabrication, fraud, misconduct, TransCelerate
Introduction
Traditional site-monitoring techniques may find random data
errors, but they are not optimal in identifying data fabrication
and other nonrandom data distributions with greatest potential
for jeopardizing the validity of study results.1,2 One strategy
being used is statistical monitoring, or the use of various statistical methods during study conduct to detect data anomalies
suggestive of fabrication or noncompliance.1,3-5 The multicentric nature of most clinical trials offers an opportunity to
check the plausibility of data from one site against data from all
other sites.3
Statistical monitoring relies on the highly structured nature
of data because each protocol is expected to be implemented
consistently at all sites.4 Statistical checks are powerful tools
because the multivariate structure and/or time dependence of
variables are sensitive to deviations and hard to mimic. Fabricated data, even if plausible from a univariate perspective, are
likely to exhibit abnormal multivariate patterns that are detectable statistically. Statistical methods used to detect data
anomalies can be applied to all trials; however, many methods
are less reliable for small trials or large trials with small numbers of subjects per site.6
An objective of a previous TransCelerate BioPharma publication on data-quality issues7 was to provide recommendations
based on best practices for detecting data-integrity issues early
in study conduct to allow implementation of meaningful corrective and preventive actions. A review of the literature
included reports evaluating the utility of statistical methods
to detect data anomalies suggestive of data-integrity issues.
However, previous work did not sufficiently focus on ongoing
pharmaceutical industry-sponsored studies.5,6,8-10 The current
article is a continuation of the previous publication7 and incorporates findings from that work in our conclusions and
recommendations.
Building on prior research, 3-6,8-10 TransCelerate tested
statistical-monitoring methods on a biopharmaceutical industry–sponsored clinical study database under conditions
partially mimicking those found during real-world study conduct. The objective of this experiment was to detect data
anomalies suggestive of noncompliance, but it primarily
focused on intentionally fabricated data because of the potential difficulty in detecting this type of noncompliance. This
experiment used a combination of contrived and real-world
conditions and does not represent a true test in actual clinical
study conditions. The objectives of this project were as follows:
1.
2.
3.
4.
Examine the ability of statistical monitoring to detect
implanted fabricated data and other data anomalies suggestive of noncompliance.
Determine which statistical methods are the most useful
in high– and low–data volume conditions and describe
the effect of data volume on the reliability of the statistical output.
Determine what information can be drawn from statistical output and suggest optimal use of graphical and
other data-visualization techniques.
Explore the best use of an interpretation team and strategies for interpreting statistical signals with company
operational and clinical experts (Data-Science Model).
Methods and Materials

A TransCelerate member company provided a clean, deidentified clinical database from a randomized, double-blind, placebo-controlled study in chronic obstructive pulmonary disease (COPD) with a locked database.

Selection of Sites and Subjects for Implanting Fabricated Data

- Seven sites were randomly chosen from all sites with 12 or more randomized subjects (range, 12-32). The number of subjects was later reduced for studies 1A and 2A as explained below.
- Approximately 30% (range, 25%-43%) of subjects at the 7 sites were selected to receive fabricated data. For the selected data fields, all data were deleted except for the screening visit, which was provided for reference. The terms "selected sites," "selected subjects," and "selected data" will be used to describe the sites, subjects, and data fields selected for fabrication.
- Three additional sites were selected for deletion of adverse event (AE) data.

Systolic blood pressure (SBP), diastolic blood pressure (DBP), and pulse rate were selected because these vital sign evaluations were measured at every visit. Forced expiratory volume in 1 second (FEV1) and forced vital capacity (FVC) were selected because the data were collected serially as well as produced mechanistically and evaluated centrally, thus significantly reducing the chance for human error. Selected data were deleted and replaced with fabricated plausible data.

Fabrication of Data

Seven physicians actively practicing medicine were requested to create fabricated data. All physicians had experience conducting clinical studies (average of 12.8 industry-sponsored pulmonology studies as principal investigator). No physician was employed by a pharmaceutical company. The independent expert clinicians were blinded to the analysis plan. Each physician was provided with an Excel workbook containing data from all "selected subjects" plus 2 reference subjects from a selected site with all data intact. Physicians were provided demography, medical history, smoking history, concomitant medications at screening, and clinical laboratory data at screening in addition to the "selected data" fields. Physicians were asked to fill the missing "selected data" with plausible fabricated data. Approximately half of the AEs were removed from the 3 sites selected for the AE component, leaving only AEs with mild severity at these sites.

Maintaining Plausibility of the Fabricated Data

After the physicians returned the completed Excel workbooks, the fabricated data were corrected by the corresponding author (D.K.), who was not directly involved in the fraud detection method assessment. These corrections included the following:

- Pervasive repeated data copying: if numerous values were repeated within a single patient, some values were slightly adjusted.
- Decimal errors: for example, an FVC value of 38.34 was corrected to 3.834.
- For technical reasons, each spirometry data time point (FEV1 and FVC) must contain a value that is the duplicate of the highest of the remaining values. If this was not done by the physician, the second-to-lowest value for the time point was replaced with a duplicate of the highest value.

Two physicians did not complete the fabricated data for all "selected subjects." Fabricated data from the completed "selected subjects" was copied to the blank "selected subjects" after adjusting the data for the difference in screening measurements for the "selected subjects."
Implanting Fabricated Data
Prior to implantation of the fabricated data into the database, all
sites with zero subjects were removed. Actual data for the
selected sites and subjects were replaced with fabricated data.
The total amount of data did not increase or decrease; if a
measurement was missing in the original data set, it remained
missing in the final data set.
Creation of Databases for 4 Simulated Studies Differing in Data Volume

The entire data set included 2996 subjects across 178 sites in the US. Screen failure subjects were removed prior to this experiment. To test statistical monitoring in high– and low–data volume conditions, 4 simulated study databases were created by randomly removing sites and subjects from the data set. Study 2 was created by randomly removing 117 nonselected sites from study 1. Studies 1A and 2A were created by randomly removing about half of the nonselected patients from all sites in studies 1 and 2, respectively, including from the "selected sites."

Simulated studies 2A, 2, 1A, and 1 (smallest to largest):
- Study 2A (61 sites, 338 subjects, range 1-19 subjects)
- Study 2 (61 sites, 627 subjects, range 1-32 subjects)
- Study 1A (178 sites, 824 subjects, range 1-19 subjects)
- Study 1 (178 sites, 1554 subjects, range 1-32 subjects)

The simulated studies included all 7 "selected sites," but studies 2A and 1A contained a portion of the "selected subjects."

Interpretation Team

The interpretation team included data managers, clinicians, statisticians, and site monitors from TransCelerate member companies as well as members of the independent analysis center (IAC; described below). The physicians who created the fabricated data did not participate in the interpretation team. This team reviewed statistical output, made recommendations for further analyses, and developed a method to flag sites and subjects for potentially fraudulent data.

Independent Analysis Center

The Emmes Corporation (Rockville, MD) participated as the IAC. They were blinded to the methodology for preparing the databases, including the number of sites, number of subjects, and data fields selected for data fabrication. The statistical analysis plan (SAP), blank case report forms, protocol, prepared data sets, and data dictionaries were submitted to the IAC, which provided statistical output and findings to the interpretation team.

Statistical Analysis Plan

A statistical analysis plan (SAP) was developed by statisticians from participating TransCelerate member companies with subsequent input from the interpretation team. The statisticians were told of the study design but were blinded to the specific data fields selected for fabrication. The data panels identified in the SAP included vital signs (SBP, DBP, height, weight, and heart rate), spirometry variables (FEV1 and FVC), AEs, visit dates, and the date of the first dose. The purpose of the methods chosen was to assess how various analyses proposed in the literature were effective in detecting implanted fabricated data as well as data that may signal misconduct. Data anomalies were detected by comparing individual subject or site measurements against aggregated measurements from all subjects or sites. The following data-quality evaluations were specified in the SAP:

Visit date check for holidays and weekends
Any visit occurring on a Sunday or US federal holiday.

Visit timing
Distribution of visit lag (distance of actual visit date relative to the target date) on site and subject levels, and enrollment rates by site and month.

Interrelation of the assessments:
- Standard deviation (SD): The SD of measures across visits for each subject for vital signs and spirometry measurements. Studywide versus subject-specific confidence intervals (CIs) were compared assuming alpha = 0.00001 for spirometry analyses and alpha = 0.001 for vital signs analyses, to accommodate multiple testing adjustments balanced against the percentage of sites or subjects ultimately flagged. Also, a repeated-measures model was used for the SD estimates.11
- Correlation: Subject-, site-, and study-level correlations were obtained between SBP, DBP, and pulse and between FEV1, FVC, and their ratios. Comparisons for flagging purposes were made using the CI approach. The "Proc Freq" feature in SAS (SAS Institute, Cary, NC) was used for calculation of Pearson correlation and the Wald approach to calculate CIs.
- Mahalanobis distance (MD)12: The MD for each subject as compared to the studywide value was calculated separately for vital signs and spirometry measurements. A large MD for a subject would correspond to an outlier, and a small MD would correspond to an inlier.

Carryover effect/repeated values
Carryover, defined as an exact match of a value for a subject from one visit to the next, was calculated. Repeated values, defined as the number of identical values for a subject within a visit (for spirometry) or overall, were calculated.

Digit preference and rounding
For each subject, the last-digit frequency distribution was compared to other subjects at that site and across the study using either a chi-square or Fisher exact test, as appropriate, and by comparing the mean and SD of the last-digit value within a subject to studywide distributions using the CI approach.

Missing data
The rate of missing data was calculated by dividing the number of missed measures by the number of times it was expected.
AE reporting frequency
The AE rate per person-week of time on study was calculated. Subjects with no AEs but who were in the top 10% of prestudy comorbidity prevalence and sites where 80% of all AEs were of a single severity level were flagged.

[Figure 1 shows the FEV1/FVC ratio (%) across visits 1-7 for an odd site and a reference site in Study 1 and Study 2A.]
Figure 1. Site versus study value distributions of FEV1/FVC. Red = studywide mean, green = sitewide mean, gray = subjects' values over time.
Additional analyses were performed to examine potential
data inconsistencies, including subject-level values by site
showing the average study value across time, within site average values over time, and 95% CIs, and individual subject-level
values across time. All analyses were conducted in R13 (The R
Project for Statistical Computing; https://www.r-project.org/)
and SAS 9.3 and 9.4.14
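Several of the checks above reduce to a few lines of code. Since the article's analyses were run in R and SAS, here is a minimal R sketch of three of them (repeated values, last-digit preference via a chi-square test, and Mahalanobis distance); the simulated vitals data, the uniform last-digit reference, and the use of sorted output as a screen are illustrative assumptions, not the SAP's actual alpha levels or flagging rules.

set.seed(1)
# Hypothetical per-visit vital signs for several subjects (long format).
vitals <- data.frame(subject = rep(1:20, each = 7),
                     sbp   = round(rnorm(140, 130, 10)),
                     dbp   = round(rnorm(140, 80, 8)),
                     pulse = round(rnorm(140, 72, 6)))

# 1. Repeated values: count exact duplicates of SBP within each subject.
repeats <- tapply(vitals$sbp, vitals$subject, function(x) sum(duplicated(x)))

# 2. Last-digit preference: chi-square test of a subject's last digits
#    against a uniform distribution over 0-9.
last_digit_p <- tapply(vitals$sbp, vitals$subject, function(x) {
  counts <- table(factor(x %% 10, levels = 0:9))
  chisq.test(counts, p = rep(0.1, 10))$p.value
})

# 3. Mahalanobis distance of each visit's (SBP, DBP, pulse) vector from the
#    studywide centre; very small or very large values are of interest.
m  <- as.matrix(vitals[, c("sbp", "dbp", "pulse")])
md <- mahalanobis(m, colMeans(m), cov(m))

head(sort(last_digit_p))   # subjects with the strongest digit preference
head(sort(md))             # candidate "inliers"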
Results
An Algorithm for Detecting Fabricated Data
The interpretation team reviewed graphical data summaries.
For example, Figure 1 depicts the distribution of the FEV1/
FVC ratio for a site with suspected data fabrication (‘‘odd
site’’) and a ‘‘reference site’’ not suspected of having fabricated
data in studies 2A and 1.
This graphical summary quickly identified a site with an
unusual data pattern without any statistical testing even with
only 2 subjects randomized at the ‘‘odd site’’ in study 2A.
However, most sites were not so graphically apparent, and
statistical testing was required to detect statistical significance.
Given the number of sites and subjects, a more structured and
automated approach was explored.
The interpretation team created a study-specific algorithm
to flag subjects and sites with potentially fabricated data as well
as other types of noncompliance. The interpretation team
reviewed output from the 4 studies from smallest to largest,
starting with study 2A. A spreadsheet was created that summarized statistically significant results in any of the examined
domains at both site and subject levels. Sites and subjects were
classified as low-, moderate-, or high-suspicion. Through iterative rounds of data review and discussion, a team consensus
developed regarding the utility of various statistical methods
and outputs. Discussions focused on the likelihood that certain
Table 1. Site-Level Sensitivity, Specificity, and Positive/Negative Predictive Values.a

Study 2A (61 sites, 338 subjects, 7 "selected" sites): 11 sites flagged (19%); 3 "selected" sites flagged (sensitivity 43%); specificity 85%; positive (negative) predictive value 27% (92%).
Study 2 (61 sites, 627 subjects, 7 "selected" sites): 19 sites flagged (31%); 5 "selected" sites flagged (sensitivity 71%); specificity 74%; positive (negative) predictive value 26% (95%).
Study 1A (178 sites, 824 subjects, 7 "selected" sites): 28 sites flagged (16%); 5 "selected" sites flagged (sensitivity 71%); specificity 86%; positive (negative) predictive value 18% (99%).
Study 1 (178 sites, 1554 subjects, 7 "selected" sites): 45 sites flagged (25%); 6 "selected" sites flagged (sensitivity 86%); specificity 77%; positive (negative) predictive value 13% (99%).

a Sensitivity: % of sites with fabricated data flagged; specificity: % of sites without fabricated data not flagged; positive predictive value: % of flagged sites with fabricated data; negative predictive value: % of sites not flagged without fabricated data.
findings could be due to chance or would be clinically highly
unlikely. For example, in the spirometry domain, the interpretation team put heavier weight on statistically significant deviations from the expected correlation between FEV1 and FVC
and repeated values, given that a device was used for measurement and each included 4 digits (X.XXX format). More variability and repetition of values were expected in vital signs
measurements; thus, higher thresholds for flagging a site or
subject were used for vital signs. After the discussion of study
2A, the team was unblinded to the variables fabricated, allowing independent focus on fabrication versus other data anomalies suggestive of noncompliance (eg, visit date and missing
data that were not involved in fabrication). But the interpretation team remained blinded to ‘‘selected sites’’ and ‘‘selected
subjects.’’
The team reviewed parallel coordinate plots15 of SDs, carryover/repeated values, correlations, scatter plots of correlations
and SDs, and last-digit histograms and distributions. The key
statistical methods for identifying potentially fabricated data
were repeated values and the CI approach for correlations, SDs,
and last-digit preferences for both spirometry and vital signs
variables. An algorithm that scored each subject’s result in each
of these areas with a 0, 1, or 2 was developed; this algorithm
applied heavier weights to spirometry anomalies (2) than to
vital anomalies (1). A subject score was calculated by summing
the scores across domains within subject. A subject was
flagged if the subject’s score in either domain (spirometry or
vitals) exceeded a threshold value selected by the team. A site
was flagged if it had at least 4 subjects, and 25% or more of the
subject scores exceeded a site-level threshold value selected by
the team or if an individual site had at least one flagged subject,
regardless of the number of subjects at the site. This approach
allowed small sites (fewer than 4 subjects) to be flagged.
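A minimal R sketch of this kind of score-and-flag logic follows; the per-check scores, weights, and thresholds below are placeholders, since the article intentionally does not publish its exact formula.

# Hypothetical per-subject scores (0, 1, or 2) for each statistical check,
# with spirometry checks weighted more heavily than vital signs checks.
scores <- data.frame(site        = c("S01", "S01", "S01", "S01", "S02"),
                     subject     = 1:5,
                     spiro_corr  = c(2, 0, 2, 1, 0),
                     spiro_rep   = c(2, 1, 2, 0, 0),
                     vital_sd    = c(1, 0, 1, 0, 0),
                     vital_digit = c(1, 0, 0, 0, 0))

spiro_weight <- 2; vital_weight <- 1           # placeholder weights
subj_thresh  <- 4; site_prop_thresh <- 0.25    # placeholder thresholds

scores$spiro_score <- spiro_weight * (scores$spiro_corr + scores$spiro_rep)
scores$vital_score <- vital_weight * (scores$vital_sd + scores$vital_digit)
scores$flagged <- scores$spiro_score > subj_thresh | scores$vital_score > subj_thresh

# Site flag: at least 4 subjects with 25% or more of them flagged, or any
# flagged subject at a small site (fewer than 4 subjects).
site_flag <- by(scores, scores$site, function(d) {
  if (nrow(d) >= 4) mean(d$flagged) >= site_prop_thresh else any(d$flagged)
})
site_flag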
The algorithm was applied to studies 2, 1A, and 1, after
which the team was unblinded to the actual sites and subjects
with fabricated data. The algorithm diagnostics of sensitivity
and specificity at the site and subject level were calculated
using this unblinded information. Because the technique for
fabrication differed between spirometry and vital data compared to AE data, separate analyses were performed to detect
fabricated data corresponding to AEs. In addition, signals of
protocol noncompliance and other data anomalies not associated with the fabricated data were identified.
Site-Level Outcomes
The prevalence of ‘‘selected sites’’ ranged from 11% of sites to
4% (Table 1). The algorithm was applied without adjustment
for the size of the study. Results from the algorithm identified
11 sites (19%), 19 sites (31%), 28 sites (16%), and 45 sites
(25%) as having potentially fabricated data for studies 2A, 2,
1A, and 1, respectively. The proportion of ‘‘selected sites’’
detected by the algorithm increased with study size from 3/7
(43%) in study 2A to 6/7 (86%) in study 1. Except for study 2A,
the algorithm had good sensitivity and specificity (>70%) for
identifying sites with fabricated data. The negative predictive
value, or the proportion of sites not flagged that did not have
fraudulent data, exceeded 90% for all studies.
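These diagnostics follow directly from cross-tabulating the algorithm's site flags against the known "selected" sites. Below is a small R sketch with invented flag and truth vectors chosen to reproduce the Study 2A row of Table 1 (not the actual study data).

# Hypothetical: which of 61 sites the algorithm flagged, and which truly
# contained fabricated ("selected") data.
flagged  <- c(rep(TRUE, 11), rep(FALSE, 50))
selected <- c(rep(TRUE, 3), rep(FALSE, 8), rep(TRUE, 4), rep(FALSE, 46))

sensitivity <- sum(flagged & selected) / sum(selected)      # selected sites flagged
specificity <- sum(!flagged & !selected) / sum(!selected)   # clean sites not flagged
ppv <- sum(flagged & selected) / sum(flagged)               # flagged sites truly selected
npv <- sum(!flagged & !selected) / sum(!flagged)            # unflagged sites truly clean
round(c(sensitivity, specificity, ppv, npv), 2)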
Other individual statistical approaches executed as part of
the analysis plan did not demonstrate improved performance in
terms of sensitivity or specificity. When last-digit preference
across all variables (spirometry and vital signs data combined)
was tested, we found high false-positive rates. The MD of
spirometry data identified at most 3 of the ‘‘selected’’ sites.
The mixed-model approach to estimating SDs did not improve
identification of ‘‘selected’’ sites. Carryover proved less useful
than exact repeated values, with only one of the ‘‘selected’’
sites having a very high rate of carryover. Selecting a specific
preference for BP data ending in 0 or 5 also did not yield
additional value. In study 2A, although 4 ‘‘selected’’ sites were
flagged by this criterion, only 3 subjects at those sites were
‘‘selected.’’
Subject-Level Outcomes
Subject scores from the algorithm resulted in approximately
one-third of subjects being identified as potentially having fabricated data across all 4 studies with sensitivity near 80% and
specificity just over 70% (Table 2). Negative predictive value
was 97% across all studies.
An evaluation of the components of the algorithm showed
that correlation and repeated values should have a higher
weight in the spirometry domain compared with vital signs.
Figure 2 shows a scatter plot of individual subject-level
Table 2. Subject-Level Sensitivity, Specificity, and Positive/Negative Predictive Values.a

Study 2A (338 subjects, 33 "selected" subjects): 112 subjects flagged (33%); 27 "selected" subjects flagged (sensitivity 81%); specificity 72%; positive (negative) predictive value 23% (97%).
Study 2 (627 subjects, 33 "selected" subjects): 202 subjects flagged (32%); 26 "selected" subjects flagged (sensitivity 79%); specificity 71%; positive (negative) predictive value 17% (98%).
Study 1A (824 subjects, 43 "selected" subjects): 248 subjects flagged (30%); 35 "selected" subjects flagged (sensitivity 81%); specificity 72%; positive (negative) predictive value 10% (99%).
Study 1 (1554 subjects, 43 "selected" subjects): 474 subjects flagged (31%); 34 "selected" subjects flagged (sensitivity 79%); specificity 71%; positive (negative) predictive value 7% (99%).

a Sensitivity: % of subjects with fabricated data flagged; specificity: % of subjects without fabricated data not flagged; positive predictive value: % of flagged subjects with fabricated data; negative predictive value: % of subjects not flagged without fabricated data.
correlations between both FEV1 and FVC and their calculated
ratio (average across all values across 7 visits) for 4 types of
sites: true-positive, false-positive, true-negative, and false-negative (see Figure 2 footnotes).
Figures 3 and 4 provide scatter plots of subject-level SDs for
vital signs and spirometry data, respectively. These figures
illustrate the potential difficulty of pattern recognition without
statistical testing.
Without vital signs in the algorithm, approximately 40% of
subjects in each study would be flagged, sensitivity would
increase to 80%, but specificity would drop to about 60%.
Within the statistical methods chosen for vital signs score (SD,
correlation, digit preference, and repeat values), the SD had the
best performance in terms of sensitivity, specificity, and positive and negative predictive values. Site-level sensitivity using
this measure alone ranged from 41% to 53%, and specificity
was about 90%. Only 9% to 14% of ‘‘selected’’ subjects would
be flagged (data not shown).
Adverse Event Anomaly Detection
Three approaches to AE detection were taken, and the approach
that flagged sites based on a greater than 80% severity type
prevalence successfully identified all ‘‘selected’’ sites within
each study. Flagging based on lower-than-expected event rates
identified a number of sites of concern although it was not
successful for the fabrication approach selected for this
experiment.
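A minimal R sketch of the severity-prevalence check and a crude AE rate per person-week follows, using invented subjects and ae_log data frames whose column names are assumptions.

# Hypothetical data: subject-level follow-up and an AE event log.
subjects <- data.frame(site = c("S01", "S01", "S02", "S02"),
                       subject = 1:4,
                       weeks_on_study = c(10, 12, 20, 22))
ae_log <- data.frame(subject  = c(1, 1, 2, 3, 4, 4),
                     severity = c("mild", "mild", "mild", "mild", "severe", "moderate"))
ae_log$site <- subjects$site[match(ae_log$subject, subjects$subject)]

# 1. Flag sites where more than 80% of all AEs share a single severity level.
sev_prop <- tapply(ae_log$severity, ae_log$site, function(x) max(prop.table(table(x))))
flag_severity <- sev_prop > 0.8

# 2. AE rate per person-week of follow-up at each site (a crude version; the
#    article also accounts for severity of clinical presentation).
ae_count <- tapply(ae_log$site, ae_log$site, length)
fu_weeks <- tapply(subjects$weeks_on_study, subjects$site, sum)
ae_per_week <- as.vector(ae_count) / as.vector(fu_weeks)

data.frame(site = names(sev_prop),
           flag_severity = as.vector(flag_severity),
           ae_per_week = round(ae_per_week, 3))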
Other Analyses
Evaluation of visit lag distributions, holiday and Sunday visits,
and missed visits identified several sites with unusual visit
timing or missed visits; however, none of the identified data
had been fabricated.
Discussion
Errors can have many etiologies. Data errors from causes such
as equipment miscalibration produce clear shifts in the mean
and are easy to detect. Other easily detectable patterns include
protocol misinterpretation by whole sites or countries because
these errors produce mean shifts in a whole group of data
compared to the aggregate. Conversely, intentional patient-level data fabrication can be much more difficult to detect
because investigators use medical and scientific knowledge
to construct clinically plausible data. This experiment focused
on detecting fabricated data intentionally created by knowledgeable clinicians specifically asked to act in a manner likely
to evade detection.
O’Kelly8 tested various statistical methods in subjective
data (Montgomery-Åsberg Depression Rating Scale [MADRS]
scores) and noted challenges with performing evaluations of
means and correlations. Subjective measures may not be good
candidates for statistical monitoring. Pogue et al9 and Wu and
Carlsson10 handpicked sites from a large study so that each site
had approximately 20 subjects, a condition not likely to exist
during study conduct. Kirkwood et al6 intentionally planted
‘‘easy to detect’’ fabricated data and found that some statistical
assessments were not applicable for sites with fewer than 10
subjects. In the present experiment, a COPD study was chosen
because of the objectivity of measurements (actual spirometry
data were instrument generated and vital signs data were collected by various methods).
This project tested statistical monitoring methods on a queried and cleaned pharmaceutical industry–sponsored clinical
study database with implanted fabricated data in high– and
low–data volume conditions mimicking conditions found during study conduct. The project evaluated a battery of statistical
methods, report types, and graphical displays to identify strategies most useful to flag sites and subjects as anomalous. A
SAP included site-level exploration of reported data with
respect to visit lag, Sunday or holiday visits, enrollment patterns, digit preference, repeat data values, carryover of values,
and exploration of data distributions, variance, and correlations
of key variables over time by subject compared to site-level and
study-level distributions. A data-science model with an interdisciplinary interpretation team was used to evaluate statistical
results. Review of the output for the smallest study by the
interpretation team resulted in an algorithm applied to the 3
progressively larger studies. No single statistical approach was
adequate, and the algorithm using a combination of statistical
approaches provided better performance than any individual
statistical test. The exact formula for this algorithm is not
[Figure 2: scatter plots of subject-level correlations (FEV1 vs ratio on the x-axis, FVC vs ratio on the y-axis) for four example sites.]
Figure 2. Subject-level correlations for FEV1, FVC, and their ratio for 4 sites. Circles represent individual subject values; numbers are the de-identified subject number. (A) True-positive = flagged "selected site." (B) False-positive = flagged site that was not a "selected site." (C) False-negative = "selected site" that was not flagged. (D) True-negative = nonflagged site that was not a "selected site."
provided because it is the process we describe that we suggest
could be replicated by others—regardless of disease or location
of study sites—rather than the unique formula developed for
this study.
Identification of data issues should occur early during study
conduct to allow for corrective action. Our results suggest utility of centrally reviewing queried and cleaned data even during
the early phase of a study. Distributions (Figure 1), even early
in study conduct, could show issues that can be corrected
before more subjects are enrolled. Review of outliers in values,
SDs, or correlations may identify queries or range checks that
need to be systematically implemented.
A variety of methods have been proposed to explore accumulating data in clinical studies, with general agreement
regarding the potential benefit of SD analyses, MD measures
to detect inliers and outliers, carryover or repeated measures,
and correlations.3,5 Inliers may be more indicative of fraud,
whereas outliers may indicate sloppiness in data collection.
The current experiment evaluated each of these approaches and
found that a multidisciplinary interpretation team reviewing
and discussing the relevance of the statistical findings to the
clinical condition and the data collection methods was critical to
the formulation of the algorithm. The interpretation team chose
to assign more weight to statistically significant subject deviations in the most objective data (spirometry data) than in vital
signs data, which had more variability in collection techniques.
The final result had acceptable sensitivity and false-negative
rates in all studies except for the smallest study (2A). There was
one ‘‘selected site’’ that the algorithm failed to find in any of the
study conditions. For this site, the implanted data was only partially fabricated by the physician collaborator, who did not complete the selected cases; consequently, the corresponding author
(D.K.) fabricated data for the remaining selected subjects. It is
possible that the author’s intimate knowledge of the SAP may
have influenced the data fabrication; however, this does not fully
explain the failure to detect this site.
[Figure 3: scatter plots of subject-level blood pressure standard deviations (SD for diastolic BP in mmHg on the x-axis, SD for systolic BP in mmHg on the y-axis) for four example sites.]
Figure 3. Blood pressure standard deviations for 4 sites. Circles represent individual subject values; numbers are the de-identified subject number. (A) True-positive = flagged "selected site." (B) False-positive = flagged site that was not a "selected site." (C) False-negative = "selected site" that was not flagged. (D) True-negative = nonflagged site that was not a "selected site."
Successful detection of AE manipulation was dependent on
the manipulation pattern. The fabrication approach in this
experiment is not an adequate representation of improper
reporting in real-world settings. A robust statistical monitoring
program should include evaluation of the AE reporting rate at
each site compared to all sites, while accounting for length of
patient follow-up and severity of clinical presentation.
The effectiveness of statistical monitoring has been questioned under low–data volume conditions.4,6 Small sample
sizes within sites can limit the likelihood of statistical methods
to flag a site and may hamper parametric statistical testing. The
final algorithm used for this experiment scored each subject
against studywide values and subsequently scored the site
based on the subject scores. This approach may provide protection from missing a single subject at a small site; however,
flagging a small site because of a single subject may increase
false-positive rates. An adaptive algorithm that uses different
cut-offs for subject and site flagging depending on the stage
and size of the study may be prudent.
In cases where the site personnel performing a procedure
(examiner) is identified in a database, cross-examiner data
comparisons could be performed to look for patterns. If only
subjects examined by one individual are flagged, then further
investigation is warranted. Although fabricated data at large
sites is likely to have a more significant impact on study result
reliability, we recommend all sites be reviewed. For very large
studies, a staged approach that begins with automated screening to flag sites based on statistical testing followed by more
detailed investigation of flagged sites may be more practical
than initial review of output from all sites.
The experimental design did not truly replicate real-world
conditions, so generalizability is limited, and the diminishing
prevalence of ‘‘selected subjects’’ across the 4 study conditions
makes comparisons of the positive and negative predictive values across studies problematic. Although this data set had been
cleaned and locked, it is possible that some of the sites listed as
false-positives are in fact true-positives due to calibration
issues in spirometers, training, or other factors. Some of the
[Figure 4: scatter plots of subject-level spirometry standard deviations (SD for FEV1 in liters on the x-axis, SD for FVC in liters on the y-axis) for four example sites.]
Figure 4. Spirometry standard deviations for 4 sites. Circles represent individual subject values; numbers are the de-identified subject number. (A) True-positive = flagged "selected site." (B) False-positive = flagged site that was not a "selected site." (C) False-negative = "selected site" that was not flagged. (D) True-negative = nonflagged site that was not a "selected site."
outliers found by MD analysis could have been readily discovered via routine data range checks. Three of our 7 independent
expert clinicians made errors in the fabricated data that were
obvious from a cursory review, and 2 failed to complete all
subjects and left many fields blank. The choice of weights as
part of a flagging algorithm, adjustments for multiple comparisons, and the use of cut points (eg, percentage of repeat values)
should be driven by clinical relevance and are thus subjective and
variable from study to study and across interpretation teams.
In our algorithm, we chose multiple comparison alpha adjustments and thresholds so that approximately 25% of sites would
be flagged. Whether other approaches to algorithm creation
implemented in other studies would have the same properties
is unknown.
An algorithm derived by an actual study team that includes
clinicians and statisticians working together over an extended
time frame with intensive knowledge of the protocol and lessons learned from predecessor studies would be expected to
have better performance. For example, the actual study team
found that the dose-response curve formed by the FEV1 and
FVC was the best approach to assess the biological plausibility
of spirometry data. The IAC and interpretation team did not
have enough information to apply this approach. Also, the
interpretation team only had 2 weeks working together to interpret and refine the statistical approach, did not receive the
protocol prior to review of the statistical output, and had not
previously worked with each other. Finally, although our experiment included a "small study," it can be argued that the "small study" did not adequately test statistical methods in truly small studies (eg, 30 sites and 3 subjects per site).
Conclusions and Recommendations
We recommend that statistical monitoring within a data-science approach be considered to augment other study quality management activities, including on-site monitoring and review of risk indicators.16 Consistent with previous authors,4,5 we caution against using unqueried data for statistical assessment. To
this end, "cohort database cleaning" approaches or data safety monitoring board (DSMB) submission cycles provide excellent triggers for statistical monitoring evaluations. Moreover, risk-based monitoring (RBM) methodology emphasizes real-time central monitoring, supporting ongoing data query resolution. Statistical monitoring can be used to identify data anomalies suggestive of fabrication, noncompliance, or other nonrandom errors that need further investigation or monitoring, and should balance sensitivity against the cost of investigating false-positive sites. The choice of statistical methods should be influenced by study design and data collection characteristics.
Although this experiment focused on detecting fabricated data,
other signals of noncompliance are equally important. Statistical methods should include evaluations of the following (two of these checks are sketched after the list):

- standard deviation distributions,
- repeat values,
- digit preference,
- correlations,
- visit date plausibility (holidays, Sundays), and
- adverse event underreporting.
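Two of the checks listed above, digit preference and repeat values, can be sketched as follows. This is an illustrative example only (not the article's implementation); the function names are invented, and a real tool would work from values as recorded (to preserve trailing zeros) and adjust for multiple comparisons.

from collections import Counter
from scipy.stats import chisquare

def terminal_digit_pvalue(recorded_values):
    """Chi-square test of whether the last recorded digit is uniform over 0-9;
    a strong preference for particular digits can signal rounding or
    fabrication. `recorded_values` are strings as captured on the CRF."""
    digits = [int(v.replace(".", "")[-1]) for v in recorded_values]
    counts = [Counter(digits).get(d, 0) for d in range(10)]
    return chisquare(counts).pvalue  # uniform expected frequencies by default

def repeat_value_fraction(values):
    """Fraction of observations that duplicate an earlier observation at a
    site; an unusually high fraction merits follow-up."""
    counts = Counter(values)
    repeats = sum(c - 1 for c in counts.values() if c > 1)
    return repeats / len(values) if values else 0.0

# Examples with made-up spirometry values (liters):
# terminal_digit_pvalue(["2.10", "3.20", "2.50", "2.80", "3.10", "2.90"])
# repeat_value_fraction([2.1, 2.1, 2.1, 3.4, 2.8])  ->  0.4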
Statistical monitoring should be performed iteratively
throughout a study starting as soon as there is adequate data
volume and at a frequency determined by the pace of data
collection or aligned with cohort cleaning or DSMB submission schedules. Small, rapidly enrolling studies may receive
limited benefit from many statistical methods during study
conduct except when extreme outliers are present; however,
some graphical techniques may be useful when low–data volume conditions exist, including early during study conduct.
The utility of graphical techniques may be highest early in a
study, or within a small study, while algorithm application may
be more useful later in a study with a large number of sites (eg,
>100 sites) and subjects (eg, >6 subjects/site).
Using a study-specific algorithm, large amounts of data can
be reviewed at multiple stages during the study with minimal
time expenditure. Discussions of statistical monitoring algorithms should be initiated during study planning and refined
throughout the study. Algorithms should never be known to the
site investigators.
We suggest a process that includes
1. creation of a statistical monitoring SAP,
2. creation of a cross-functional interpretation team,
3. review of statistical results by the interpretation team,
4. development of a scoring algorithm for consistency and efficiency (a minimal scoring sketch follows this list), and
5. application of the algorithm with interpretation team review and adjustment throughout the study.
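As a minimal sketch of step 4, a site-level scoring algorithm can be as simple as a weighted sum of individual check results; the check names and weights below are invented for illustration and would in practice be chosen by the study team for clinical relevance.

def composite_site_score(check_results, weights):
    """`check_results` maps check name -> 0/1 flag (or a scaled 0-1 value) for
    one site; `weights` maps the same names to study-team-chosen weights.
    Sites are then ranked or flagged by this composite score."""
    return sum(weights[name] * check_results.get(name, 0.0) for name in weights)

# Example with made-up weights and flags:
# weights = {"sd_outlier": 2.0, "digit_preference": 1.0, "repeat_values": 1.0}
# composite_site_score({"sd_outlier": 1, "repeat_values": 1}, weights)  ->  3.0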
The statistical monitoring field is in development, and we
encourage further investigation of best practices and reporting
on methods found to be successful in improving data quality
and integrity. Further work is particularly needed regarding the
application of statistical monitoring in low–data volume
conditions, including few subjects per site and few study visits
per subject, as is characteristic early during study conduct. This study supports a cross-functional, collaborative approach to statistical monitoring rather than a fully automated approach. The approach should be adapted to study
design and data source and should use a combination of statistical screening techniques and confirmatory graphics reviewed
by a multidisciplinary interpretation team.
Acknowledgments
The authors acknowledge the following individuals for medical and
scientific consultation: Vincent Gimino, MD (The Corvallis Clinic,
PC, Corvallis, OR, USA); Jakub Lata, MD (Synexus Clinical
Research, Clinic of Allergology, Chair of Pulmonology & Allergology, Medical University of Gdańsk, Gdansk, Poland); Anna Ploszczuk, MD, PhD (private medical practice, Pediatric-Allergology Clinic,
Bialystok, Poland); Anton Poterajlo, MBBS (Synexus Clinical
Research Ltd, Birmingham Research Park, Edgbaston, Birmingham,
UK); Roy C. St. John, MD (Aventiv Research, Columbus, OH, USA);
Vivienne van de Walle, MD, PhD (PT&R, Beek, Limburg, the Netherlands); Paul Weinberg, MD (Gwinnett Biomedical Research, Lawrenceville, GA, USA); Alun Bedding, PhD (Biostatistics, Roche
Products, Welwyn Garden City, Hertfordshire, UK); and Marcin
Makowski, MD, PhD (GMD Clinical Operations, AstraZeneca, Warsaw, Poland). The authors acknowledge the members of the interpretation team: John Polzer, DVM, MS, MS (Global Statistical Sciences,
Eli Lilly and Company, Indianapolis, IN, USA); Andy Lawton,
CSTAT (Biometrics and Data Management, Boehringer-Ingelheim,
Bracknell, UK); Li-An Xu, PhD (Bristol-Myers Squibb, Hopewell,
NJ, USA); Haiyuan Zhu, PhD (Statistical Science, Allergan, Plc, Jersey City, NJ, USA); Natalya Makulova, MD, PhD, Respiratory GMed
(Global Medicine Development, AstraZeneca, Gaithersburg, MD,
USA); Craig Allen Serra, MBA, MS, PMP, CCDM (Development
Operations, Pfizer Inc, New York, NY, USA); Brian Neptune, BA
(Project Clinical Platforms & Sciences, GlaxoSmithKline, Research
Triangle Park, NC, USA); Janice White, RN (Clinical Management,
GlaxoSmithKline, Mississauga, Ontario, Canada); and Jakub Tyszecki, MSc (Data Management, AstraZeneca, Warsaw, Poland). The
authors gratefully acknowledge the support of TransCelerate BioPharma Inc, a nonprofit organization dedicated to improving the
health of people around the world by accelerating and simplifying
the research and development (R&D) of innovative new therapies.
The organization’s mission is to collaborate across the global biopharmaceutical R&D community to identify, prioritize, design, and facilitate implementation of solutions designed to drive the efficient,
effective, and high-quality delivery of new medicines. The authors
also thank the CRO Forum established by ACRO for their review of
the draft manuscript.
Author’s Note
Anne S. Lindblad, Gaurav Sharma, Gary R. Gensler, Zorayr Manukyan, Abigail G. Matthews, and Yodit Seifu also are members of the
interpretation team.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to
the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
1. Baigent C, Harrell FE, Buyse M, Emberson JR, Altman DG.
Ensuring trial validity by data quality assurance and diversification of monitoring methods. Clin Trials. 2008;5:49-55.
2. US Food and Drug Administration. Guidance for industry oversight
of clinical investigations: a risk-based approach to monitoring.
http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm269919. Accessed April 18, 2014.
3. Buyse M, George SL, Evans S, et al. The role of biostatistics in
the prevention, detection and treatment of fraud in clinical trials.
Stat Med. 1999;18:3435-3451.
4. Venet D, Doffagne E, Burzykowski T, et al. A statistical approach
to central monitoring of data quality in clinical trials. Clin Trials.
2012;9:705-713.
5. Lindblad AS, Manukyan Z, Purohit-Sheth T, et al. Central site
monitoring: results from a test of accuracy in identifying trials and
sites failing Food and Drug Administration inspection. Clin
Trials. 2014;11:205-217.
6. Kirkwood AA, Cox T, Hackshaw A. Application of methods for
central statistical monitoring in clinical trials. Clin Trials. 2013;
10:783-806.
7. Knepper D, Fenske C, Nadolny P, et al. Detecting data quality
issues in clinical trials: current practices and recommendations.
Therapeutic Innovation & Regulatory Science. 2016;50:15-21.
8. O’Kelly M. Using statistical techniques to detect fraud: a test
case. Pharm Stat. 2004;3:237-246.
9. Pogue JM, Devereaux PJ, Thorlund K, et al. Central statistical monitoring: detecting fraud in clinical trials. Clin Trials. 2013;10:225-235.
10. Wu X, Carlsson M. Detecting data fabrication in clinical trials
from cluster analysis perspective. Pharm Stat. 2011;10:257-264.
11. Fitzmaurice GM, Laird NM, Ware JH. Applied Longitudinal Analysis. New York, NY: John Wiley & Sons Inc; 2005.
12. Mahalanobis PC. On the generalised distance in statistics. Proc
Natl Inst Sci (Calcutta). 1936;2:49-55.
13. Murrell P. R Graphics. Boca Raton, FL: Chapman and Hall/CRC;
2006.
14. Heath D. Effective graphics made simple using SAS/GRAPH SG procedures. Paper presented at: SAS Global Forum 2008, March 16-19, 2008, San Antonio, TX. http://www2.sas.com/proceedings/forum2008/255-2008.pdf.
15. Wegman EJ. Hyperdimensional data analysis using parallel coordinates. J Am Stat Assoc. 1990;85:664-675.
16. Gough J, Wilson B, Zerola M, et al. Defining a central monitoring
capability: sharing the experience of TransCelerate BioPharma’s
approach, part 2. Therapeutic Innovation & Regulatory Science.
2016;50:8-14.
[Attachment extract: Central Monitoring Analytics workbook. The spreadsheet content does not survive this text export as readable tables; only the sheet and column structure, and representative content, are retained below.

- CRF misalignment with IVRS/IWRS (treatment administration system): issues listed by site.
- AE rate per site: columns include Adverse Event ID, Serious AE Flag, Adverse Event Term, Lower Level Term, Preferred Term (with PT code), High Level Term, System Organ Class, Country, Site, numbers of AEs for not-enrolled / all-entered / enrolled subjects, subject counts (total, enrolled, not enrolled), total trial days per site, AE rate per site, and Study (I1V-MC-EIBH). The listed events are mostly routine AEs (eg, bronchitis, sinusitis, constipation, back pain) with a small number of serious AEs (eg, acute exacerbation of COPD, acute respiratory failure, pulmonary embolism); most AE rates per site are about 0.002, with one site at about 0.034.
- Re-query metrics: columns include Site, Number of Queries, Number of Records, Subjects Entered, Time to First Requery (days), requery response rate, an overdue threshold (30 days), and an additional response tolerance (5 days). Response rates are 100% across the listed sites, with time to first requery ranging from less than a day to roughly a month.
- Query listing: columns include Query ID, Status (Open / Answered / Closed), Subject, Site, form/panel/page (eg, EX1001, CM1001, MH7001, SV1001, DS1001, VS1001, and AE forms; Source system "INFORM"), Issued By and Issued Date, Description, First Response Date, Response Date, Closed By and Closed Date, Re-opened Date, Original Value, and Study. Typical query text concerns gaps between lead-in, double-blind, and open-label treatment dates; concomitant-medication start/stop date inconsistencies (eg, lead-in atorvastatin); medical history events duplicated from the prespecified medical history form; missing data; screen-failure clarifications; and visit-date or vital-signs discrepancies.]
