Next-Generation Phenotyping of Electronic Health Records
Abstract
The national adoption of electronic health records (EHR) promises to make an unprecedented amount of data available for clinical research, but the data are complex, inaccurate, and frequently missing, and the record reflects complex processes aside from the patient's physiological state. We believe that the path forward requires studying the EHR as an object of interest in itself, and that new models, learning from data, and collaboration will lead to efficient use of the valuable information currently locked in health records.
Introduction
The national push for electronic health records (EHR)1 will make an unprecedented amount of clinical information available for research; approximately one billion patient visits may be documented per year in the USA. These data may lead to discoveries that improve understanding of biology, aid the diagnosis and treatment of disease, and permit the inclusion of more diverse populations and rare diseases. EHR can be used in much the same way that paper records have been used, with manual extraction and interpretation of clinical information. The larger promise, however, lies in large-scale use, automatically feeding clinical research, quality improvement, public health, etc. Such uses require high-quality data, which are often lacking in EHR. In this paper, we investigate a path forward for exploiting EHR data.
Challenges
Unfortunately, the EHR presents many challenges.2
Completeness
The data are largely missing in several ways. Data are occasionally missing by error, in the sense that data that would normally be expected to be recorded are lacking. Data are frequently missing in the sense that patients move among institutions for their care, so that individual institutional databases contain only part of their care, and health information exchange is not yet pervasive enough to address the issue; the result is data fragmentation for research and disruption of clinical care. Data are also missing in the sense that they are only recorded during healthcare episodes, which usually correspond to illness. In addition, much information is implicit, under the assumption that the human reader will infer the missing information (eg, pertinent negative findings). The result is a time series that is very far from the rigorous data collection normally employed in formal experiments. Referring to the statistical taxonomy of missingness,3 health record data are certainly not missing at random, and might facetiously even be referred to as 'almost completely missing'.
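The effect of such 'not missing at random' data can be sketched in a few lines. The following is a hypothetical simulation, not drawn from any real dataset: a lab value is recorded mostly when the patient is sick enough to visit, so the missingness depends on the unobserved value itself and the observed mean is biased upward.

```python
import random

random.seed(0)

# Simulate 1000 patients' true glucose values (illustrative numbers only),
# then record a value only when the patient is likely to seek care:
# always when glucose is high, otherwise only on a 10% routine visit.
true_glucose = [random.gauss(100, 25) for _ in range(1000)]
recorded = [g for g in true_glucose if g > 140 or random.random() < 0.1]

true_mean = sum(true_glucose) / len(true_glucose)
observed_mean = sum(recorded) / len(recorded)

print(f"true mean: {true_mean:.1f}")
print(f"observed mean: {observed_mean:.1f}")  # biased upward
```

Because recording is triggered by illness, the recorded subset overrepresents high values; no reweighting scheme that assumes data are missing at random can recover the truth from the recorded values alone.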
Accuracy
The data are frequently inaccurate,4 resulting in a loss of predictive power.5 Errors can occur anywhere in the process from observing the patient, to conceptualizing the results, to recording them in the record, and recording is influenced by billing requirements and avoidance of liability. Whereas some errors may be treated as random, many errors, such as influence from billing, are systematic. In addition, there is often a mismatch between the nominal definition of a concept and the intent of the author. For example, PERRLA is an acronym commonly used in the eye examination that stands for 'pupils equal, round, and reactive to light and accommodation'. It is unclear, however, how frequently clinicians actually test for each of the properties. In the CUMC database, 2% of patients who were missing one eye were documented in a narrative note as being PERRLA, an impossibility because two eyes are required to have equal pupils, and another 8% of those patients were documented as being PERRLA on the left or on the right, which is a misuse of the term. A researcher looking for subjects whose pupils were equal or accommodated normally could not rely on a notation of PERRLA in the chart.
Complexity
Healthcare is highly complex. It includes a mixture of many continuous variables and a large number of discrete concepts. For example, at CUMC, there are 136 035 different concepts that may be stored in the database. There is an enormous amount of work being done to create knowledge structures to define the information, including formal definitions, classification hierarchies, and inter-concept relationships (eg, clinical element model).6 Maintaining such a structure will remain a challenge, however. There may also be local variation both in structure7 and in definition and use,8 and even within an institution definitions vary over time. Much of the most important information in the record, such as symptoms and thought processes, is stored as narrative notes, which require natural language processing9 to generate a computable form. Temporal attributes are highly complex, with time scales from seconds to years and with different levels of uncertainty.10
Bias
The above challenges, including systematic errors, can result in significant bias when health record data are used naively for clinical research. For instance, in one EHR study of community-acquired pneumonia,11 patients who came to the emergency department and died quickly did not have many symptoms entered into the EHR. As a result, an attempt to repeat Fine's pneumonia study12 using EHR data showed that the apparently healthiest patients died at a higher rate than sicker patients. Ultimately, healthcare data reflect a complex set of processes13 (figure 1), with many feedback loops. For instance, physicians request tests relevant to the patient's current condition, and testing guides the diagnosis, which determines the treatment and future testing. Such feedback loops produce non-linear recording effects that do not reflect the underlying physiology that researchers may be attempting to study. Put another way, EHR data are not simply research data with noise and missing values. The extent and bias of the noise and missingness are sufficient to require fundamentally different methods to analyze the data.
Figure 1
State of the art
Fortunately, it appears that EHR do contain sufficient information: clinicians generally use health records effectively. They learn to navigate the complexity of the record and to fill in implicit information. Reusing the information for research should be possible, but having a clinician interpret the record for every case is infeasible for large studies.
To address the challenges, the task is generally broken into two steps. The first step, which can be called phenotyping or feature extraction, transforms the raw EHR data into clinically relevant features. The second step uses these features for traditional research tasks, such as measuring associations for discovery or assessing eligibility for a trial, as if a research coordinator had manually entered and verified the features. For the most part, the EHR challenges are addressed in the first step so that large EHR databases can become large research databases that can then undergo traditional analysis.
Studies employing large-scale EHR data have begun to appear,14–19 and most of them use this two-step approach. The state of the art in feature extraction is to use a heuristic, iterative approach to generate queries that run across the entire EHR database. For example, clinical experts may read each record for a subset of subjects and create a curated dataset. A knowledge engineer generates a heuristic rule that maps record data to each variable in the study (eg, physician notes, billing codes, and medications may all be used to infer the presence of a disease). The rule is tested on the curated subset, and the rule is modified iteratively until sensitivity and specificity reach some threshold. The rule is then applied to the entire cohort.
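A heuristic rule of the kind described above might look as follows. This is a hypothetical sketch, not a validated algorithm: the diagnosis codes, drug names, and laboratory threshold are illustrative assumptions, and a real rule would be tuned against a curated gold standard.

```python
# Hypothetical phenotype rule for "type 2 diabetes" that combines billing
# codes, medications, and laboratory values, requiring two independent
# lines of evidence to limit noise from any single source.
def has_t2dm(record):
    icd9 = any(code.startswith("250") for code in record.get("billing_codes", []))
    meds = any(drug in {"metformin", "glipizide"} for drug in record.get("medications", []))
    labs = any(val >= 6.5 for name, val in record.get("labs", []) if name == "HbA1c")
    return sum([icd9, meds, labs]) >= 2

patient = {
    "billing_codes": ["250.00", "401.9"],
    "medications": ["metformin"],
    "labs": [("HbA1c", 7.2)],
}
print(has_t2dm(patient))  # True
```

Even this toy rule shows why authoring is slow: every code prefix, drug list, and threshold embodies a judgment that must be validated and maintained per institution.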
While this avoids most case-by-case review, it still requires feature-by-feature authoring of queries. These methods are themselves time consuming;20 furthermore, there is much potentially useful information that is not used, the queries may be time consuming to maintain, and knowledge engineers and clinical experts bring their own biases. To draw an analogy with computational biology, imagine attempting high-throughput research in which each investigator had to spend months verifying each of thousands of variables before collecting data. As we move to large-scale mining of the EHR, defining the queries has become a bottleneck. Efforts like eMERGE21 are showing significant progress in generating and sharing queries across institutions,22,23 but local variations remain, and defining even a small number of phenotypes can take a group of institutions years. Despite advances in ontologies and language processing, the process remains largely unchanged since the earliest days,24 using detective work and alchemy to get gold phenotypes from base data.
Next-generation phenotyping
There are several ways to improve on the current state. One approach improves on the current phenotyping process, either by making it more accurate or by reducing the knowledge engineering effort. We refer to the latter as 'high-throughput phenotyping'. The term could be applied to the current state of the art because even a manually generated query can be run on a large database, but we suggest reserving the term for truly high-throughput approaches that do not require years to generate a handful of phenotypes. A high-throughput approach should generate thousands of phenotypes with minimal human intervention such that they could be maintained over time.
To improve phenotyping substantially, we believe that there needs to be a radical shift in approach and that the answer lies in a familiar place for informatics: a combination of top-down knowledge engineering and bottom-up learning from the data. In particular, we believe that we need a better understanding of the EHR. The EHR is not a direct reflection of the patient and physiology, but a reflection of the recording process inherent in healthcare, with noise and feedback loops. We must study the EHR as an object in itself, as if it were a natural system. This better understanding will then naturally support both broad-based outcome-oriented research and physiological research.
One component is a healthcare process model that represents how processes occur and how data are recorded (figure 2). Some aspects of the healthcare process model are being defined, for example, through research related to SNOMED, Health Level 7, and the Clinical Element Model,25–27 but they do not directly address the recording process, so additional modeling efforts are likely to be needed. Such efforts might group variables into types and might include temporal patterns of data capture. Given the complexity of healthcare and the number of human and organizational influences, a top-down model is unlikely to be sufficient. Therefore, a second component is also needed: we must mine the EHR data to learn the idiosyncrasies of the healthcare process and their effects on the recording process. That is, we believe that the interactions and dependencies are too complex to model and predict at a detailed level (eg, intention vs definition, team interactions), so empirical measurement of the relationships among data elements will be essential.
Figure ii
A rigorous model populated with characteristics learned from the data could improve phenotyping in several ways. For example, it may be possible to map raw data, such as a time series of diagnosis codes, to a probability of disease.28 If biases can be quantified, for example, the degree to which a given variable tends to over- or underestimate a feature, then one could avoid sources that are most biased, or one could combine sources that have bias in opposite directions. The process of generating a phenotype query would then become less heuristic and more data driven.
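The idea of combining sources with bias in opposite directions can be made concrete. In this hypothetical sketch, the prevalence estimates and bias values are invented for illustration; in practice the biases would be measured against a curated reference set.

```python
# Assume two sources estimate the prevalence of a phenotype with known,
# systematic biases measured on a curated subset: billing codes overcount
# and narrative notes undercount. Subtracting the measured bias from each
# source and averaging cancels much of the systematic error.
def debias(estimate, bias):
    return estimate - bias

billing_estimate = 0.14   # billing codes, measured bias +0.03
notes_estimate = 0.09     # narrative notes, measured bias -0.02

corrected = [debias(billing_estimate, +0.03), debias(notes_estimate, -0.02)]
combined = sum(corrected) / len(corrected)
print(f"combined prevalence estimate: {combined:.2f}")  # 0.11
```

The point is not the arithmetic but the shift in workflow: once biases are quantified, the combination step is mechanical rather than heuristic.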
A full review of the data mining methods appropriate to phenotyping is beyond the scope of a perspective, but the following are especially relevant. First is simply characterizing the raw data with frequencies, co-occurrences, and, when possible, predictive value with respect to desired phenotypes (eg, how accurate are International Classification of Diseases, version 9 codes). Dimension reduction using algorithms like principal component analysis (empirical orthogonal functions)29,30 addresses the many disparate variables that comprise an EHR. Instead of top-down defined phenotypes, it may be appropriate to define latent variables that have high predictive value using techniques such as latent Dirichlet allocation31,32 or other methods.33 The ability to find similar cases is often useful to define cohorts for machine learning, and has been done with symbolic and computational techniques.34,35 Clinical databases can be stratified into more regular subsets, producing more stable results.36 Natural language processing37 is of course essential to phenotype EHR data due to the narrative content.
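As a minimal sketch of the dimension reduction step, the following computes principal components via the singular value decomposition on a toy patient-by-feature matrix. The matrix is synthetic (two latent factors plus noise); real EHR use would start from thousands of sparse, heterogeneous features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_features = 200, 10

# Construct correlated features: two hidden factors drive all ten columns,
# mimicking a small number of latent clinical states behind many variables.
latent = rng.normal(size=(n_patients, 2))
loadings = rng.normal(size=(2, n_features))
X = latent @ loadings + 0.1 * rng.normal(size=(n_patients, n_features))

Xc = X - X.mean(axis=0)                    # center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()      # variance explained per component

scores = Xc @ Vt[:2].T                     # project patients onto top 2 components
print(f"variance captured by 2 components: {explained[:2].sum():.2%}")
```

Because the data were generated from two factors, two components capture nearly all the variance; the per-patient scores could then serve as candidate latent phenotypes for downstream analysis.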
While time has long been a research topic in informatics,38,39 further work may be needed. This includes temporal modeling and abstraction40,41 (including temporal handling of narrative data),42 as well as purely numeric approaches, including non-linear time series analysis drawn from the physics literature. The latter includes aggregation of short time series,43 especially as applied to health record data and modified to accommodate non-equally spaced time series.44 Researchers have noted that missingness itself is a useful feature in producing phenotypes.45
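One concrete difficulty with non-equally spaced clinical time series is that many numeric methods assume a regular sampling grid. The following hypothetical sketch (times and values are invented) shows the simplest possible remedy, linear interpolation onto a regular grid; real analyses would need to handle gaps, extrapolation, and measurement context far more carefully.

```python
# Resample an irregularly sampled lab series onto a regular grid by
# linear interpolation between bracketing observations.
def resample(times, values, grid):
    out = []
    for t in grid:
        for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
            if t0 <= t <= t1:
                frac = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                out.append(v0 + frac * (v1 - v0))
                break
    return out

times = [0.0, 1.5, 4.0, 9.0]           # days since admission (illustrative)
values = [98.0, 110.0, 104.0, 120.0]   # glucose, mg/dL (illustrative)
regular = resample(times, values, grid=[0, 2, 4, 6, 8])
print(regular)
```

Note that interpolation silently manufactures values at times when no one measured the patient, which is exactly why the healthcare process behind the sampling pattern must be modeled rather than assumed away.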
We can also improve the use of EHR data at the second step, the discovery phase, which may include classification (eg, clinical trial eligibility), prediction (eg, readmission rate), understanding (eg, physiology), and intervention. Sensitivity to EHR bias may depend on the goal: prediction may be accurate even if important confounders are not measured in the EHR, but unmeasured confounders could mislead our understanding of physiology.
Even if EHR bias or noise cannot be measured, it may be possible to factor it out. In one study, patient data were normalized to reduce interpatient variance, improving the estimation of the correlation among variables.46 In another, a derived property (mutual information) was used in place of traditional parameters such as glucose because they had too much variation between patients.47 In other cases, when the biases and noise cannot be eliminated, perhaps they can be understood. For example, it may be useful to characterize discovered associations as being due to the healthcare process (eg, physician's intention) versus due to physiology.46 Although it is challenging, a number of techniques may be used to infer causation, including dynamic Bayesian networks,48 Granger causality,49 and logic-based paradigms.50 Recent work demonstrated the control of the confounding effects of covariates51 with a demonstration of drugs' effects on electrocardiogram QT intervals.
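The normalization idea referenced above can be sketched minimally: rescale each patient's series to zero mean and unit variance so that subsequent correlations reflect within-patient dynamics rather than between-patient offsets. The data are invented for illustration and the cited study's actual procedure is more involved.

```python
# Per-patient z-score normalization: removes each patient's baseline
# level and scale before pooling data across patients.
def zscore(series):
    mean = sum(series) / len(series)
    var = sum((x - mean) ** 2 for x in series) / len(series)
    return [(x - mean) / var ** 0.5 for x in series]

patients = {
    "A": [90.0, 100.0, 110.0],    # runs low
    "B": [150.0, 160.0, 170.0],   # runs high, identical dynamics
}
normalized = {pid: zscore(vals) for pid, vals in patients.items()}
print(normalized["A"] == normalized["B"])  # True: baseline offsets removed
```

After normalization the two patients are indistinguishable, which is the desired behavior when the research question concerns shared dynamics rather than individual set points.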
Discussion
We believe that the full challenge of phenotyping is not broadly recognized. For example, one review of mining EHRs52 discusses interoperability and privacy as fundamental challenges, but otherwise focuses on the promise of the data rather than the data challenges, which are arguably more difficult to solve.
We believe that the phenotyping process needs to become more data driven and that we need to learn more about the recording process. We have sometimes used the phrase, 'the physics of the medical record', to point out the likely direction forward. It will require study of the EHR as if it were a natural object worthy of study in itself, and it may be helpful to adopt the general paradigm of physics, which involves modeling and aggregation. It will be helpful to pull in expertise and algorithms from many fields, including non-linear time series analysis from physics,53 new directions in causality from philosophy,50 psychology, economics, of course our usual collaborators in informatics and statistics, and even new models of research that engage the public.
Our hope is that by exploiting our ample data, we can surpass human performance and produce even more reliable phenotypes and accurate associations. To draw an analogy, a CT scanner uses data that are feasible to collect, namely external x-ray images, and deconvolves them to produce an image that reflects clinically relevant but hidden internal anatomical features. Similarly, we need to use data that are feasible to collect from EHR and deconvolve them to produce clinically relevant phenotypes that are only implicit in the raw data. Furthermore, the advanced use of EHR data, which are becoming both deep in content and broad in coverage of the nation's population, may open up new ways to look at clinical research, studying detailed physiology (including fine laboratory measurements) over large populations, in what might be called population physiology.47 To draw one more analogy from physics, we can move from studying weather (individual phenotypes) to studying climate (properties of phenotypes over populations and time).
Systematic changes in the adoption and use of EHR, such as those promoted by the HITECH incentive program (meaningful use),1 will probably have large effects on how EHR data get used in research. For example, structured data entry for meaningful use, quality measurement, or value-based purchasing should improve the volume and quality of data available to research. Variables that have been notoriously hard to collect, such as smoking history, may become more broadly available. On the other hand, forced data entry can introduce biases that are difficult to detect or correct. Health information exchange, which pulls together not only multiple EHR but also new data sources such as pharmacy fill data, should reduce data fragmentation, although researchers will need to contend with heterogeneous data definitions and data entry cultures. Therefore, even in a new era of the increased use of EHR, a deep understanding of EHR data will be critical.
Furthermore, this must not be a one-way street; improved understanding of the EHR must be fed back to improve the EHR. For example, better understanding of missing data, inaccuracies, and biases could lead to improved user interfaces, data definitions, and even workflows. The long-term vision of an EHR platform that supports clinical care, research, and public health will only be accomplished with better understanding and true innovation.
Contributions
The authors are responsible for: the conception and design, acquisition of data, and analysis and interpretation of data; drafting the article and revising it; and final approval of the version to be published.
Funding
This work was funded by a grant from the National Library of Medicine, 'Discovering and applying knowledge in clinical databases' (R01 LM006910).
Competing interests
None.
Provenance and peer review
Not commissioned; externally peer reviewed.
Correction notice
This article has been corrected since it was published Online First. It is now unlocked.
Open Access
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/
References
1. Blumenthal D, Tavenner M. The "meaningful use" regulation for electronic health records. N Engl J Med 2010;363:501–4.
2. Weiskopf NG, Weng C. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc 2013;20:144–51.
3. Heitjan DF, Basu S. Distinguishing "missing at random" and "missing completely at random". Am Statistician 1996;50:207–13.
4. Hogan WR, Wagner MM. Accuracy of data in computer-based patient records. J Am Med Inform Assoc 1997;4:342–55.
5. Sagreiya H, Altman RB. The utility of general purpose versus specialty clinical databases for research: warfarin dose estimation from extracted clinical variables. J Biomed Inform 2010;43:747–51.
6. Tao C, Parker CG, Oniki TA. An OWL meta-ontology for representing the clinical element model. AMIA Annu Symp Proc 2011;2011:1372–81.
7. Pryor TA, Hripcsak G. Sharing MLM's: an experiment between Columbia-Presbyterian and LDS Hospital. Proc Annu Symp Comput Appl Med Care 1993:399–403.
8. Hripcsak G, Kuperman GJ, Friedman C. Extracting findings from narrative reports: software transferability and sources of physician disagreement. Methods Inf Med 1998;37:1–7.
9. Friedman C, Hripcsak G. Natural language processing and its future in medicine. Acad Med 1999;74:890–5.
10. Hripcsak G, Zhou L, Parsons S. Modeling electronic discharge summaries as a simple temporal constraint satisfaction problem. J Am Med Inform Assoc 2005;12:55–63.
11. Hripcsak G, Knirsch C, Zhou L. Bias associated with mining electronic health records. J Biomed Discov Collab 2011;6:48–52.
12. Fine MJ, Auble TE, Yealy DM. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med 1997;336:243–50.
13. Boustani MA, Munger S, Gulati R. Selecting a change and evaluating its impact on the performance of a complex adaptive health care delivery system. Clin Interv Aging 2010;5:141–8.
14. Kurreeman F, Liao K, Chibnik L. Genetic basis of autoantibody positive and negative rheumatoid arthritis risk in a multi-ethnic cohort derived from electronic health records. Am J Hum Genet 2011;88:57–69.
15. Brownstein JS, Murphy SN, Goldfine AB. Rapid identification of myocardial infarction risk associated with diabetes medications using electronic medical records. Diabetes Care 2010;33:526–31.
16. Denny JC, Ritchie MD, Crawford DC. Identification of genomic predictors of atrioventricular conduction: using electronic medical records as a tool for genome science. Circulation 2010;122:2016–21.
17. Chen DP, Morgan AA, Butte AJ. Validating pathophysiological models of aging using clinical electronic medical records. J Biomed Inform 2010;43:358–64.
18. Kullo IJ, Fan J, Pathak J. Leveraging informatics for genetic studies: use of the electronic medical record to enable a genome-wide association study of peripheral arterial disease. J Am Med Inform Assoc 2010;17:568–74.
19. Hripcsak G, Austin JHM, Alderson PO. Use of natural language processing to translate clinical information from a database of 889,921 chest radiographic reports. Radiology 2002;224:157–63.
20. Wilcox AB, Hripcsak G. The role of domain knowledge in automating medical text report classification. J Am Med Inform Assoc 2003;10:330–8.
21. McCarty CA, Chisholm RL, Chute CG; eMERGE Team. The eMERGE Network: a consortium of biorepositories linked to electronic medical records data for conducting genomic studies. BMC Med Genomics 2011;4:13.
22. Conway M, Berg RL, Carrell D. Analyzing the heterogeneity and complexity of Electronic Health Record oriented phenotyping algorithms. AMIA Annu Symp Proc 2011;2011:274–83.
23. Kho AN, Pacheco JA, Peissig PL. Electronic medical records for genetic research: results of the eMERGE consortium. Sci Transl Med 2011;3:79re1.
24. Warner HR. Knowledge sectors for logical processing of patient data in the HELP system. Proceedings of the International Conference on Interactive Techniques in Computer-Aided Design; Bologna, Italy. New York: IEEE, 1978:401–4.
25. Scott P, Worden R. Semantic mapping to simplify deployment of HL7 v3 Clinical Document Architecture. J Biomed Inform 2012;45:697–702.
26. Heymans S, McKennirey M, Phillips J. Semantic validation of the use of SNOMED CT in HL7 clinical documents. J Biomed Semantics 2011;2:2.
27. Tao C, Parker CG, Oniki TA. An OWL meta-ontology for representing the clinical element model. AMIA Annu Symp Proc 2011;2011:1372–81.
28. Perotte A, Hripcsak G. Using density estimates to aggregate patients and summarize disease evolution (poster). In: AMIA Summit on Translational Bioinformatics. San Francisco, CA: AMIA, 2011;138.
29. Pearson K. On lines and planes of closest fit to systems of points in space. Phil Mag 1901;2:559–72.
30. Weare BC, Navato AR, Newell RE. Empirical orthogonal analysis of Pacific sea surface temperatures. J Phys Oceanogr 1976;6:671–9.
31. Blei DM, Ng AY, Jordan MI. Latent Dirichlet allocation. J Mach Learn Res 2003;3:993–1022.
32. Perotte A, Bartlett N, Elhadad N. Hierarchically supervised latent Dirichlet allocation. Twenty-Fifth Annual Conference on Neural Information Processing Systems; 12–15 December 2011, Granada, Spain, 2011:2609–17.
33. Tatonetti NP, Denny JC, Murphy SN. Detecting drug interactions from adverse-event reports: interaction between paroxetine and pravastatin increases blood glucose levels. Clin Pharmacol Ther 2011;90:133–42. doi:10.1038/clpt.2011.83.
34. Melton GB, Parsons S, Morrison FP. Inter-patient distance metrics using SNOMED CT defining relationships. J Biomed Inform 2006;39:697–705.
35. Hripcsak G, Knirsch C, Zhou L. Using discordance to improve classification in narrative clinical databases: an application to community-acquired pneumonia. Comput Biol Med 2007;37:296–304.
36. Altiparmak F, Ferhatosmanoglu H, Erdal S. Information mining over heterogeneous and high-dimensional time-series data in clinical trials databases. IEEE Trans Inf Technol Biomed 2006;10:254–63.
37. Nadkarni PM, Ohno-Machado L, Chapman WW. Natural language processing: an introduction. J Am Med Inform Assoc 2011;18:544–51.
38. Shahar Y, Tu SW, Musen MA. Temporal-abstraction mechanisms in management of clinical protocols. Proc Annu Symp Comput Appl Med Care 1991:629–33.
39. Kahn MG, Fagan LM, Tu S. Extensions to the time-oriented database model to support temporal reasoning in medical expert systems. Methods Inf Med 1991;30:4–14.
40. Sacchi L, Larizza C, Combi C. Data mining with temporal abstractions: learning rules from time series. Data Mining and Knowledge Discovery 2007;15:217–47.
41. Moskovitch R, Shahar Y. Medical temporal-knowledge discovery via temporal abstraction. AMIA Annu Symp Proc 2009:452–6.
42. Zhou L, Hripcsak G. Temporal reasoning with medical data: a review with emphasis on medical natural language processing. J Biomed Inform 2007;40:183–202.
43. Komalapriya C, Thiel M, Romano MC. Reconstruction of a system's dynamics from short trajectories. Phys Rev E 2008;78:066217.
44. Albers DJ, Hripcsak G. A statistical dynamics approach to the study of human health data: resolving population scale diurnal variation in laboratory data. Physics Letters A 2010;374:1159–64.
45. Lin JH, Haug PJ. Exploiting missing clinical data in Bayesian network modeling for predicting medical problems. J Biomed Inform 2008;41:1–14.
46. Hripcsak G, Albers DJ, Perotte A. Exploiting time in electronic health record correlations. J Am Med Inform Assoc 2011;18:i109–i115.
47. Albers DJ, Schmidt M, Hripcsak G. Population physiology: conjoining EHR dynamics with physiological modeling (abstract). In: AMIA Summit on Translational Bioinformatics. San Francisco, CA: AMIA, 2011:1.
48. van Gerven M, Taal B, Lucas P. Dynamic Bayesian networks as prognostic models for clinical patient management. J Biomed Inform 2008;41:515–29.
49. Granger CW. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 1969;37:424–38.
50. Kleinberg S, Mishra B. The temporal logic of causal structures. In: Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI); 18–21 June 2009; Montreal, QC, Canada. Corvallis, Oregon: AUAI Press, 2009:303–12.
51. Tatonetti NP, Ye PP, Daneshjou R. Data-driven prediction of drug effects and interactions. Sci Transl Med 2012;4:125ra31.
52. Jensen PB, Jensen LJ, Brunak S. Mining electronic health records: towards better research applications and clinical care. Nat Rev Genet 2012. doi:10.1038/nrg3208.
53. Jensen PB, Jensen LJ, Brunak S. Mining electronic health records: towards better research applications and clinical care. Nat Rev Genet 2012. Published Online First 2 May 2012. doi:10.1038/nrg3208.
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial licence (http://creativecommons.org/licenses/by-nc/3.0/) which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact journals.permissions@oup.com
Source: https://academic.oup.com/jamia/article/20/1/117/2909152