<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="generator" content="pandoc" />
<title>Analysis of GPWGv3 Data Using R</title>
<script src="site_libs/jquery-1.11.3/jquery.min.js"></script>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link href="site_libs/bootstrap-3.3.5/css/united.min.css" rel="stylesheet" />
<script src="site_libs/bootstrap-3.3.5/js/bootstrap.min.js"></script>
<script src="site_libs/bootstrap-3.3.5/shim/html5shiv.min.js"></script>
<script src="site_libs/bootstrap-3.3.5/shim/respond.min.js"></script>
<script src="site_libs/navigation-1.1/tabsets.js"></script>
<style type="text/css">
h1 {
font-size: 34px;
}
h1.title {
font-size: 38px;
}
h2 {
font-size: 30px;
}
h3 {
font-size: 24px;
}
h4 {
font-size: 18px;
}
h5 {
font-size: 16px;
}
h6 {
font-size: 12px;
}
.table th:not([align]) {
text-align: left;
}
</style>
<link rel="stylesheet" href="SI-md-08.css" type="text/css" />
</head>
<body>
<style type = "text/css">
.main-container {
max-width: 940px;
margin-left: auto;
margin-right: auto;
}
code {
color: inherit;
background-color: rgba(0, 0, 0, 0.04);
}
img {
max-width:100%;
height: auto;
}
.tabbed-pane {
padding-top: 12px;
}
button.code-folding-btn:focus {
outline: none;
}
</style>
<style type="text/css">
/* padding for bootstrap navbar */
body {
padding-top: 51px;
padding-bottom: 40px;
}
/* offset scroll position for anchor links (for fixed navbar) */
.section h1 {
padding-top: 56px;
margin-top: -56px;
}
.section h2 {
padding-top: 56px;
margin-top: -56px;
}
.section h3 {
padding-top: 56px;
margin-top: -56px;
}
.section h4 {
padding-top: 56px;
margin-top: -56px;
}
.section h5 {
padding-top: 56px;
margin-top: -56px;
}
.section h6 {
padding-top: 56px;
margin-top: -56px;
}
</style>
<script>
// manage active state of menu based on current page
$(document).ready(function () {
// active menu anchor
var href = window.location.pathname;
href = href.substr(href.lastIndexOf('/') + 1);
if (href === "")
href = "index.html";
var menuAnchor = $('a[href="' + href + '"]');
// mark it active
menuAnchor.parent().addClass('active');
// if it's got a parent navbar menu mark it active as well
menuAnchor.closest('li.dropdown').addClass('active');
});
</script>
<div class="container-fluid main-container">
<!-- tabsets -->
<script>
$(document).ready(function () {
window.buildTabsets("TOC");
});
</script>
<!-- code folding -->
<div class="navbar navbar-default navbar-fixed-top" role="navigation">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="index.html">GCD Analysis</a>
</div>
<div id="navbar" class="navbar-collapse collapse">
<ul class="nav navbar-nav">
<li>
<a href="overview.html">Overview</a>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">
Analyses
<span class="caret"></span>
</a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="mdb-to-csv_07.html">Query and site .csv files from an Access database (mdb-to-csv.R)</a>
</li>
<li>
<a href="trans-and-zscore_07.html">Box-Cox Transformation and Z-Score Rescaling (trans-and-zscore.R)</a>
</li>
<li>
<a href="presample-bin_07.html">Presampling or binning of transformed data (presample-bin.R)</a>
</li>
<li>
<a href="smooth-curve_07.html">Composite curves and bootstrap C.I.'s using locfit() (smooth-curve.R)</a>
</li>
<li class="divider"></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">
Code
<span class="caret"></span>
</a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="mdb-to-csv_07_code.html">mdb-to-csv_code.R</a>
</li>
<li>
<a href="trans-and-zscore_07_code.html">trans-and-zscore.R</a>
</li>
<li>
<a href="presample-bin_07_code.html">presample-bin.R</a>
</li>
<li>
<a href="smooth-curve_07_code.html">smooth-curve.R</a>
</li>
<li class="divider"></li>
<li>
<a href="mdb-to-csv_07_code_osx.html">mdb-to-csv_code_osx.R</a>
</li>
</ul>
</li>
</ul>
<ul class="nav navbar-nav navbar-right">
</ul>
</div><!--/.nav-collapse -->
</div><!--/.container -->
</div><!--/.navbar -->
<div class="fluid-row" id="header">
<h1 class="title toc-ignore">Analysis of GPWGv3 Data Using R</h1>
</div>
<div id="TOC">
<ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#data">Data</a></li>
<li><a href="#analysis-steps">Analysis steps</a><ul>
<li><a href="#query-and-site-.csv-files">1 Query and site .csv files</a></li>
<li><a href="#transformation-and-standardizationanomalization">2 Transformation and standardization/anomalization</a></li>
<li><a href="#presamplingprebinning">3 Presampling/prebinning</a></li>
<li><a href="#composite-i.e.smooth-curves-and-bootstrap-c.i.s">4 Composite (i.e. smooth) curves and bootstrap C.I.’s</a></li>
</ul></li>
</ul>
</div>
<div id="introduction" class="section level1">
<h1>Introduction</h1>
<p>This is a set of web pages that describe the development of a composite curve of charcoal data drawn from the Global Charcoal Database v. 3, using a set of R scripts. The intention here is to explicitly document the analysis steps that were developed originally as a set of Fortran programs, and which are now implemented in the R <code>paleofire</code> package.</p>
<p>The data source for these examples is a Microsoft Access (<code>.mdb</code>) database, downloaded from <a href="https://www.gpwg.paleofire.org/">https://www.gpwg.paleofire.org/</a>, e.g., <code>GCDv03_Marlon_et_al_2015.mdb</code>. In the example here, the scripts aim to reproduce the “Globe” curve in Fig. 6 of Marlon et al. (2016).</p>
</div>
<div id="data" class="section level1">
<h1>Data</h1>
<p>The data used in this example are contained in two queries, saved as “database” views (internal tables) in the Microsoft Access database <code>GCDv03_Marlon_et_al_2015.mdb</code>, downloadable from the Global Charcoal Database (<a href="https://paleofire.org/">https://paleofire.org/</a>) by exporting the full database. The queries are named (for historical reasons) <code>ALL_BART_SITES</code> and <code>ALL_BART_DATA</code>. <code>ALL_BART_SITES</code> contains a list of sites, their names and locations, the depositional environment, and the units of measurement (i.e. influx, concentration, etc.), while <code>ALL_BART_DATA</code> contains the id, age, depth, and quantity of charcoal in each sample.</p>
<p>For the examples here, the following folder structure for the data was used:</p>
<pre><code> /Projects/GPWG/GPWGv3/GCDv3Data/v3i/
v3i_curves/
v3i_debug/
v3i_mdb/
v3i_presamp_csv/
v3i_query/
v3i_sitelists/
v3i_sites_csv/
v3i_stats/
v3i_trans_csv/</code></pre>
<p>The root folder and <code>v3i_mdb/</code> (into which the database should be copied) are created by the user; the others are created during the analyses.</p>
</div>
<div id="analysis-steps" class="section level1">
<h1>Analysis steps</h1>
<p>There are four steps in the analysis:</p>
<ol style="list-style-type: decimal">
<li>reading the query results from the data base and making individual “site” .csv files</li>
<li>transforming and standardizing or normalizing (i.e. converting to anomalies) the individual records</li>
<li>implementing the presampling/prebinning step</li>
<li>composite-curve fitting using the R <code>locfit()</code> function, or via binning, and estimating uncertainties via bootstrapping</li>
</ol>
<div id="query-and-site-.csv-files" class="section level2">
<h2>1 Query and site .csv files</h2>
<p>(<code>mdb-to-csv.R</code>)</p>
<p>The first step in the analysis involves examining the two query tables, checking for obvious issues in the data for individual sites, converting all charcoal data to both influx and concentration values, and finally extracting individual .csv files for each site. All of the scripts here begin by setting appropriate path and folder names. In the example script, the Access database files are read directly using the <code>RODBC</code> package. (Note that on Windows, compiled versions of this package exist, and the appropriate database drivers are built into the operating system. On OS X, a third-party database driver must be used, and the <code>RODBC</code> package compiled from source; a separate script, <code>mdb-to-csv_osx.R</code>, illustrates this.)</p>
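<p>As a concrete illustration, a minimal sketch of reading the two queries with <code>RODBC</code> on Windows follows; the object names are illustrative (not taken from the script itself), and the path follows the folder structure above:</p>
<pre><code># minimal sketch: read the two queries from the Access database (Windows)
library(RODBC)
dbname &lt;- "/Projects/GPWG/GPWGv3/GCDv3Data/v3i/v3i_mdb/GCDv03_Marlon_et_al_2015.mdb"
con &lt;- odbcConnectAccess2007(dbname)  # ACE driver; odbcConnectAccess() for older Jet
site_query &lt;- sqlFetch(con, "ALL_BART_SITES")  # one row per site
data_query &lt;- sqlFetch(con, "ALL_BART_DATA")   # one row per sample
odbcClose(con)</code></pre>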
<p>The main part of the script loops over the individual sites that are specified in the site query, and does various checks and calculations, including</p>
<ul>
<li>calculation of sedimentation rates and deposition times</li>
<li>checking for age or depth reversals, or other data issues</li>
<li>calculation of alternative quantities (e.g. influx, given concentrations)</li>
<li>writing out a .csv file for each site</li>
</ul>
<p>The calculation of sedimentation rates and deposition times is required for the conversion of concentration values to influx values and vice versa. Various checks for age reversals, zero sedimentation rates, and missing data are done. Typically, when a number of sites are added to the database, there will be issues, which are flagged by this step and resolved. In the example here such issues have already been resolved, but the script illustrates the checks in any case. The last part of this analysis step involves writing out one “site” .csv file for each site.</p>
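<p>A toy example of the concentration-to-influx conversion (the column names here are illustrative): sedimentation rate is the depth increment divided by the age increment, deposition time is its reciprocal, and influx is concentration times sedimentation rate.</p>
<pre><code># toy example: concentration (particles cm^-3) to influx (particles cm^-2 yr^-1)
samp &lt;- data.frame(depth   = c(0, 10, 20, 35),     # cm
                   est_age = c(0, 150, 420, 900),  # cal yr BP
                   conc    = c(12.0, 8.5, 3.2, 5.1))
sed_rate &lt;- c(diff(samp$depth) / diff(samp$est_age), NA)  # cm yr^-1; undefined at last sample
dep_time &lt;- 1 / sed_rate                                  # yr cm^-1; infinite if sed_rate is zero
samp$influx &lt;- samp$conc * sed_rate
samp</code></pre>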
</div>
<div id="transformation-and-standardizationanomalization" class="section level2">
<h2>2 Transformation and standardization/anomalization</h2>
<p>(<code>trans-and-zscore.R</code> &amp; <code>trans-and-norman.R</code>)</p>
<p>Charcoal data are reported in units that range over thirteen orders of magnitude, and charcoal records typically have “long-tailed” distributions (Power et al., 2010). In order to compare or combine records, the data must therefore be transformed to approach normality (to reduce the impact of non-constant variance) and rescaled to some common basis or range. In one example here (<code>trans-and-zscore.R</code>), we apply the variance-stabilizing Box-Cox transformation, and rescale the data to “z-scores”. (For historical reasons, the transformed data are also rescaled using the “minimax” transformation so that all values lie between 0 and 1. This reduces astonishment over transformed charcoal-influx or concentration values that may wind up being negative after transformation.) This approach also requires the specification of a base period or time interval, over which the transformation parameters are estimated and the mean and standard deviation used in calculating z-scores are computed. Further discussion of this approach can be found in Power et al. (2010) and Daniau et al. (2012).</p>
<p>A second example (<code>trans-and-norman.R</code>) illustrates the use of “normalized” anomalies, in which the deviations of the transformed charcoal-influx values from a base-period mean are divided by that mean, to produce a relative deviation, scaled by the overall level of the data. This approach is useful for last-millennium type analyses, where records with few samples can produce standard deviations that are not robust, and hence z-scores that vary dramatically.</p>
<p>The specific tasks implemented by the script include, for each site:</p>
<ul>
<li>censoring of samples with missing ages or ages after 2020 CE</li>
<li>maximum-likelihood estimation of the Box-Cox transformation parameter <code>lambda</code> (see the sketch after this list)</li>
<li>Box-Cox transformation of the data</li>
<li>minimax rescaling of the transformed data (<code>tall</code>)</li>
<li>calculation of z-scores (<code>ztrans</code>) or normalized anomalies (<code>normans</code>)</li>
<li>writing out the transformed data for each site as a .csv file</li>
</ul>
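<p>A minimal sketch of the transformation chain, using a toy vector of charcoal values: the variable names (<code>tall</code>, <code>ztrans</code>) follow the list above, but the grid search for <code>lambda</code> and the base-period handling are simplified relative to the actual script.</p>
<pre><code># Box-Cox transformation, minimax rescaling, and z-scores (simplified sketch)
quant &lt;- c(0.0, 0.3, 1.2, 4.8, 0.7, 12.5, 2.2)  # toy charcoal values
alpha &lt;- 0.01  # small shift so zero values can be transformed
y &lt;- quant + alpha
bc &lt;- function(y, lambda)
  if (abs(lambda) &lt; 1e-6) log(y) else (y^lambda - 1) / lambda
loglik &lt;- function(lambda, y) {  # Box-Cox profile log-likelihood
  yt &lt;- bc(y, lambda); n &lt;- length(y)
  -(n / 2) * log(var(yt) * (n - 1) / n) + (lambda - 1) * sum(log(y))
}
lambdas &lt;- seq(-2, 2, by = 0.01)
lambda &lt;- lambdas[which.max(sapply(lambdas, loglik, y = y))]
tall &lt;- bc(y, lambda)
tall &lt;- (tall - min(tall)) / (max(tall) - min(tall))  # minimax rescaling to [0, 1]
ztrans &lt;- (tall - mean(tall)) / sd(tall)  # in practice, base-period mean and sd are used</code></pre>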
</div>
<div id="presamplingprebinning" class="section level2">
<h2>3 Presampling/prebinning</h2>
<p>(<code>presample-bin.R</code>)</p>
<p>Charcoal data are available at all kinds of “native” resolutions, from samples that represent decades or centuries (or longer) to those that represent annual deposition. Further, some records have been interpolated to pseudo-annual time steps. In developing composite curves, records with higher resolutions will contribute disproportionately to the curve. There are two general approaches for dealing with this: 1) weighting individual charcoal (influx or concentration) values according to their resolution, with lower-resolution records receiving higher weights, and vice versa, or 2) reducing the sampling frequency of the records to some common interval (without interpolating or creating pseudo-data). We adopted the latter approach for its simplicity and transparency.</p>
<p>The binning is done by establishing a set of evenly spaced target points or bins, and then, for each charcoal record, binning the individual observations. If more than one observation falls in the same bin, the average (of the transformed and standardized data) is taken as the binned value. No effort is made to interpolate between observations, to avoid pseudo-replication. A .csv file is written out for each site.</p>
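<p>The core of the binning step can be sketched in a few lines of base R (the ages, values, and bin width here are illustrative):</p>
<pre><code># toy example: average transformed values within evenly spaced bins
age    &lt;- c(5, 12, 18, 40, 65, 130)          # sample ages (cal yr BP)
ztrans &lt;- c(0.4, 0.1, -0.2, 0.8, -0.5, 0.3)  # transformed/standardized values
binwidth &lt;- 20
bin_center &lt;- floor(age / binwidth) * binwidth + binwidth / 2
binned &lt;- aggregate(ztrans, by = list(bin_center = bin_center), FUN = mean)
binned  # one value per occupied bin; no interpolation between observations</code></pre>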
</div>
<div id="composite-i.e.smooth-curves-and-bootstrap-c.i.s" class="section level2">
<h2>4 Composite (i.e. smooth) curves and bootstrap C.I.’s</h2>
<p>(<code>smooth-curve.R</code> &amp; <code>bin-boot.R</code>)</p>
<p>The first script (<code>smooth-curve.R</code>) uses <code>locfit()</code> to get a (smooth) composite curve of the (presampled/binned) charcoal z-scores or normans for a set of sites specified by an input “sitelist” (ultimately based on all or a subset of the sites listed in the <code>ALL_BART_SITES</code> query). The smoothness of the curve is determined by the width of the smoothing window, customarily specified by the “half-width” (<code>hw</code>). First, a “global” curve (in the sense of using all of the data from a particular list of sites) is determined, and the number of sites with samples (<code>ndec_tot</code>, for historical reasons) and the number of samples that contribute to each fitted value (<code>ninwin_tot</code>) are also calculated.</p>
<p>Then, over <code>nreps</code> replications, the data are sampled by site (with replacement), and the upper and lower bounds of a 95-percent bootstrap confidence interval are determined. In the example here, the curves produced by individual bootstrap samples are plotted (in transparent gray), and the “global” curve is overplotted in red to give a visual indication of the uncertainty in the composite curve arising from the particular sample of sites.</p>
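<p>A condensed sketch of the curve fitting and site-level bootstrap, using synthetic data: the data-frame layout, half-width, and target ages are illustrative, and the actual script reads the binned .csv files rather than generating data.</p>
<pre><code># composite curve via locfit(), with bootstrap-by-site confidence intervals
library(locfit)
set.seed(42)
dat &lt;- data.frame(site = rep(1:10, each = 30),
                  age  = rep(seq(0, 2000, length.out = 30), times = 10))
dat$ztrans &lt;- sin(dat$age / 500) + rnorm(nrow(dat), sd = 0.5)  # synthetic z-scores
hw &lt;- 250  # smoothing-window half-width (years)
targage &lt;- seq(0, 2000, by = 20)
fit &lt;- locfit(ztrans ~ lp(age, deg = 1, h = hw), data = dat)
curve_glob &lt;- predict(fit, newdata = targage)  # the "global" curve
nreps &lt;- 200
boot &lt;- matrix(NA, nreps, length(targage))
sites &lt;- unique(dat$site)
for (i in 1:nreps) {  # resample sites with replacement
  samp_sites &lt;- sample(sites, replace = TRUE)
  bootdat &lt;- do.call(rbind, lapply(samp_sites, function(s) dat[dat$site == s, ]))
  bfit &lt;- locfit(ztrans ~ lp(age, deg = 1, h = hw), data = bootdat)
  boot[i, ] &lt;- predict(bfit, newdata = targage)
}
ci &lt;- apply(boot, 2, quantile, probs = c(0.025, 0.975))  # 95% CI at each target age</code></pre>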
<p>The second script (<code>bin-boot.R</code>) creates a composite “curve” by directly binning each charcoal influx value into non-overlapping bins, and then calculating a simple average of the values in each bin. This approach can be used over the past few millennia, where the median sample spacing is generally less than 20 years, with an appropriate bin width, also around 20 years. This approach yields a composite curve that is more temporally variable than that produced by the local-regression approach.</p>
</div>
</div>
</div>
<script>
// add bootstrap table styles to pandoc tables
function bootstrapStylePandocTables() {
$('tr.header').parent('thead').parent('table').addClass('table table-condensed');
}
$(document).ready(function () {
bootstrapStylePandocTables();
});
</script>
<!-- dynamically load mathjax for compatibility with self-contained -->
<script>
(function () {
var script = document.createElement("script");
script.type = "text/javascript";
script.src = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML";
document.getElementsByTagName("head")[0].appendChild(script);
})();
</script>
</body>
</html>