X-Git-Url: https://git.octo.it/?p=rrdtool.git;a=blobdiff_plain;f=doc%2Frrdcreate.pod;h=b321de6329bd8509d373196c92c0a28f07853dbc;hp=2ecf8d1224f41788137342c968a765cac08b4366;hb=8f9c2c2b3c3f8a65e24dd9d8d612eafe48ccfb2e;hpb=ac29bccef8036fb1a7ba55458c86d8601fcc56f4 diff --git a/doc/rrdcreate.pod b/doc/rrdcreate.pod index 2ecf8d1..b321de6 100644 --- a/doc/rrdcreate.pod +++ b/doc/rrdcreate.pod @@ -4,9 +4,10 @@ rrdcreate - Set up a new Round Robin Database =head1 SYNOPSIS -B B I -S<[B<--start>|B<-b> I]> -S<[B<--step>|B<-s> I]> +B B I +S<[B<--start>|B<-b> I]> +S<[B<--step>|B<-s> I]> +S<[B<--no-overwrite>]> S<[BIB<:>IB<:>I]> S<[BIB<:>I]> @@ -16,15 +17,13 @@ The create function of RRDtool lets you set up new Round Robin Database (B) files. The file is created at its final, full size and filled with I<*UNKNOWN*> data. -=over 8 - -=item I +=head2 I The name of the B you want to create. B files should end with the extension F<.rrd>. However, B will accept any filename. -=item B<--start>|B<-b> I (default: now - 10s) +=head2 B<--start>|B<-b> I (default: now - 10s) Specifies the time in seconds since 1970-01-01 UTC when the first value should be added to the B. B will not accept @@ -33,12 +32,16 @@ any data timed before or at the time specified. See also AT-STYLE TIME SPECIFICATION section in the I documentation for other ways to specify time. -=item B<--step>|B<-s> I (default: 300 seconds) +=head2 B<--step>|B<-s> I (default: 300 seconds) Specifies the base interval in seconds with which data will be fed into the B. -=item BIB<:>IB<:>I +=head2 B<--no-overwrite> + +Do not clobber an existing file of the same name. + +=head2 BIB<:>IB<:>I A single B can accept input from several data sources (B), for example incoming and outgoing traffic on a specific communication @@ -63,9 +66,9 @@ In order to decide which data source type to use, review the definitions that follow. Also consult the section on "HOW TO MEASURE" for further insight. -=over 4 +=over -=item B +=item B is for things like temperatures or number of people in a room or the value of a RedHat share. @@ -89,9 +92,7 @@ room. Internally, derive works exactly like COUNTER but without overflow checks. So if your counter does not reset at 32 or 64 bit you might want to use DERIVE and combine it with a MIN value of 0. -=over - -=item NOTE on COUNTER vs DERIVE +B by Don Baarda Edon.baarda@baesystems.comE @@ -110,9 +111,7 @@ probably preferable. If you are using a 64bit counter, just about any max setting will eliminate the possibility of mistaking a reset for a counter wrap. -=back - -=item B +=item B is for counters which get reset upon reading. This is used for fast counters which tend to overflow. So instead of reading them normally you reset them @@ -134,7 +133,7 @@ to as "virtual" or "computed" columns. =back I defines the maximum number of seconds that may pass -between two updates of this data source before the value of the +between two updates of this data source before the value of the data source is assumed to be I<*UNKNOWN*>. I and I define the expected range values for data supplied by a @@ -159,11 +158,10 @@ names of data source listed previously in the create command. This is similar to the restriction that Bs must refer only to Bs and Bs previously defined in the same graph command. -=item BIB<:>I - +=head2 BIB<:>I The purpose of an B is to store data in the round robin archives -(B). An archive consists of a number of data values or statistics for +(B). 
An archive consists of a number of data values or statistics for each of the defined data-sources (B) and is defined with an B line. When data is entered into an B, it is first fit into time slots @@ -173,21 +171,50 @@ data point>. The data is also processed with the consolidation function (I) of the archive. There are several consolidation functions that consolidate primary data points via an aggregate function: B, -B, B, B. The format of B line for these +B, B, B. + +=over + +=item AVERAGE + +the average of the data points is stored. + +=item MIN + +the smallest of the data points is stored. + +=item MAX + +the largest of the data points is stored. + +=item LAST + +the last data points is used. + +=back + +Note that data aggregation inevitably leads to loss of precision and +information. The trick is to pick the aggregate function such that the +I properties of your data is kept across the aggregation +process. + + +The format of B line for these consolidation functions is: BIB<:>IB<:>IB<:>I I The xfiles factor defines what part of a consolidation interval may be made up from I<*UNKNOWN*> data while the consolidated value is still -regarded as known. +regarded as known. It is given as the ratio of allowed I<*UNKNOWN*> PDPs +to the number of PDPs in the interval. Thus, it ranges from 0 to 1 (exclusive). + I defines how many of these I are used to build a I which then goes into the archive. I defines how many generations of data values are kept in an B. - -=back +Obviously, this has to be greater than zero. =head1 Aberrant Behavior Detection with Holt-Winters Forecasting @@ -200,15 +227,19 @@ flagging aberrant behavior in the data source time series: =item * -BIB<:>IB<:>IB<:>IB<:>IB<:>I +BIB<:>IB<:>IB<:>IB<:>I[B<:>I] =item * -BIB<:>IB<:>IB<:>I +BIB<:>IB<:>IB<:>IB<:>I[B<:>I] =item * -BIB<:>IB<:>IB<:>I +BIB<:>IB<:>IB<:>I[B<:smoothing-window=>I] + +=item * + +BIB<:>IB<:>IB<:>I[B<:smoothing-window=>I] =item * @@ -223,19 +254,32 @@ BIB<:>IB<:>IB<:>IB<:>I These B differ from the true consolidation functions in several ways. First, each of the Bs is updated once for every primary data point. Second, these B are interdependent. To generate real-time confidence -bounds, a matched set of HWPREDICT, SEASONAL, DEVSEASONAL, and -DEVPREDICT must exist. Generating smoothed values of the primary data points -requires both a HWPREDICT B and SEASONAL B. Aberrant behavior -detection requires FAILURES, HWPREDICT, DEVSEASONAL, and SEASONAL. - -The actual predicted, or smoothed, values are stored in the HWPREDICT -B. The predicted deviations are stored in DEVPREDICT (think a standard -deviation which can be scaled to yield a confidence band). The FAILURES -B stores binary indicators. A 1 marks the indexed observation as -failure; that is, the number of confidence bounds violations in the -preceding window of observations met or exceeded a specified threshold. An -example of using these B to graph confidence bounds and failures -appears in L. +bounds, a matched set of SEASONAL, DEVSEASONAL, DEVPREDICT, and either +HWPREDICT or MHWPREDICT must exist. Generating smoothed values of the primary +data points requires a SEASONAL B and either an HWPREDICT or MHWPREDICT +B. Aberrant behavior detection requires FAILURES, DEVSEASONAL, SEASONAL, +and either HWPREDICT or MHWPREDICT. + +The predicted, or smoothed, values are stored in the HWPREDICT or MHWPREDICT +B. HWPREDICT and MHWPREDICT are actually two variations on the +Holt-Winters method. They are interchangeable. 
Both attempt to decompose data +into three components: a baseline, a trend, and a seasonal coefficient. +HWPREDICT adds its seasonal coefficient to the baseline to form a prediction, whereas +MHWPREDICT multiplies its seasonal coefficient by the baseline to form a +prediction. The difference is noticeable when the baseline changes +significantly in the course of a season; HWPREDICT will predict the seasonality +to stay constant as the baseline changes, but MHWPREDICT will predict the +seasonality to grow or shrink in proportion to the baseline. The proper choice +of method depends on the thing being modeled. For simplicity, the rest of this +discussion will refer to HWPREDICT, but MHWPREDICT may be substituted in its +place. + +The predicted deviations are stored in DEVPREDICT (think a standard deviation +which can be scaled to yield a confidence band). The FAILURES B stores +binary indicators. A 1 marks the indexed observation as failure; that is, the +number of confidence bounds violations in the preceding window of observations +met or exceeded a specified threshold. An example of using these B to graph +confidence bounds and failures appears in L. The SEASONAL and DEVSEASONAL B store the seasonal coefficients for the Holt-Winters forecasting algorithm and the seasonal deviations, respectively. @@ -295,6 +339,13 @@ If SEASONAL and DEVSEASONAL B are created explicitly, I need not be the same for both. Note that I can also be changed via the B I command. +I specifies the fraction of a season that should be +averaged around each point. By default, the value of I is +0.05, which means each value in SEASONAL and DEVSEASONAL will be occasionally +replaced by averaging it with its (I*0.05) nearest neighbors. +Setting I to zero will disable the running-average smoother +altogether. + I provides the links between related B. If HWPREDICT is specified alone and the other B are created implicitly, then there is no need to worry about this argument. If B are created @@ -311,11 +362,11 @@ requiring the I argument is listed here: HWPREDICT I is the index of the SEASONAL B. -=item * +=item * SEASONAL I is the index of the HWPREDICT B. -=item * +=item * DEVPREDICT I is the index of the DEVSEASONAL B. @@ -323,7 +374,7 @@ DEVPREDICT I is the index of the DEVSEASONAL B. DEVSEASONAL I is the index of the HWPREDICT B. -=item * +=item * FAILURES I is the index of the DEVSEASONAL B. @@ -345,28 +396,24 @@ Here is an explanation by Don Baarda on the inner workings of RRDtool. It may help you to sort out why all this *UNKNOWN* data is popping up in your databases: -RRDtool gets fed samples at arbitrary times. From these it builds Primary -Data Points (PDPs) at exact times on every "step" interval. The PDPs are -then accumulated into RRAs. +RRDtool gets fed samples/updates at arbitrary times. From these it builds Primary +Data Points (PDPs) on every "step" interval. The PDPs are +then accumulated into the RRAs. The "heartbeat" defines the maximum acceptable interval between -samples. If the interval between samples is less than "heartbeat", +samples/updates. If the interval between samples is less than "heartbeat", then an average rate is calculated and applied for that interval. If the interval between samples is longer than "heartbeat", then that entire interval is considered "unknown". Note that there are other things that can make a sample interval "unknown", such as the rate -exceeding limits, or even an "unknown" input sample. +exceeding limits, or a sample that was explicitly marked as unknown. 
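+To make the interplay of "step", "heartbeat" and *UNKNOWN* data a little
+more concrete, here is a small, purely illustrative sketch. The file name,
+DS name, timestamps and values below are invented for this illustration;
+they do not refer to anything discussed above. The database uses a "step"
+of 300 seconds and a "heartbeat" of 600 seconds:
+
+ # file name, DS name, timestamps and values are illustrative only
+ rrdtool create hb-demo.rrd --start 920804400 --step 300 \
+         DS:speed:GAUGE:600:U:U \
+         RRA:AVERAGE:0.5:1:24
+
+ rrdtool update hb-demo.rrd 920804700:10
+ rrdtool update hb-demo.rrd 920804940:12
+ rrdtool update hb-demo.rrd 920806800:11
+
+The first two updates arrive well within the "heartbeat", so the rates they
+define are known. The 1860-second gap between the second and the third update
+exceeds the "heartbeat", so that whole span counts as unknown time, and every
+PDP lying entirely inside the gap is stored as *UNKNOWN*, even though the
+final update itself carries a valid value.
+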
The known rates during a PDP's "step" interval are used to calculate
-an average rate for that PDP. Also, if the total "unknown" time during
-the "step" interval exceeds the "heartbeat", the entire PDP is marked
+an average rate for that PDP. If the total "unknown" time accounts for
+more than B<half> the "step", the entire PDP is marked
 as "unknown". This means that a mixture of known and "unknown" sample
-times in a single PDP "step" may or may not add up to enough "unknown"
-time to exceed "heartbeat" and hence mark the whole PDP "unknown". So
-"heartbeat" is not only the maximum acceptable interval between
-samples, but also the maximum acceptable amount of "unknown" time per
-PDP (obviously this is only significant if you have "heartbeat" less
-than "step").
+times in a single PDP "step" may or may not add up to enough "known"
+time to warrant a known PDP.

The "heartbeat" can be short (unusual) or long (typical) relative to
the "step" interval between PDPs. A short "heartbeat" means you
@@ -378,6 +425,44 @@ sample. An extreme example of this might be a "step" of 5 minutes and a
 result in all the PDPs for that entire day period being set to the
 same average rate. I<-- Don Baarda E<lt>don.baarda@baesystems.comE<gt>>

+       time|
+       axis|
+ begin__|00|
+        |01|
+       u|02|----* sample1, restart "hb"-timer
+       u|03|   /
+       u|04|  /
+       u|05| /
+       u|06|/ "hbt" expired
+       u|07|
+        |08|----* sample2, restart "hb"
+        |09|   /
+        |10|  /
+       u|11|----* sample3, restart "hb"
+       u|12|   /
+       u|13|  /
+ step1_u|14| /
+       u|15|/ "hbt" expired
+       u|16|
+        |17|----* sample4, restart "hb", create "pdp" for step1 =
+        |18|   /  = unknown due to 10 "u" labelled secs > 0.5 * step
+        |19|  /
+        |20| /
+        |21|----* sample5, restart "hb"
+        |22|   /
+        |23|  /
+        |24|----* sample6, restart "hb"
+        |25|   /
+        |26|  /
+        |27|----* sample7, restart "hb"
+ step2__|28|   /
+        |29|  /
+        |30|----* sample8, restart "hb", create "pdp" for step2, create "cdp"
+        |31|   /
+        |32|  /
+
+graphics by I.
+

=head1 HOW TO MEASURE

@@ -397,7 +482,7 @@ together with the time.

=item Mail Messages

Assume you have a method to count the number of messages transported by
-your mailserver in a certain amount of time, giving you data like '5
+your mail server in a certain amount of time, giving you data like '5
 messages in the last 65 seconds'. If you look at the count of 5 like an
B<ABSOLUTE> data type, you can simply update the RRD with the number 5 and the
end time of your monitoring period. RRDtool will then record the number of
@@ -447,10 +532,10 @@ average temperature, respectively.

=head1 EXAMPLE 2

- rrdtool create monitor.rrd --step 300 \
- DS:ifOutOctets:COUNTER:1800:0:4294967295 \
+ rrdtool create monitor.rrd --step 300 \
+ DS:ifOutOctets:COUNTER:1800:0:4294967295 \
  RRA:AVERAGE:0.5:1:2016 \
- RRA:HWPREDICT:1440:0.1:0.0035:288
+ RRA:HWPREDICT:1440:0.1:0.0035:288

This example monitors a router interface. The first B tracks the
traffic flow in octets; the second B generates the specialized
@@ -473,27 +558,27 @@ the FAILURES B.

The same RRD file and B are created with the following command,
which explicitly creates all specialized function B.

- rrdtool create monitor.rrd --step 300 \
- DS:ifOutOctets:COUNTER:1800:0:4294967295 \
- RRA:AVERAGE:0.5:1:2016 \
- RRA:HWPREDICT:1440:0.1:0.0035:288:3 \
- RRA:SEASONAL:288:0.1:2 \
- RRA:DEVPREDICT:1440:5 \
- RRA:DEVSEASONAL:288:0.1:2 \
- RRA:FAILURES:288:7:9:5
+ rrdtool create monitor.rrd --step 300 \
+ DS:ifOutOctets:COUNTER:1800:0:4294967295 \
+ RRA:AVERAGE:0.5:1:2016 \
+ RRA:HWPREDICT:1440:0.1:0.0035:288:3 \
+ RRA:SEASONAL:288:0.1:2 \
+ RRA:DEVPREDICT:1440:5 \
+ RRA:DEVSEASONAL:288:0.1:2 \
+ RRA:FAILURES:288:7:9:5

Of course, explicit creation need not replicate the implicit create; a number
of arguments could be changed.

=head1 EXAMPLE 3

- rrdtool create proxy.rrd --step 300 \
- DS:Total:DERIVE:1800:0:U \
- DS:Duration:DERIVE:1800:0:U \
- DS:AvgReqDur:COMPUTE:Duration,Requests,0,EQ,1,Requests,IF,/ \
- RRA:AVERAGE:0.5:1:2016
+ rrdtool create proxy.rrd --step 300 \
+ DS:Total:DERIVE:1800:0:U \
+ DS:Duration:DERIVE:1800:0:U \
+ DS:AvgReqDur:COMPUTE:Duration,Total,0,EQ,1,Total,IF,/ \
+ RRA:AVERAGE:0.5:1:2016

-This example is monitoring the average request duration during each 300 sec
-interval for requests processed by a web proxy during the interval. In
+This example monitors the average duration of the requests processed by a
+web proxy during each 300-second interval. In
 this case, the proxy exposes two counters, the number of requests
 processed since boot and the total cumulative duration of all processed
@@ -510,4 +595,4 @@ RPN expression handles the divide by zero case.

=head1 AUTHOR

-Tobias Oetiker E<lt>oetiker@ee.ethz.chE<gt>
+Tobias Oetiker E<lt>tobi@oetiker.chE<gt>