X-Git-Url: https://git.octo.it/?p=rrdtool.git;a=blobdiff_plain;f=doc%2Frrdcreate.pod;h=38585422c258f0baa3cd2d701b1a06748dda0073;hp=27ef702afd5097ca94e08e13c2b49cb01c3d0559;hb=825f213df6f751b4e48fe7dcd64feeb27b10cc59;hpb=342b22c3e74a10d7049285c2cea7383676bcfc95 diff --git a/doc/rrdcreate.pod b/doc/rrdcreate.pod index 27ef702..3858542 100644 --- a/doc/rrdcreate.pod +++ b/doc/rrdcreate.pod @@ -7,6 +7,8 @@ rrdcreate - Set up a new Round Robin Database B B I S<[B<--start>|B<-b> I]> S<[B<--step>|B<-s> I]> +S<[B<--no-overwrite>]> +S<[B<--daemon> I
]> S<[BIB<:>IB<:>I]> S<[BIB<:>I]> @@ -16,15 +18,13 @@ The create function of RRDtool lets you set up new Round Robin Database (B) files. The file is created at its final, full size and filled with I<*UNKNOWN*> data. -=over 8 - -=item I +=head2 I The name of the B you want to create. B files should end with the extension F<.rrd>. However, B will accept any filename. -=item B<--start>|B<-b> I (default: now - 10s) +=head2 B<--start>|B<-b> I (default: now - 10s) Specifies the time in seconds since 1970-01-01 UTC when the first value should be added to the B. B will not accept @@ -33,12 +33,23 @@ any data timed before or at the time specified. See also AT-STYLE TIME SPECIFICATION section in the I documentation for other ways to specify time. -=item B<--step>|B<-s> I (default: 300 seconds) +=head2 B<--step>|B<-s> I (default: 300 seconds) Specifies the base interval in seconds with which data will be fed into the B. -=item BIB<:>IB<:>I +=head2 B<--no-overwrite> + +Do not clobber an existing file of the same name. + +=head2 B<--daemon> I
+ +Address of the L daemon. For a list of accepted formats, see +the B<-l> option in the L manual. + + rrdtool create --daemon unix:/var/run/rrdcached.sock /var/lib/rrd/foo.rrd I + +=head2 BIB<:>IB<:>I A single B can accept input from several data sources (B), for example incoming and outgoing traffic on a specific communication @@ -63,7 +74,7 @@ In order to decide which data source type to use, review the definitions that follow. Also consult the section on "HOW TO MEASURE" for further insight. -=over 4 +=over =item B @@ -89,9 +100,7 @@ room. Internally, derive works exactly like COUNTER but without overflow checks. So if your counter does not reset at 32 or 64 bit you might want to use DERIVE and combine it with a MIN value of 0. -=over - -=item NOTE on COUNTER vs DERIVE +B by Don Baarda Edon.baarda@baesystems.comE @@ -110,8 +119,6 @@ probably preferable. If you are using a 64bit counter, just about any max setting will eliminate the possibility of mistaking a reset for a counter wrap. -=back - =item B is for counters which get reset upon reading. This is used for fast counters @@ -138,7 +145,7 @@ between two updates of this data source before the value of the data source is assumed to be I<*UNKNOWN*>. I and I define the expected range values for data supplied by a -data source. If I and/or I any value outside the defined range +data source. If I and/or I are specified, any value outside the defined range will be regarded as I<*UNKNOWN*>. If you do not know or care about min and max, set them to U for unknown. Note that min and max always refer to the processed values of the DS. For a traffic-B type DS this would be @@ -159,8 +166,7 @@ names of data source listed previously in the create command. This is similar to the restriction that Bs must refer only to Bs and Bs previously defined in the same graph command. -=item BIB<:>I - +=head2 BIB<:>I The purpose of an B is to store data in the round robin archives (B). 
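The behaviour of the four basic data source types can be sketched in a few lines of Python. This is an illustration only, not librrd's actual code; C<to_rate> is a hypothetical helper, and the wrap handling is simplified:

```python
# Sketch of how each data source type (DST) derives a rate from two
# consecutive updates (t_prev, v_prev) and (t_now, v_now).
def to_rate(dst, t_prev, v_prev, t_now, v_now):
    dt = t_now - t_prev
    if dst == "GAUGE":
        # The value already is a rate; it is stored as-is.
        return v_now
    if dst == "COUNTER":
        delta = v_now - v_prev
        if delta < 0:  # assume a 32- or 64-bit counter wrap
            delta += 2**32 if v_prev < 2**32 else 2**64
        return delta / dt
    if dst == "DERIVE":
        # Like COUNTER but without wrap correction; may go negative.
        return (v_now - v_prev) / dt
    if dst == "ABSOLUTE":
        # The counter resets on every read, so the value itself is the delta.
        return v_now / dt
    raise ValueError(dst)

# A 32-bit counter that wrapped between updates 300 seconds apart:
print(to_rate("COUNTER", 0, 2**32 - 50, 300, 250))  # 1.0
```

Note how the wrapped COUNTER still yields a sensible rate, while DERIVE on the same input would produce a large negative value unless clamped by a MIN of 0.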
An archive consists of a number of data values or statistics for @@ -173,7 +179,35 @@ data point>. The data is also processed with the consolidation function (I) of the archive. There are several consolidation functions that consolidate primary data points via an aggregate function: B, -B, B, B. The format of B line for these +B, B, B. + +=over + +=item AVERAGE + +the average of the data points is stored. + +=item MIN + +the smallest of the data points is stored. + +=item MAX + +the largest of the data points is stored. + +=item LAST + +the last data point is used. + +=back + +Note that data aggregation inevitably leads to loss of precision and +information. The trick is to pick the aggregate function such that the +I properties of your data are kept across the aggregation +process. + + +The format of B line for these consolidation functions is: BIB<:>IB<:>IB<:>I @@ -188,8 +222,7 @@ I defines how many of these I are used to build a I which then goes into the archive. I defines how many generations of data values are kept in an B. - -=back +Obviously, this has to be greater than zero. =head1 Aberrant Behavior Detection with Holt-Winters Forecasting @@ -206,11 +239,15 @@ BIB<:>IB<:>IB<:>IB<:>I[B<:> =item * -BIB<:>IB<:>IB<:>I +BIB<:>IB<:>IB<:>IB<:>I[B<:>I] =item * -BIB<:>IB<:>IB<:>I +BIB<:>IB<:>IB<:>I[B<:smoothing-window=>I] + +=item * + +BIB<:>IB<:>IB<:>I[B<:smoothing-window=>I] =item * @@ -225,19 +262,32 @@ BIB<:>IB<:>IB<:>IB<:>I These B differ from the true consolidation functions in several ways. First, each of the Bs is updated once for every primary data point. Second, these B are interdependent. To generate real-time confidence -bounds, a matched set of HWPREDICT, SEASONAL, DEVSEASONAL, and -DEVPREDICT must exist. Generating smoothed values of the primary data points -requires both a HWPREDICT B and SEASONAL B. Aberrant behavior -detection requires FAILURES, HWPREDICT, DEVSEASONAL, and SEASONAL. 
- -The actual predicted, or smoothed, values are stored in the HWPREDICT -B. The predicted deviations are stored in DEVPREDICT (think a standard -deviation which can be scaled to yield a confidence band). The FAILURES -B stores binary indicators. A 1 marks the indexed observation as -failure; that is, the number of confidence bounds violations in the -preceding window of observations met or exceeded a specified threshold. An -example of using these B to graph confidence bounds and failures -appears in L. +bounds, a matched set of SEASONAL, DEVSEASONAL, DEVPREDICT, and either +HWPREDICT or MHWPREDICT must exist. Generating smoothed values of the primary +data points requires a SEASONAL B and either an HWPREDICT or MHWPREDICT +B. Aberrant behavior detection requires FAILURES, DEVSEASONAL, SEASONAL, +and either HWPREDICT or MHWPREDICT. + +The predicted, or smoothed, values are stored in the HWPREDICT or MHWPREDICT +B. HWPREDICT and MHWPREDICT are actually two variations on the +Holt-Winters method. They are interchangeable. Both attempt to decompose data +into three components: a baseline, a trend, and a seasonal coefficient. +HWPREDICT adds its seasonal coefficient to the baseline to form a prediction, whereas +MHWPREDICT multiplies its seasonal coefficient by the baseline to form a +prediction. The difference is noticeable when the baseline changes +significantly in the course of a season; HWPREDICT will predict the seasonality +to stay constant as the baseline changes, but MHWPREDICT will predict the +seasonality to grow or shrink in proportion to the baseline. The proper choice +of method depends on the thing being modeled. For simplicity, the rest of this +discussion will refer to HWPREDICT, but MHWPREDICT may be substituted in its +place. + +The predicted deviations are stored in DEVPREDICT (think a standard deviation +which can be scaled to yield a confidence band). The FAILURES B stores +binary indicators. 
A 1 marks the indexed observation as failure; that is, the +number of confidence bounds violations in the preceding window of observations +met or exceeded a specified threshold. An example of using these B to graph +confidence bounds and failures appears in L. The SEASONAL and DEVSEASONAL B store the seasonal coefficients for the Holt-Winters forecasting algorithm and the seasonal deviations, respectively. @@ -297,6 +347,13 @@ If SEASONAL and DEVSEASONAL B are created explicitly, I need not be the same for both. Note that I can also be changed via the B I command. +I specifies the fraction of a season that should be +averaged around each point. By default, the value of I is +0.05, which means each value in SEASONAL and DEVSEASONAL will be occasionally +replaced by averaging it with its (I*0.05) nearest neighbors. +Setting I to zero will disable the running-average smoother +altogether. + I provides the links between related B. If HWPREDICT is specified alone and the other B are created implicitly, then there is no need to worry about this argument. If B are created @@ -347,28 +404,24 @@ Here is an explanation by Don Baarda on the inner workings of RRDtool. It may help you to sort out why all this *UNKNOWN* data is popping up in your databases: -RRDtool gets fed samples at arbitrary times. From these it builds Primary -Data Points (PDPs) at exact times on every "step" interval. The PDPs are -then accumulated into RRAs. +RRDtool gets fed samples/updates at arbitrary times. From these it builds Primary +Data Points (PDPs) on every "step" interval. The PDPs are +then accumulated into the RRAs. The "heartbeat" defines the maximum acceptable interval between -samples. If the interval between samples is less than "heartbeat", +samples/updates. If the interval between samples is less than "heartbeat", then an average rate is calculated and applied for that interval. 
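The heartbeat rule just described can be sketched as follows. This is an illustrative simplification, not librrd's actual code; C<classify_intervals> is a hypothetical helper, and GAUGE semantics are assumed so that each known interval's rate is simply the new value:

```python
# Sketch of the heartbeat rule: an update interval no longer than
# "heartbeat" yields a rate, a longer one is entirely unknown.
def classify_intervals(update_times, values, heartbeat):
    """Return (interval_length, rate_or_None) per consecutive GAUGE update."""
    out = []
    points = list(zip(update_times, values))
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        dt = t1 - t0
        if dt > heartbeat:
            out.append((dt, None))  # whole interval is *UNKNOWN*
        else:
            out.append((dt, v1))    # GAUGE: the new value is the rate
    return out

# With a heartbeat of 600 seconds, a 900-second gap becomes unknown:
print(classify_intervals([0, 300, 1200], [1.0, 2.0, 3.0], 600))
# [(300, 2.0), (900, None)]
```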
If the interval between samples is longer than "heartbeat", then that entire interval is considered "unknown". Note that there are other things that can make a sample interval "unknown", such as the rate -exceeding limits, or even an "unknown" input sample. +exceeding limits, or a sample that was explicitly marked as unknown. The known rates during a PDP's "step" interval are used to calculate -an average rate for that PDP. Also, if the total "unknown" time during -the "step" interval exceeds the "heartbeat", the entire PDP is marked +an average rate for that PDP. If the total "unknown" time accounts for +more than B the "step", the entire PDP is marked as "unknown". This means that a mixture of known and "unknown" sample -times in a single PDP "step" may or may not add up to enough "unknown" -time to exceed "heartbeat" and hence mark the whole PDP "unknown". So -"heartbeat" is not only the maximum acceptable interval between -samples, but also the maximum acceptable amount of "unknown" time per -PDP (obviously this is only significant if you have "heartbeat" less -than "step"). +times in a single PDP "step" may or may not add up to enough "known" +time to warrant a known PDP. The "heartbeat" can be short (unusual) or long (typical) relative to the "step" interval between PDPs. A short "heartbeat" means you @@ -400,7 +453,7 @@ same average rate. I<-- Don Baarda Edon.baarda@baesystems.comE> u|15|/ "swt" expired u|16| |17|----* sample4, restart "hb", create "pdp" for step1 = - |18| / = unknown due to 10 "u" labled secs > "hb" + |18| / = unknown due to 10 "u" labeled secs > 0.5 * step |19| / |20| / |21|----* sample5, restart "hb" @@ -437,7 +490,7 @@ together with the time. =item Mail Messages Assume you have a method to count the number of messages transported by -your mailserver in a certain amount of time, giving you data like '5 +your mail server in a certain amount of time, giving you data like '5 messages in the last 65 seconds'. 
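Treated as an ABSOLUTE-style measurement, such a count converts to a rate simply by dividing it by the elapsed time. A minimal sketch, using the numbers from the example above:

```python
# '5 messages in the last 65 seconds' as an ABSOLUTE-style measurement:
# the counter resets on every read, so the count itself is the delta.
messages = 5
elapsed_seconds = 65
rate = messages / elapsed_seconds  # messages per second
print(round(rate, 4))  # 0.0769
```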
If you look at the count of 5 like an B data type you can simply update the RRD with the number 5 and the end time of your monitoring period. RRDtool will then record the number of