From 7f4b880d9d5c7a48e273f52e8dfa54fae39006e3 Mon Sep 17 00:00:00 2001 From: oetiker Date: Tue, 26 Apr 2005 22:04:25 +0000 Subject: [PATCH] more fixes by fritz git-svn-id: svn://svn.oetiker.ch/rrdtool/branches/1.2/program@449 a5681a0c-68f1-0310-ab6d-d61299d08faa --- doc/cdeftutorial.pod | 133 ++++++++++++------------ doc/rrd-beginners.pod | 276 ++++++++++++++++++++++++++------------------------ doc/rrdcreate.pod | 195 ++++++++++++++++++----------------- doc/rrdgraph.pod | 147 ++++++++++++++------------- 4 files changed, 389 insertions(+), 362 deletions(-) diff --git a/doc/cdeftutorial.pod b/doc/cdeftutorial.pod index 49f8001..25ded60 100644 --- a/doc/cdeftutorial.pod +++ b/doc/cdeftutorial.pod @@ -5,13 +5,14 @@ cdeftutorial - Alex van den Bogaerdt's CDEF tutorial =head1 DESCRIPTION If you provide a question, I will try to provide an answer in the next -release of this tutorial. No feedback equals no changes! Additions to this document are also welcome. --- Alex van den Bogaerdt Ealex@ergens.op.het.netE +release of this tutorial. No feedback equals no changes! Additions to +this document are also welcome. -- Alex van den Bogaerdt +Ealex@ergens.op.het.netE -=head2 Why this tutorial ? +=head2 Why this tutorial? One of the powerful parts of RRDtool is its ability to do all sorts -of calculations on the data retrieved from it's databases. However +of calculations on the data retrieved from its databases. However, RRDtool's many options and syntax make it difficult for the average user to understand. The manuals are good at explaining what these options do; however they do not (and should not) explain in detail @@ -20,14 +21,14 @@ simple document in simple language you should read this tutorial. If you are happy with the official documentation, you may find this document too simple or even boring. If you do choose to read this tutorial, I also expect you to have read and fully understand my -other tutorial. +other tutorial. 
=head2 More reading If you have difficulties with the way I try to explain it please read Steve Rader's L. It may help you understand how this all works. -=head1 What are CDEFs ? +=head1 What are CDEFs? When retrieving data from an RRD, you are using a "DEF" to work with that data. Think of it as a variable that changes over time (where @@ -57,12 +58,12 @@ instead of the original: CDEF:inbits=inbytes,8,* -It tells to multiply inbytes by eight to get inbits. I'll explain later -how this works. In the graphing or printing functions, you can now use -inbits where you would use inbytes otherwise. +This tells RRDtool to multiply inbytes by eight to get inbits. I'll +explain later how this works. In the graphing or printing functions, +you can now use inbits where you would use inbytes otherwise. -Note that variable in the CDEF (inbits) must not be the same as the -variable (inbytes) in the DEF! +Note that the variable name used in the CDEF (inbits) must not be the +same as the variable named in the DEF (inbytes)! =head1 RPN-expressions @@ -138,13 +139,13 @@ Processing the stack (step 5) will retrieve one value from the stack (from the right at step 4). This is the operation multiply and this takes two values off the stack as input. The result is put back on the stack (the value 80 in this case). For multiplication the order doesn't -matter but for other operations like subtraction and division it does. +matter, but for other operations like subtraction and division it does. Generally speaking you have the following order: y = A - B --> y=minus(A,B) --> CDEF:y=A,B,- This is not very intuitive (at least most people don't think so). For -the function f(A,B) you reverse the position of "f" but you do not +the function f(A,B) you reverse the position of "f", but you do not reverse the order of the variables. 
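The stack processing described above can be sketched with a small toy evaluator. This is purely illustrative Python, not RRDtool's parser; it supports only literal numbers, named variables, and the four basic operators, but it is enough to see that in C the operands keep their written order:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def rpn_eval(expr, **names):
    """Toy evaluator for CDEF-style RPN (illustration only)."""
    stack = []
    for token in expr.split(","):
        if token in OPS:
            b = stack.pop()              # topmost value is the SECOND operand
            a = stack.pop()
            stack.append(OPS[token](a, b))
        elif token in names:
            stack.append(names[token])   # a DEF/CDEF variable
        else:
            stack.append(float(token))   # a literal number
    return stack.pop()

print(rpn_eval("inbytes,8,*", inbytes=10))  # the inbits example -> 80.0
print(rpn_eval("A,B,-", A=10, B=4))         # computes A - B, not B - A
```

Tracing "A,B,-" by hand gives the same result as the evaluator: A and B are pushed, then "-" pops B first and A second, so the subtraction is A minus B.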
=head1 Converting your wishes to RPN @@ -228,16 +229,16 @@ evaluated differently: and process it: S (where S == a+R) As you can see the RPN expression C will evaluate in -C<((((d+e)+c)+b)+a)> and it has the same outcome as C -According to Steve Rader this is called the commutative law of addition +C<((((d+e)+c)+b)+a)> and it has the same outcome as C. +This is called the commutative law of addition, but you may forget this right away, as long as you remember what it -represents. +means. Now look at an expression that contains a multiplication: First in normal math: C. In this case you can't choose the order yourself, you have to start with the multiplication -and then add a to it. You may alter the position of b and c, you may +and then add a to it. You may alter the position of b and c, you must not alter the position of a and b. You have to take this in consideration when converting this expression @@ -253,7 +254,7 @@ similar to one of the expressions in the previous paragraph, only the multiplication and the addition changed places. When you have problems with RPN or when RRDtool is complaining, it's -usually a Good Thing to write down the stack on a piece of paper +usually a good thing to write down the stack on a piece of paper and see what happens. Have the manual ready and pretend to be RRDtool. Just do all the math by hand to see what happens, I'm sure this will solve most, if not all, problems you encounter. @@ -264,7 +265,7 @@ solve most, if not all, problems you encounter. Sometimes collecting your data will fail. This can be very common, especially when querying over busy links. RRDtool can be configured -to allow for one (or even more) unknown value and calculate the missing +to allow for one (or even more) unknown value(s) and calculate the missing update. You can, for instance, query your device every minute. This is creating one so called PDP or primary data point per minute. 
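How a consolidated data point (CDP) is assembled from primary data points (PDPs), including the unknown-value bookkeeping, can be sketched as follows. This is an illustration in Python, not RRDtool's actual code; C stands for an unknown PDP and C is the fraction of unknown PDPs that is still tolerated:

```python
def consolidate(pdps, xff=0.5):
    """Sketch of CDP consolidation with AVERAGE as the CF.

    If more than xff of the PDPs are unknown (None), the whole
    CDP becomes unknown; otherwise it is the average of the
    known PDPs.  Not RRDtool's implementation, just the idea.
    """
    unknown = sum(1 for v in pdps if v is None)
    if unknown > xff * len(pdps):
        return None
    known = [v for v in pdps if v is not None]
    return sum(known) / len(known)

# five one-minute PDPs -> one five-minute CDP
print(consolidate([1, 1, None, None, 1]))        # 40% unknown -> 1.0
print(consolidate([1, None, None, None, None]))  # 80% unknown -> None
```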
If you defined your RRD to contain an RRA that stores 5-minute values, you need @@ -275,7 +276,7 @@ These PDPs can become unknown in two cases: =item 1. -The updates are too far apart. This is tuned using the "heartbeat" setting +The updates are too far apart. This is tuned using the "heartbeat" setting. =item 2. @@ -294,12 +295,12 @@ Suppose the counter increments with one per second and you retrieve it every minute: counter value resulting rate - 10000 - 10060 1; (10060-10000)/60 == 1 - 10120 1; (10120-10060)/60 == 1 - unknown unknown; you don't know the last value - 10240 unknown; you don't know the previous value - 10300 1; (10300-10240)/60 == 1 + 10'000 + 10'060 1; (10'060-10'000)/60 == 1 + 10'120 1; (10'120-10'060)/60 == 1 + unknown unknown; you don't know the last value + 10'240 unknown; you don't know the previous value + 10'300 1; (10'300-10'240)/60 == 1 If the CDP was to be calculated from the last five updates, it would get two unknown PDPs and three known PDPs. If xff would have been set to 0.5 @@ -335,14 +336,14 @@ data into zero. The counters of the device were unknown (after all, it wasn't installed yet!) but you know that the data rate through the device had to be zero (because of the same reason: it was not installed). -There are some examples further on that make this change. +There are some examples below that make this change. =head2 Infinity -Infinite data is another form of a special number. It cannot be graphed -because by definition you would never reach the infinite value. You could -think of positive and negative infinity (I'm not sure if mathematicians -will agree) depending on the position relative to zero. +Infinite data is another form of a special number. It cannot be +graphed because by definition you would never reach the infinite +value. You can think of positive and negative infinity depending on +the position relative to zero. RRDtool is capable of representing (-not- graphing!) 
infinity by stopping at its current maximum (for positive infinity) or minimum (for negative @@ -384,14 +385,14 @@ the other database. =item * -Alternately you could use CDEF and alter unknown data to zero. +Alternatively, you could use CDEF and alter unknown data to zero. =back Both methods have their pros and cons. The first method is troublesome and if you want to do that you have to figure it out yourself. It is not possible to create a database filled with zeros, you have to put them in -on purpose. Implementing the second method is described next: +manually. Implementing the second method is described next: What we want is: "if the value is unknown, replace it with zero". This could be written in pseudo-code as: if (value is unknown) then (zero) @@ -443,7 +444,7 @@ to remove this rule so that unknown data is properly displayed. =head2 Example: better handling of unknown data, by using time -Above example has one drawback. If you do log unknown data in +The above example has one drawback. If you do log unknown data in your database after installing your new equipment, it will also be translated into zero and therefore you won't see that there was a problem. This is not good and what you really want to do is: @@ -452,28 +453,28 @@ problem. This is not good and what you really want to do is: =item * -If there is unknown data, look at the time that this sample was taken +If there is unknown data, look at the time that this sample was taken. =item * -If the unknown value is before time xxx, make it zero +If the unknown value is before time xxx, make it zero. =item * -If it is after time xxx, leave it as unknown data +If it is after time xxx, leave it as unknown data. =back This is doable: you can compare the time that the sample was taken to some known time. Assuming you started to monitor your device on -Friday September 17, 00:35:57 MET DST. Translate this time in seconds -since 1970-01-01 and it becomes 937521357. 
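The same conversion can be checked without GNU date, for example in Python. This assumes MET DST is UTC+2; stating the offset explicitly keeps the result independent of your local time zone:

```python
from datetime import datetime, timezone, timedelta

# MET DST (central European summer time) is UTC+2 -- an assumption
# made explicit so the result does not depend on the local zone.
met_dst = timezone(timedelta(hours=2))
t = datetime(1999, 9, 17, 0, 35, 57, tzinfo=met_dst)
print(int(t.timestamp()))   # 937521357, the value used in this tutorial
```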
If you process unknown values +Friday September 17, 1999, 00:35:57 MET DST. Translate this time in seconds +since 1970-01-01 and it becomes 937'521'357. If you process unknown values that were received after this time, you want to leave them unknown and if they were "received" before this time, you want to translate them into zero (so you can effectively ignore them while adding them to your other routers counters). -Translating Friday September 17, 00:35:57 MET DST into 937521357 can +Translating Friday September 17, 1999, 00:35:57 MET DST into 937'521'357 can be done by, for instance, using gnu date: date -d "19990917 00:35:57" +%s @@ -489,11 +490,11 @@ This is a three step process: =item 1. -If the timestamp of the value is after 937521357, leave it as is +If the timestamp of the value is after 937'521'357, leave it as is. =item 2. -If the value is a known value, leave it as is +If the value is a known value, leave it as is. =item 3. @@ -535,13 +536,13 @@ so lets do it quick: We end up with: C -This looks very complex however as you can see it was not too hard to +This looks very complex, however, as you can see, it was not too hard to come up with. =head2 Example: Pretending weird data isn't there Suppose you have a problem that shows up as huge spikes in your graph. -You know this happens and why so you decide to work around the problem. +You know this happens and why, so you decide to work around the problem. Perhaps you're using your network to do a backup at night and by doing so you get almost 10mb/s while the rest of your network activity does not produce numbers higher than 100kb/s. @@ -553,11 +554,11 @@ There are two options: =item 1. If the number exceeds 100kb/s it is wrong and you want it masked out -by changing it into unknown +by changing it into unknown. =item 2. -You don't want the graph to show more than 100kb/s +You don't want the graph to show more than 100kb/s. 
=back @@ -573,7 +574,7 @@ the numbers to display maxima they will be set to 100kb/s. We use "IF" and "GT" again. "if (x) then (y) else (z)" is written down as "CDEF:result=x,y,z,IF"; now fill in x, y and z. For x you fill in "number greater than 100kb/s" becoming -"number,100000,GT" (kilo is 1000 and b/s is what we measure!). +"number,100000,GT" (kilo is 1'000 and b/s is what we measure!). The "z" part is "number" in both cases and the "y" part is either "UNKN" for unknown or "100000" for 100kb/s. @@ -585,7 +586,7 @@ The two CDEF expressions would be: =head2 Example: working on a certain time span If you want a graph that spans a few weeks, but would only want to -see some routers data for one week, you need to "hide" the rest of +see some routers' data for one week, you need to "hide" the rest of the time frame. Don't ask me when this would be useful, it's just here for the example :) @@ -695,11 +696,11 @@ if you like. But there are good reasons for writing two CDEFS: =item * -It improves the readability of the script +It improves the readability of the script. =item * -It can be used inside GPRINT to display the total number of users +It can be used inside GPRINT to display the total number of users. =back @@ -755,13 +756,15 @@ enough for this purpose and it saves a calculation. AREA:agginput#00cc00:Input Aggregate \ LINE1:aggoutput#0000FF:Output Aggregate -These two CDEFs are built from several functions. It helps to -split them when viewing what they do. -Starting with the first CDEF we would get: - idat1,UN --> a - 0 --> b - idat1 --> c - if (a) then (b) else (c) +These two CDEFs are built from several functions. It helps to split +them when viewing what they do. Starting with the first CDEF we would +get: + + idat1,UN --> a + 0 --> b + idat1 --> c + if (a) then (b) else (c) + The result is therefore "0" if it is true that "idat1" equals "UN". If not, the original value of "idat1" is put back on the stack. Lets call this answer "d". 
The process is repeated for the next @@ -797,10 +800,10 @@ to see what happens in the "background" CDEF. This RPN takes the value of "val4" as input and then immediately removes it from the stack using "POP". The stack is now empty but -as a side result we now know the time that this sample was taken. +as a side effect we now know the time that this sample was taken. This time is put on the stack by the "TIME" function. -"TIME,7200,%" takes the modulo of time and 7200 (which is two hours). +"TIME,7200,%" takes the modulo of time and 7'200 (which is two hours). The resulting value on the stack will be a number in the range from 0 to 7199. @@ -821,7 +824,8 @@ won't do that here. Now you can draw the different layers. Start with the background that is either unknown (nothing to see) or infinite (the whole positive part of the graph gets filled). -Next you draw the data on top of this background. It will overlay + +Next you draw the data on top of this background, it will overlay the background. Suppose one of val1..val4 would be unknown, in that case you end up with only three bars stacked on top of each other. You don't want to see this because the data is only valid when all @@ -861,10 +865,11 @@ You may do some complex data filtering: =head1 Out of ideas for now -This document was created from questions asked by either myself or -by other people on the list. Please let me know if you find errors -in it or if you have trouble understanding it. If you think there -should be an addition, mail me: Ealex@ergens.op.het.netE +This document was created from questions asked by either myself or by +other people on the RRDtool mailing list. Please let me know if you +find errors in it or if you have trouble understanding it. 
If you +think there should be an addition, mail me: +Ealex@ergens.op.het.netE Remember: B diff --git a/doc/rrd-beginners.pod b/doc/rrd-beginners.pod index 1ad8c34..37145e6 100644 --- a/doc/rrd-beginners.pod +++ b/doc/rrd-beginners.pod @@ -1,6 +1,6 @@ =head1 NAME -rrd-beginners - RRDtool Beginners guide +rrd-beginners - RRDtool Beginners' Guide =head1 SYNOPSIS @@ -10,35 +10,35 @@ Helping new RRDtool users to understand the basics of RRDtool This manual is an attempt to assist beginners in understanding the concepts of RRDtool. It sheds a light on differences between RRDtool and other -databases. With help of an example, it explains structure of RRDtool +databases. With help of an example, it explains the structure of RRDtool database. This is followed by an overview of the "graph" feature of RRDtool. -At the end, it has sample scripts that illustrates the +At the end, it has sample scripts that illustrate the usage/wrapping of RRDtool within Shell or Perl scripts. =head2 What makes RRDtool so special? RRDtool is GNU licensed software developed by Tobias Oetiker, a system manager at the Swiss Federal Institute of Technology. Though it is a -database, there are distinct differences between RRDtool database and other +database, there are distinct differences between RRDtool databases and other databases as listed below: =over =item * -RRDtool stores data; that makes it a back end tool. The RRDtool command set -allows the creation of graphs; that makes it a front end tool as well. Other -databases just stores data and can not create graphs. +RRDtool stores data; that makes it a back-end tool. The RRDtool command set +allows the creation of graphs; that makes it a front-end tool as well. Other +databases just store data and can not create graphs. =item * In case of linear databases, new data gets appended at the bottom of -the database table. Thus its size keeps on increasing, whereas size of an RRDtool -database is determined at creation time. 
Imagine an RRDtool database as the -perimeter of a circle. Data is added along the perimeter. When new data -reaches the starting point, it overwrites existing data. This way, the size of -an RRDtool database always remains constant. The name "Round Robin" stems from this -attribute. +the database table. Thus its size keeps on increasing, whereas the size of +an RRDtool database is determined at creation time. Imagine an RRDtool +database as the perimeter of a circle. Data is added along the +perimeter. When new data reaches the starting point, it overwrites +existing data. This way, the size of an RRDtool database always +remains constant. The name "Round Robin" stems from this behavior. =item * @@ -52,24 +52,25 @@ Other databases get updated when values are supplied. The RRDtool database is structured in such a way that it needs data at predefined time intervals. If it does not get a new value during the interval, it stores an UNKNOWN value for that interval. So, when using the RRDtool database, it is -imperative to use scripts that runs at regular intervals to ensure a constant +imperative to use scripts that run at regular intervals to ensure a constant data flow to update the RRDtool database. =back -RRDtool has a lot to do with time. With every data update, it also needs to -know the time when that update occurred. Time is always expressed in -seconds passed since epoch (01-01-1971). RRDtool can be installed on Unix as -well as Windows. It has command set to carry out various -operations on RRD database. This command set can be accessed from the command line, -and from Shell or Perl scripts. The scripts -act as wrappers for accessing data stored in RRDtool database. +RRDtool is designed to store time series of data. With every data +update, an associated time stamp is stored. Time is always expressed +in seconds passed since epoch (1970-01-01). RRDtool can be installed +on Unix as well as Windows. 
It comes with a command set to carry out +various operations on RRD databases. This command set can be accessed +from the command line, as well as from Shell or Perl scripts. The +scripts act as wrappers for accessing data stored in RRDtool +databases. =head2 Understanding by an example The structure of an RRD database is different than other linear databases. Other databases define tables with columns, and many other parameters. These -definitions sometime are very complex, especially in large databases. +definitions sometimes are very complex, especially in large databases. RRDtool databases are primarily used for monitoring purposes and hence are very simple in structure. The parameters that need to be defined are variables that hold values and archives of those @@ -90,45 +91,47 @@ best explained with an example. RRA:AVERAGE:0.5:12:24 \ RRA:AVERAGE:0.5:288:31 -This example creates a database named F. Start time (1023654125) is -specified in total number of seconds since epoch (time in seconds since -01-01-1970). While updating the database, update time is also specified. -This update time MUST occur after start time and MUST be in seconds since -epoch. +This example creates a database named F. Start time +(1'023'654'125) is specified in total number of seconds since epoch +(time in seconds since 01-01-1970). While updating the database, the +update time is also specified. This update time MUST be later +than the start time and MUST be in seconds since epoch. The step of 300 seconds indicates that database expects new values every 300 seconds. The wrapper script should be scheduled to run every B seconds so that it updates the database every B seconds. DS (Data Source) is the actual variable which relates to the parameter on -the device that has to be monitored. Its syntax is +the device that is monitored. Its syntax is DS:variable_name:DST:heartbeat:min:max B is a key word. C is a name under which the parameter is -saved in database. 
There can be as many DSs in a database as needed. After +saved in the database. There can be as many DSs in a database as needed. After every step interval, a new value of DS is supplied to update the database. -This value is also called as Primary Data Point B<(PDP)>. In our example +This value is also called Primary Data Point B<(PDP)>. In our example mentioned above, a new PDP is generated every 300 seconds. Note, that if you do NOT supply new datapoints exactly every 300 seconds, -this is not problem, RRDtool will interpolate the data accordingly. - -B (Data Source Type) defines type of DS. It can be COUNTER, DERIVE, -ABSOLUTE, GAUGE. A DS declared as COUNTER will save the rate of change of -the value over a step period. This assumes that the value is always -increasing (difference between last two values is more than 0). Traffic -counters on a router is an ideal candidate for using COUNTER as DST. DERIVE -is same as COUNTER but it allows negative values as well. If you want to see -the rate of I in free diskspace on your server, then you might want to -use the DERIVE data type. ABSOLUTE also saves the rate of change but it assumes -that previous value is set to 0. The difference between current and previous -value is always equal to the current value. So, it stores the current value divided -by step interval (300 seconds in our example). GAUGE does not save the rate of -change. It saves the actual value itself. There are no -divisions/calculations. Memory consumption in a server is an ideal -example of gauge. Difference among different types DSTs can be explained -better with following example: +this is not a problem, RRDtool will interpolate the data accordingly. + +B (Data Source Type) defines the type of the DS. It can be +COUNTER, DERIVE, ABSOLUTE, GAUGE. A DS declared as COUNTER will save +the rate of change of the value over a step period. 
This assumes that +the value is always increasing (the difference between the current and +the previous value is greater than 0). Traffic counters on a router +are an ideal candidate for using COUNTER as DST. DERIVE is the same as +COUNTER, but it allows negative values as well. If you want to see the +rate of I in free diskspace on your server, then you might +want to use the DERIVE data type. ABSOLUTE also saves the rate of +change, but it assumes that the previous value is set to 0. The +difference between the current and the previous value is always equal +to the current value. Thus it just stores the current value divided by +the step interval (300 seconds in our example). GAUGE does not save +the rate of change. It saves the actual value itself. There are no +divisions or calculations. Memory consumption in a server is a typical +example of gauge. The difference between the different types DSTs can be +explained better with the following example: Values = 300, 600, 900, 1200 Step = 300 seconds @@ -138,88 +141,95 @@ better with following example: GAUGE DS = 300, 600, 900, 1200 The next parameter is B. In our example, heartbeat is 600 -seconds. If database does not get a new PDP within 300 -seconds, it will wait for another 300 seconds (total 600 seconds). -If it doesn't receive any PDP with in 600 seconds, it will save an UNKNOWN value -into database. This UNKNOWN value is a special feature of RRDtool - it is -much better than to assume a missing value was 0 (zero). -For example, the traffic flow counter on a router -keeps on increasing. Lets say, a value is missed for an interval and 0 is stored -instead of UNKNOWN. Now when next value becomes available, it will calculate -difference between current value and previous value (0) which is not -correct. So, inserting value UNKNOWN makes much more sense here. - -The next two parameters are the minimum and maximum value respectively. 
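The DST table above can be reproduced with a few lines of Python. This is a sketch of the arithmetic only, not RRDtool's implementation; the first COUNTER/DERIVE rate assumes a previous value of 0 one step before the first update:

```python
STEP = 300                       # seconds between updates
values = [300, 600, 900, 1200]   # the update values from the example
prevs = [0] + values[:-1]        # assumed previous value for each update

counter_ds  = [(v - p) // STEP for p, v in zip(prevs, values)]  # DERIVE gives the same here
absolute_ds = [v // STEP for v in values]   # previous value assumed reset to 0
gauge_ds    = list(values)                  # stored unchanged, no division

print(counter_ds)    # [1, 1, 1, 1]
print(absolute_ds)   # [1, 2, 3, 4]
print(gauge_ds)      # [300, 600, 900, 1200]
```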
If variable -to be stored has predictable maximum and minimum value, this should be -specified here. Any update value falling out of this range will be saved as -UNKNOWN. -The next line declares a round robin archive (RRA). The syntax for declaring an RRA is +seconds. If the database does not get a new PDP within 300 seconds, it +will wait for another 300 seconds (total 600 seconds). If it doesn't +receive any PDP within 600 seconds, it will save an UNKNOWN value into +the database. This UNKNOWN value is a special feature of RRDtool - it +is much better than assuming a missing value was 0 (zero) or any +other number which might also be a valid data value. For example, the +traffic flow counter on a router keeps increasing. Let's say, a value +is missed for an interval and 0 is stored instead of UNKNOWN. Now when +the next value becomes available, it will calculate the difference +between the current value and the previous value (0) which is not +correct. So, inserting the value UNKNOWN makes much more sense here. +The next two parameters are the minimum and maximum value, +respectively. If the variable to be stored has predictable maximum and +minimum values, this should be specified here. Any update value +falling out of this range will be stored as UNKNOWN. +The next line declares a round robin archive (RRA). The syntax for +declaring an RRA is RRA:CF:xff:step:rows -RRA is the keyword to declare RRAs. The consolidation function (CF) can be -AVERAGE, MINIMUM, MAXIMUM, and LAST. The concept of the consolidated data point (CDP) -comes into the picture here. A CDP is CFed (averaged, maximum/minimum value or -last value) from I number of PDPs. This RRA will hold I CDPs. -Lets have a look at the example above. For the first RRA, 12 (steps) PDPs -(DS variables) are AVERAGEed (CF) to form one CDP. 24 (rows) of theses CDPs -are archived. Each PDP occurs at 300 seconds. 12 PDPs represent 12 times 300 -seconds which is 1 hour. 
It means 1 CDP (which is equal to 12 PDPs) -represents data worth 1 hour. 24 such CDPs represent 1 day (1 hour times 24 -CDPs). It means, this RRA is an archive for one day. After 24 CDPs, CDP -number 25 will replace the 1st CDP. Second RRA saves 31 CDPs; each CPD -represents an AVERAGE value for a day (288 PDPs, each covering 300 seconds = -24 hours). Therefore this RRA is an archive for one month. A single database -can have many RRAs. If there are multiple DSs, each individual RRA will save -data for all the DSs in the database. For example, if a database has 3 DSs; -and daily, weekly, monthly, and yearly RRAs are declared, then each RRA will -hold data from all 3 data sources. +RRA is the keyword to declare RRAs. The consolidation function (CF) +can be AVERAGE, MINIMUM, MAXIMUM, and LAST. The concept of the +consolidated data point (CDP) comes into the picture here. A CDP is +CFed (averaged, maximum/minimum value or last value) from I +number of PDPs. This RRA will hold I CDPs. +Let's have a look at the example above. For the first RRA, 12 (steps) +PDPs (DS variables) are AVERAGEed (CF) to form one CDP. 24 (rows) of +these CDPs are archived. Each PDP occurs at 300 seconds. 12 PDPs +represent 12 times 300 seconds which is 1 hour. It means 1 CDP (which +is equal to 12 PDPs) represents data worth 1 hour. 24 such CDPs +represent 1 day (1 hour times 24 CDPs). This means this RRA is an +archive for one day. After 24 CDPs, CDP number 25 will replace the 1st +CDP. The second RRA saves 31 CDPs; each CDP represents an AVERAGE +value for a day (288 PDPs, each covering 300 seconds = 24 +hours). Therefore this RRA is an archive for one month. A single +database can have many RRAs. If there are multiple DSs, each +individual RRA will save data for all the DSs in the database. For +example, if a database has 3 DSs and daily, weekly, monthly, and +yearly RRAs are declared, then each RRA will hold data from all 3 data +sources. 
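The archive arithmetic above is easy to verify: each RRA covers step times steps-per-CDP times rows seconds. A quick check (illustrative Python, not RRDtool output):

```python
STEP = 300  # seconds per PDP, from the create example

def rra_span(steps_per_cdp, rows):
    # total seconds covered by one RRA
    return STEP * steps_per_cdp * rows

print(rra_span(12, 24))    # RRA:AVERAGE:0.5:12:24  -> 86400 s = one day
print(rra_span(288, 31))   # RRA:AVERAGE:0.5:288:31 -> 2678400 s = 31 days
```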
=head2 Graphical Magic -Another important feature of RRDtool is its ability to create graphs. The -"graph" command uses "fetch" command internally to retrieve values from the -database. With the retrieved values, it draws graphs as defined by the -parameters supplied on the command line. A single graph can show different -DS (Data Sources0) from a database. It is also possible to show the -values from more than one databases into a single graph. Often, it is -necessary to perform some math on the values retrieved from database, before -plotting them. For example, in SNMP replies, memory consumption values are -usually specified in KBytes and traffic flow on interfaces is specified in -Bytes. Graphs for these values will be more senseful if values are -represented in MBytes and mbps. the RRDtool graph command allows to define -such conversions. Apart from mathematical calculations, it is also possible -to perform logical operations such as greater than, less than, and if then -else. If a database contains more than one RRA archive, then a question may -arise - how does RRDtool decide which RRA archive to use for retrieving the -values? RRDtool takes looks at several things when making its choice. First -it makes sure that the RRA covers as much of the graphing time frame as -possible. Second it looks at the resolution of the RRA compared to the -resolution of the graph. It tries to find one which has the same or better -resolution. With the "-r" option you can force RRDtool to assume a different -resolution than the one calculated from the pixel width of the graph. - -Values of different variables can be presented in 5 different shapes in a -graph - AREA, LINE1, LINE2, LINE3, and STACK. AREA is represented by a solid -colored area with values as the boundary of this area. LINE1/2/3 (increasing -width) are just plain lines representing the values. STACK is also an area -but it is "stack"ed on AREA or LINE1/2/3. 
Another important thing to note, -is that variables are plotted in the order they are defined in graph -command. So, care must be taken to define STACK only after defining -AREA/LINE. It is also possible to put formatted comments within the graph. -Detailed instructions be found under graph manual. +Another important feature of RRDtool is its ability to create +graphs. The "graph" command uses the "fetch" command internally to +retrieve values from the database. With the retrieved values it draws +graphs as defined by the parameters supplied on the command line. A +single graph can show different DS (Data Sources) from a database. It +is also possible to show the values from more than one database in a +single graph. Often, it is necessary to perform some math on the +values retrieved from the database before plotting them. For example, +in SNMP replies, memory consumption values are usually specified in +KBytes and traffic flow on interfaces is specified in Bytes. Graphs +for these values will be more meaningful if values are represented in +MBytes and mbps. The RRDtool graph command allows you to define such +conversions. Apart from mathematical calculations, it is also possible +to perform logical operations such as greater than, less than, and +if/then/else. If a database contains more than one RRA archive, then a +question may arise - how does RRDtool decide which RRA archive to use +for retrieving the values? RRDtool looks at several things when making +its choice. First it makes sure that the RRA covers as much of the +graphing time frame as possible. Second it looks at the resolution of +the RRA compared to the resolution of the graph. It tries to find one +which has the same or a better resolution. With the "-r" option +you can force RRDtool to assume a different resolution than the one +calculated from the pixel width of the graph. +Values of different variables can be presented in 5 different shapes +in a graph - AREA, LINE1, LINE2, LINE3, and STACK. 
AREA is represented +by a solid colored area with values as the boundary of this +area. LINE1/2/3 (increasing width) are just plain lines representing +the values. STACK is also an area but it is "stack"ed on top AREA or +LINE1/2/3. Another important thing to note is that variables are +plotted in the order they are defined in the graph command. Therefore +care must be taken to define STACK only after defining AREA/LINE. It +is also possible to put formatted comments within the graph. Detailed +instructions can be found in the graph manual. =head2 Wrapping RRDtool within Shell/Perl script -After understanding RRDtool, it is now a time to actually use RRDtool in -scripts. Tasks involved in network management are data collection, data -storage, and data retrieval. In the following example, -the previously created target.rrd database is used. Data collection and data -storage is done using Shell scrip. Data retrieval -and report generation is done using Perl script. These -scripts are as shown below: +After understanding RRDtool it is now a time to actually use RRDtool +in scripts. Tasks involved in network management are data collection, +data storage, and data retrieval. In the following example, the +previously created target.rrd database is used. Data collection and +data storage is done using Shell scripts. Data retrieval and report +generation is done using Perl scripts. 
These scripts are shown below: =head3 Shell script (collects data, updates database) @@ -241,21 +251,21 @@ scripts are as shown below: =head3 Perl script (retrieves data from database and generates graphs and statistics) #!/usr/bin/perl -w - #This script fetch data from target.rrd, creates graph of memory consumption - on target (Dual P3 Processor 1 GHz, 656 MB RAM) + # This script fetches data from target.rrd, creates a graph of memory + # consumption on the target (Dual P3 Processor 1 GHz, 656 MB RAM) - #calling RRD perl module + # call the RRD perl module use lib qw( /usr/local/rrdtool-1.0.41/lib/perl ../lib/perl ); use RRDs; - my $cur_time = time(); # setting current time - my $end_time = $cur_time - 86400; # setting end time 24 hours ago - my $start_time = $end_time - 2592000; # setting start 30 days in the future + my $cur_time = time(); # set current time + my $end_time = $cur_time - 86400; # set end time to 24 hours ago + my $start_time = $end_time - 2592000; # set start 30 days in the past - #fetching average values from RRD database between start and end time + # fetch average values from the RRD database between start and end time my ($start,$step,$ds_names,$data) = RRDs::fetch("target.rrd", "AVERAGE", "-r", "600", "-s", "$start_time", "-e", "$end_time"); - #saving fetched values in 2-dimensional array + # save fetched values in a 2-dimensional array my $rows = 0; my $columns = 0; my $time_variable = $start; @@ -269,19 +279,19 @@ scripts are as shown below: } my $tot_time = 0; my $count = 0; - #saving values from 2-dimensional into 1-dimensional array + # save the values from the 2-dimensional into a 1-dimensional array for $i ( 0 .. $#vals ) { $tot_mem[$count] = $vals[$i][1]; $count++; } my $tot_mem_sum = 0; - #calculating total of all values + # calculate the total of all values for $i ( 0 .. 
($count-1) ) { $tot_mem_sum = $tot_mem_sum + $tot_mem[$i]; } - #calculating average of array + # calculate the average of the array my $tot_mem_ave = $tot_mem_sum/($count); - #creating graph + # create the graph RRDs::graph ("/images/mem_$count.png", \ "--title= Memory Usage", \ "--vertical-label=Memory Consumption (MB)", \ @@ -305,7 +315,7 @@ scripts are as shown below: "AREA:tot_mem_cor#6699CC:Total memory consumed in MB"); my $err=RRDs::error; if ($err) {print "problem generating the graph: $err\n";} - #printing the output + # print the output print "Average memory consumption is "; printf "%5.2f",$tot_mem_ave/1024; print " MB. Graphical representation can be found at /images/mem_$count.png."; diff --git a/doc/rrdcreate.pod b/doc/rrdcreate.pod index 80f3970..5c44562 100644 --- a/doc/rrdcreate.pod +++ b/doc/rrdcreate.pod @@ -12,10 +12,9 @@ S<[BIB<:>I]> =head1 DESCRIPTION -The create function of the RRDtool lets you set up new -Round Robin Database (B) files. -The file is created at its final, full size and filled -with I<*UNKNOWN*> data. +The create function of RRDtool lets you set up new Round Robin +Database (B) files. The file is created at its final, full size +and filled with I<*UNKNOWN*> data. =over 8 @@ -32,7 +31,7 @@ value should be added to the B. B will not accept any data timed before or at the time specified. See also AT-STYLE TIME SPECIFICATION section in the -I documentation for more ways to specify time. +I documentation for other ways to specify time. =item B<--step>|B<-s> I (default: 300 seconds) @@ -41,17 +40,17 @@ into the B. =item BIB<:>IB<:>I -A single B can accept input from several data sources (B). -(e.g. Incoming and Outgoing traffic on a specific communication -line). With the B configuration option you must define some basic -properties of each data source you want to use to feed the B. +A single B can accept input from several data sources (B), +for example incoming and outgoing traffic on a specific communication +line. 
With the B configuration option you must define some basic +properties of each data source you want to store in the B. I is the name you will use to reference this particular data source from an B. A I must be 1 to 19 characters long in the characters [a-zA-Z0-9_]. I defines the Data Source Type. The remaining arguments of a -data source entry depend upon the data source type. For GAUGE, COUNTER, +data source entry depend on the data source type. For GAUGE, COUNTER, DERIVE, and ABSOLUTE the format for a data source entry is: BIB<:>IB<:>IB<:>IB<:>I @@ -60,24 +59,26 @@ For COMPUTE data sources, the format is: BIB<:>IB<:>I -To decide on a data source type, review the definitions that follow. -Consult the section on "HOW TO MEASURE" for further insight. +In order to decide which data source type to use, review the +definitions that follow. Also consult the section on "HOW TO MEASURE" +for further insight. =over 4 =item B -is for things like temperatures or number of people in a -room or value of a RedHat share. +is for things like temperatures or number of people in a room or the +value of a RedHat share. =item B -is for continuous incrementing counters like the -ifInOctets counter in a router. The B data source assumes that -the counter never decreases, except when a counter overflows. The update -function takes the overflow into account. The counter is stored as a -per-second rate. When the counter overflows, RRDtool checks if the overflow happened at -the 32bit or 64bit border and acts accordingly by adding an appropriate value to the result. +is for continuous incrementing counters like the ifInOctets counter in +a router. The B data source assumes that the counter never +decreases, except when a counter overflows. The update function takes +the overflow into account. The counter is stored as a per-second +rate. 
When the counter overflows, RRDtool checks if the overflow +happened at the 32bit or 64bit border and acts accordingly by adding +an appropriate value to the result. =item B @@ -115,19 +116,20 @@ wrap. is for counters which get reset upon reading. This is used for fast counters which tend to overflow. So instead of reading them normally you reset them -after every read to make sure you have a maximal time available before the +after every read to make sure you have a maximum time available before the next overflow. Another usage is for things you count like number of messages since the last update. =item B -is for storing the result of a formula applied to other data sources in -the B. This data source is not supplied a value on update, but rather -its Primary Data Points (PDPs) are computed from the PDPs of the data sources -according to the rpn-expression that defines the formula. Consolidation -functions are then applied normally to the PDPs of the COMPUTE data source -(that is the rpn-expression is only applied to generate PDPs). In database -software, these are referred to as "virtual" or "computed" columns. +is for storing the result of a formula applied to other data sources +in the B. This data source is not supplied a value on update, but +rather its Primary Data Points (PDPs) are computed from the PDPs of +the data sources according to the rpn-expression that defines the +formula. Consolidation functions are then applied normally to the PDPs +of the COMPUTE data source (that is the rpn-expression is only applied +to generate PDPs). In database software, such data sets are referred +to as "virtual" or "computed" columns. =back @@ -139,23 +141,24 @@ I and I are optional entries defining the expected range of the data supplied by this data source. If I and/or I are defined, any value outside the defined range will be regarded as I<*UNKNOWN*>. If you do not know or care about min and max, set them -to U for unknown. 
Note that min and max always refer to the processed values -of the DS. For a traffic-B type DS this would be the max and min -data-rate expected from the device. +to U for unknown. Note that min and max always refer to the processed +values of the DS. For a traffic-B type DS this would be the +maximum and minimum data-rate expected from the device. I -I defines the formula used to compute the PDPs of a COMPUTE -data source from other data sources in the same . It is similar to defining -a B argument for the graph command. Please refer to that manual page -for a list and description of RPN operations supported. For -COMPUTE data sources, the following RPN operations are not supported: COUNT, PREV, -TIME, and LTIME. In addition, in defining the RPN expression, the COMPUTE -data source may only refer to the names of data source listed previously -in the create command. This is similar to the restriction that Bs must -refer only to Bs and Bs previously defined in the same graph command. +I defines the formula used to compute the PDPs of a +COMPUTE data source from other data sources in the same . It is +similar to defining a B argument for the graph command. Please +refer to that manual page for a list and description of RPN operations +supported. For COMPUTE data sources, the following RPN operations are +not supported: COUNT, PREV, TIME, and LTIME. In addition, in defining +the RPN expression, the COMPUTE data source may only refer to the +names of data source listed previously in the create command. This is +similar to the restriction that Bs must refer only to Bs +and Bs previously defined in the same graph command. =item BIB<:>I @@ -164,13 +167,15 @@ The purpose of an B is to store data in the round robin archives (B). An archive consists of a number of data values or statistics for each of the defined data-sources (B) and is defined with an B line. 
-When data is entered into an B, it is first fit into time slots of -the length defined with the B<-s> option becoming a I. +When data is entered into an B, it is first fit into time slots +of the length defined with the B<-s> option, thus becoming a I. -The data is also processed with the consolidation function (I) -of the archive. There are several consolidation functions that consolidate -primary data points via an aggregate function: B, B, B, B. -The format of B line for these consolidation functions is: +The data is also processed with the consolidation function (I) of +the archive. There are several consolidation functions that +consolidate primary data points via an aggregate function: B, +B, B, B. The format of B line for these +consolidation functions is: BIB<:>IB<:>IB<:>I @@ -189,8 +194,8 @@ I defines how many generations of data values are kept in an B. In addition to the aggregate functions, there are a set of specialized functions that enable B to provide data smoothing (via the -Holt-Winters forecasting algorithm), confidence bands, and the flagging -aberrant behavior in the data source time series: +Holt-Winters forecasting algorithm), confidence bands, and the +flagging aberrant behavior in the data source time series: =over @@ -219,28 +224,28 @@ BIB<:>IB<:>IB<:>IB<:>I These B differ from the true consolidation functions in several ways. First, each of the Bs is updated once for every primary data point. Second, these B are interdependent. To generate real-time confidence -bounds, then a matched set of HWPREDICT, SEASONAL, DEVSEASONAL, and +bounds, a matched set of HWPREDICT, SEASONAL, DEVSEASONAL, and DEVPREDICT must exist. Generating smoothed values of the primary data points requires both a HWPREDICT B and SEASONAL B. Aberrant behavior detection requires FAILURES, HWPREDICT, DEVSEASONAL, and SEASONAL. The actual predicted, or smoothed, values are stored in the HWPREDICT -B. The predicted deviations are store in DEVPREDICT (think a standard +B. 
The predicted deviations are stored in DEVPREDICT (think a standard deviation which can be scaled to yield a confidence band). The FAILURES -B stores binary indicators. A 1 marks the indexed observation a +B stores binary indicators. A 1 marks the indexed observation as failure; that is, the number of confidence bounds violations in the preceding window of observations met or exceeded a specified threshold. An example of using these B to graph confidence bounds and failures appears in L. The SEASONAL and DEVSEASONAL B store the seasonal coefficients for the -Holt-Winters forecasting algorithm and the seasonal deviations respectively. +Holt-Winters forecasting algorithm and the seasonal deviations, respectively. There is one entry per observation time point in the seasonal cycle. For -example, if primary data points are generated every five minutes, and the -seasonal cycle is 1 day, both SEASONAL and DEVSEASONAL with have 288 rows. +example, if primary data points are generated every five minutes and the +seasonal cycle is 1 day, both SEASONAL and DEVSEASONAL will have 288 rows. In order to simplify the creation for the novice user, in addition to -supporting explicit creation the HWPREDICT, SEASONAL, DEVPREDICT, +supporting explicit creation of the HWPREDICT, SEASONAL, DEVPREDICT, DEVSEASONAL, and FAILURES B, the B create command supports implicit creation of the other four when HWPREDICT is specified alone and the final argument I is omitted. @@ -253,7 +258,7 @@ default number of rows is the same as the HWPREDICT I argument. If the FAILURES B is implicitly created, I will be set to the I argument of the HWPREDICT B. Of course, the B I command is available if these defaults are not sufficient and the -create wishes to avoid explicit creations of the other specialized function +creator wishes to avoid explicit creations of the other specialized function B. 
I specifies the number of primary data points in a seasonal @@ -266,8 +271,8 @@ I is the adaption parameter of the intercept (or baseline) coefficient in the Holt-Winters forecasting algorithm. See L for a description of this algorithm. I must lie between 0 and 1. A value closer to 1 means that more recent observations carry greater weight in -predicting the baseline component of the forecast. A value closer to 0 mean -that past history carries greater weight in predicted the baseline +predicting the baseline component of the forecast. A value closer to 0 means +that past history carries greater weight in predicting the baseline component. I is the adaption parameter of the slope (or linear trend) coefficient @@ -292,13 +297,14 @@ be the same for both. Note that I can also be changed via the B I command. I provides the links between related B. If HWPREDICT is -specified alone and the other B created implicitly, then there is no -need to worry about this argument. If B are created explicitly, then -pay careful attention to this argument. For each B which includes this -argument, there is a dependency between that B and another B. The -I argument is the 1-based index in the order of B creation -(that is, the order they appear in the I command). The dependent -B for each B requiring the I argument is listed here: +specified alone and the other B are created implicitly, then +there is no need to worry about this argument. If B are created +explicitly, then carefully pay attention to this argument. For each +B which includes this argument, there is a dependency between +that B and another B. The I argument is the 1-based +index in the order of B creation (that is, the order they appear +in the I command). The dependent B for each B +requiring the I argument is listed here: =over @@ -341,7 +347,7 @@ It may help you to sort out why all this *UNKNOWN* data is popping up in your databases: RRDtool gets fed samples at arbitrary times. 
From these it builds Primary -Data Points (PDPs) at exact times every "step" interval. The PDPs are +Data Points (PDPs) at exact times on every "step" interval. The PDPs are then accumulated into RRAs. The "heartbeat" defines the maximum acceptable interval between @@ -356,7 +362,7 @@ The known rates during a PDP's "step" interval are used to calculate an average rate for that PDP. Also, if the total "unknown" time during the "step" interval exceeds the "heartbeat", the entire PDP is marked as "unknown". This means that a mixture of known and "unknown" sample -time in a single PDP "step" may or may not add up to enough "unknown" +times in a single PDP "step" may or may not add up to enough "unknown" time to exceed "heartbeat" and hence mark the whole PDP "unknown". So "heartbeat" is not only the maximum acceptable interval between samples, but also the maximum acceptable amount of "unknown" time per @@ -383,17 +389,17 @@ Here are a few hints on how to measure: =item Temperature -Normally you have some type of meter you can read to get the temperature. +Usually you have some type of meter you can read to get the temperature. The temperature is not really connected with a time. The only connection is that the temperature reading happened at a certain time. You can use the -B data source type for this. RRDtool will the record your reading +B data source type for this. RRDtool will then record your reading together with the time. =item Mail Messages Assume you have a method to count the number of messages transported by -your mailserver in a certain amount of time, this give you data like '5 -messages in the last 65 seconds'. If you look at the count of 5 like and +your mailserver in a certain amount of time, giving you data like '5 +messages in the last 65 seconds'. If you look at the count of 5 like an B data type you can simply update the RRD with the number 5 and the end time of your monitoring period. RRDtool will then record the number of messages per second. 
If at some later stage you want to know the number of @@ -404,16 +410,17 @@ precision should be acceptable. =item It's always a Rate -RRDtool stores rates in amount/second for COUNTER, DERIVE and ABSOLUTE data. -When you plot the data, you will get on the y axis amount/second which you -might be tempted to convert to absolute amount volume by multiplying by the -delta-time between the points. RRDtool plots continuous data, and as such is -not appropriate for plotting absolute volumes as for example "total bytes" -sent and received in a router. What you probably want is plot rates that you -can scale to for example bytes/hour or plot volumes with another tool that -draws bar-plots, where the delta-time is clear on the plot for each point -(such that when you read the graph you see for example GB on the y axis, -days on the x axis and one bar for each day). +RRDtool stores rates in amount/second for COUNTER, DERIVE and ABSOLUTE +data. When you plot the data, you will get on the y axis +amount/second which you might be tempted to convert to an absolute +amount by multiplying by the delta-time between the points. RRDtool +plots continuous data, and as such is not appropriate for plotting +absolute amounts as for example "total bytes" sent and received in a +router. What you probably want is plot rates that you can scale to +bytes/hour, for example, or plot absolute amounts with another tool +that draws bar-plots, where the delta-time is clear on the plot for +each point (such that when you read the graph you see for example GB +on the y axis, days on the x axis and one bar for each day). =back @@ -430,12 +437,12 @@ days on the x axis and one bar for each day). This sets up an B called F which accepts one temperature value every 300 seconds. If no new data is supplied for more than 600 seconds, the temperature becomes I<*UNKNOWN*>. The -minimum acceptable value is -273 and the maximum is 5000. +minimum acceptable value is -273 and the maximum is 5'000. 
-A few archives areas are also defined. The first stores the -temperatures supplied for 100 hours (1200 * 300 seconds = 100 +A few archive areas are also defined. The first stores the +temperatures supplied for 100 hours (1'200 * 300 seconds = 100 hours). The second RRA stores the minimum temperature recorded over -every hour (12 * 300 seconds = 1 hour), for 100 days (2400 hours). The +every hour (12 * 300 seconds = 1 hour), for 100 days (2'400 hours). The third and the fourth RRA's do the same for the maximum and average temperature, respectively. @@ -449,23 +456,23 @@ average temperature, respectively. This example is a monitor of a router interface. The first B tracks the traffic flow in octets; the second B generates the specialized functions B for aberrant behavior detection. Note that the I -argument of HWPREDICT is missing, so the other B will be implicitly be +argument of HWPREDICT is missing, so the other B will implicitly be created with default parameter values. In this example, the forecasting algorithm baseline adapts quickly; in fact the most recent one hour of -observations (each at 5 minute intervals) account for 75% of the baseline +observations (each at 5 minute intervals) accounts for 75% of the baseline prediction. The linear trend forecast adapts much more slowly. Observations -made in during the last day (at 288 observations per day) account for only +made during the last day (at 288 observations per day) account for only 65% of the predicted linear trend. Note: these computations rely on an -exponential smoothing formula described in a forthcoming LISA 2000 paper. +exponential smoothing formula described in the LISA 2000 paper. The seasonal cycle is one day (288 data points at 300 second intervals), and the seasonal adaption parameter will be set to 0.1. The RRD file will store 5 -days (1440 data points) of forecasts and deviation predictions before wrap +days (1'440 data points) of forecasts and deviation predictions before wrap around. 
The file will store 1 day (a seasonal cycle) of 0-1 indicators in the FAILURES B. -The same RRD file and B are created with the following command, which explicitly -creates all specialized function B. +The same RRD file and B are created with the following command, +which explicitly creates all specialized function B. rrdtool create monitor.rrd --step 300 \ DS:ifOutOctets:COUNTER:1800:0:4294967295 \ @@ -476,8 +483,8 @@ creates all specialized function B. RRA:DEVSEASONAL:288:0.1:2 \ RRA:FAILURES:288:7:9:5 -Of course, explicit creation need not replicate implicit create, a number of arguments -could be changed. +Of course, explicit creation need not replicate implicit create, a +number of arguments could be changed. =head1 EXAMPLE 3 diff --git a/doc/rrdgraph.pod b/doc/rrdgraph.pod index ff54d6f..865e256 100644 --- a/doc/rrdgraph.pod +++ b/doc/rrdgraph.pod @@ -1,6 +1,6 @@ =head1 NAME -rrdgraph - About drawing pretty graphs with rrdtool +rrdgraph - Round Robin Database tool grapher functions =head1 SYNOPSIS @@ -16,35 +16,34 @@ B I The B function of B is used to present the data from an B to a human viewer. Its main purpose is to -create a nice graphical representation but it can also generate +create a nice graphical representation, but it can also generate a numerical report. =head1 OVERVIEW -B needs data to work with, use one or more +B needs data to work with, so you must use one or more B> statements to collect this data. You are not limited to one database, it's perfectly legal to -collect data from two or more databases (one per statement though). +collect data from two or more databases (one per statement, though). -If you want to display averages, maxima, percentiles etcetera +If you want to display averages, maxima, percentiles, etcetera it is best to collect them now using the B> statement. 
-Currently this makes no difference but in a future version +Currently this makes no difference, but in a future version of rrdtool you may want to collect these values before consolidation. The data fetched from the B is then B so that there is exactly one datapoint per pixel in the graph. If you do not take care yourself, B will expand the range slightly -if necessary (in that case the first and/or last pixel may very -well become unknown!). - -Sometimes data is not exactly as you would like to display it. For -instance, you might be collecting B per second but want to -display B per second. This is where the -B> command is designed for. -After B the data, a copy is made and this copy is -modified using a rather flexible B> command -set. +if necessary. Note, in that case the first and/or last pixel may very +well become unknown! + +Sometimes data is not exactly in the format you would like to display +it. For instance, you might be collecting B per second, but +want to display B per second. This is what the B> command is designed for. After +B the data, a copy is made and this copy is modified +using a rather powerful B> command set. When you are done fetching and processing the data, it is time to graph it (or print it). This ends the B sequence. @@ -56,10 +55,10 @@ graph it (or print it). This ends the B sequence. =item filename The name and path of the graph to generate. It is recommended to -end this in C<.png>, C<.svg> or C<.eps> but B does not enforce this. +end this in C<.png>, C<.svg> or C<.eps>, but B does not enforce this. I can be 'C<->' to send the image to C. In -that case, no other output is generated. +this case, no other output is generated. =item Time range @@ -67,7 +66,7 @@ that case, no other output is generated. [B<-e>|B<--end> I