Programming/Math Question: Precision, Scale, and ...

I don’t understand the reluctance to use those terms. They already have well-defined meanings in Oracle (that’s what you’re using, right?), and any database developer should understand them. The schema specifications use those terms and, indeed, require you to supply exact values for them. There’s no need to specify the number of digits before the decimal point, because that is already implied by the existing two terms (it’s simply precision minus scale). Inventing your own term will simply lead to more confusion.

Oh, I’m sorry, Terminus Est. I think I unintentionally misled you. I learned those terms in a database class (in which we used Oracle), but this particular chunk of code is not in Oracle, and the developer is not a database developer. (Although we do use Oracle, and SQL, and have database developers, for whom I do write specs from time to time.)

I was kind of kidding about inventing my own term … although words have to come from somewhere, right? :wink:

Gotcha.

Many terms used in computer science/programming/development are either well-defined or have well-understood implications. The meaning and usage may vary depending on the environment you are in. Don’t reach for a dictionary; use the relevant computer texts and manuals. For example, C (and related languages) has no concept of precision and scale as defined in Oracle. However, you can split a floating-point value into its integer part and its fractional part (in C, the standard library function modf() does exactly that). So you can talk to a C programmer about the int part and the frac part of a float, and they should be able to understand you.
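To make the idea concrete, here’s a minimal sketch in Java (one of the languages mentioned later in the thread; the sample value is made up):

```java
public class IntFracDemo {
    public static void main(String[] args) {
        double value = 1234.56;            // sample value: 4 digits before the point, 2 after

        long intPart = (long) value;       // truncates toward zero -> 1234
        double fracPart = value - intPart; // the remainder -> roughly 0.56
                                           // (floating point, so not exactly 0.56)

        System.out.println("int part:  " + intPart);
        System.out.println("frac part: " + fracPart);
    }
}
```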

If you want our help, you’ll need to give us more details about your development environment.

:slight_smile:

Well, let’s see. We use both Oracle and SQL environments. We use Java, VB, and Perl, plus Unix scripts, Jasper for reporting, and ASP for web pages, and we will probably begin using .NET sometime soon. I’m sure there are other languages they’re using that I don’t even know about.

Usually, though, I don’t know exactly what they’re going to use, and our IT director doesn’t want that stuff in the specs anyway. He basically wants requirements that are understandable from a business perspective, yet clear and defined enough so that the developers can work from them. Hence my search for an “English” term.

The digits before the decimal point make up the whole-number part. Those after the decimal point make up the fractional part of the number.

Hope that helps. If I run across anything more definitive, I’ll post it.

For 30-some years, COBOL programmers have been using something called a “Picture Clause” to communicate this between programmers, system designers, etc.

An example of this for the number you gave would be PIC 9999.99; or, for an internal field where the decimal point is not stored or displayed, PIC 9999v99. A common shorthand, especially for large numbers, would be PIC 9(4)v99.

I’d suggest you use something like this to communicate with your programmers. It’s pretty simple and straightforward, easy for them to understand, and difficult for anyone involved to misunderstand.

As a database programmer/designer with over a decade of experience, I highly recommend t-bonham’s approach. Every programmer is familiar with this and will immediately understand and be able to implement your desired specs.

So, if you want 4 digits to the left and 2 digits to the right, you would list your spec as “9999.99”. Keep in mind that a “-” (minus sign) uses up one of those places, so the picture “9999.99” will allow the range of -999.99 to 9999.99.

If you really want to go hardcore, there are some old xBase formats that differentiate between forced leading and trailing zeroes. I believe (but don’t quote me on this) that 9999.99 will display 1 as 1.00, while ####.## will display 1 as 0001.00.

Also, very early on, the comma became standard, so a picture of 9,999.99 is perfectly valid, and will not show a comma in numbers less than one thousand.
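If you want a picture to be mechanically checkable, it can even be translated into a regular expression. Here’s a rough Java sketch; the pictureToRegex helper is my own illustration (not a standard API), and it handles only the ‘9’, comma, and decimal-point symbols discussed above:

```java
import java.util.regex.Pattern;

public class PictureCheck {
    // Hypothetical helper: translate a simple display picture such as
    // "9,999.99" into a regex. Only '9', ',' and '.' are handled; real
    // COBOL pictures (S, V, Z, 9(n), etc.) are far richer than this.
    static Pattern pictureToRegex(String picture) {
        StringBuilder re = new StringBuilder();
        for (char c : picture.toCharArray()) {
            switch (c) {
                case '9': re.append("[0-9]"); break; // one required digit position
                case ',': re.append(",?"); break;    // comma appears only in large numbers
                case '.': re.append("\\."); break;   // literal decimal point
                default:  throw new IllegalArgumentException("unsupported symbol: " + c);
            }
        }
        return Pattern.compile(re.toString());
    }

    public static void main(String[] args) {
        Pattern p = pictureToRegex("9,999.99");
        System.out.println(p.matcher("1,234.56").matches()); // true
        System.out.println(p.matcher("0234.56").matches());  // true (comma optional)
        System.out.println(p.matcher("12.50").matches());    // false: too few digit positions
    }
}
```

(It deliberately ignores the minus-sign rule above; enforcing that would take an extra alternative in the pattern.)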

There are several ways to do this, but if you expect to do this sort of thing a lot, you might want to use a notation called BNF (Backus-Naur Form), which is essentially a grammar for defining other grammars.

For example, let’s say we define a set of metasymbols that we can use to describe the allowable contents of a given form field:

‘’ := literal character (‘0’, ‘-’, ‘#’, etc.)
… := range of literal characters (‘0’…‘9’, ‘A’…‘Z’, etc.)
| := alternative (‘A’ | ‘B’ == A or B, etc.)
* := repetition (zero or more instances)
+ := repetition (one or more instances)
[] := zero or one instance only
() := logical group

Using these metasymbols, we can define

n := ‘0’…‘9’
a := ‘A’…‘Z’

and then

ZipCode := nnnnn[‘-’nnnn] == 60613, 53247-1218, etc.

CanadaPostalCode := ana[‘-’]nan == L9T-3P7, L5A4A1, etc.

DollarAmount := [‘$’]n+‘.’nn == 0.37, $119.99, etc.

ArbitraryPrecisionNumber := nn‘.’nnn == 00.000, 23.987, etc.

The notation takes a bit of practice to pick up (it was designed to specify the syntax of programming languages) but it’s extremely useful for describing data fields. Once you’ve defined what an ArbitraryPrecisionNumber is supposed to be, you can refer to it as a noun in your text rather than defining its meaning over and over again.
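Since these particular definitions are flat character patterns, they also translate almost directly into regular expressions, which most of the languages mentioned earlier (Java, Perl, .NET) support. Here’s a rough Java sketch; the translation from the BNF is mine, so treat it as an illustration rather than gospel:

```java
import java.util.regex.Pattern;

public class FieldSpecs {
    // The BNF definitions above rendered as Java regexes (n = [0-9], a = [A-Z]).
    static final Pattern ZIP_CODE   = Pattern.compile("[0-9]{5}(-[0-9]{4})?");
    static final Pattern CA_POSTAL  = Pattern.compile("[A-Z][0-9][A-Z]-?[0-9][A-Z][0-9]");
    static final Pattern DOLLAR_AMT = Pattern.compile("\\$?[0-9]+\\.[0-9]{2}");

    public static void main(String[] args) {
        System.out.println(ZIP_CODE.matcher("53247-1218").matches()); // true
        System.out.println(CA_POSTAL.matcher("L5A4A1").matches());    // true
        System.out.println(DOLLAR_AMT.matcher("$119.99").matches());  // true
        System.out.println(DOLLAR_AMT.matcher("0.37").matches());     // true
        System.out.println(ZIP_CODE.matcher("6061").matches());       // false: too short
    }
}
```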

Usually, I do use an approach similar to a COBOL pic clause, although I had forgotten about the 9(4)v99 style of notation. That WOULD come in handy when discussing larger numbers.

I will discuss these ideas with my programmers and implement whichever one they prefer (within reason).

FYI: The reason this came up is that we had one particular number that took a programmer three or four tries to get right, so I wanted to make sure I was communicating the concept as clearly as possible. You all have given me some great ideas.