
**posted on**

- PerlFAQ Server

July 19, 2008, 7:03 pm

The perlfaq comes with the standard Perl distribution. These postings aim to

reduce the number of repeated questions as well as allow the community

to review and update the answers. The latest version of the complete

perlfaq is at http://faq.perl.org .

--------------------------------------------------------------------

4.2: Why is int() broken?

Your "int()" is most probably working just fine. It's the numbers that

aren't quite what you think.

First, see the answer to "Why am I getting long decimals (eg,

19.9499999999999) instead of the numbers I should be getting (eg,

19.95)?".

For example, this

print int(0.6/0.2-2), "\n";

will on most computers print 0, not 1, because even such simple numbers

as 0.6 and 0.2 cannot be represented exactly by floating-point numbers.

What you think of as 'three' in the above is really more like

2.9999999999999995559.

--------------------------------------------------------------------

The perlfaq-workers, a group of volunteers, maintain the perlfaq. They

are not necessarily experts in every domain where Perl might show up,

so please include as much information as possible and relevant in any

corrections. The perlfaq-workers also don't have access to every

operating system or platform, so please include relevant details for

corrections to examples that do not work on particular platforms.

Working code is greatly appreciated.

If you'd like to help maintain the perlfaq, see the details in

perlfaq.pod.

## Re: FAQ 4.2 Why is int() broken?

Maybe I'm just tired, but where exactly do you get these particular

rounding errors, as I cannot reproduce them:

$ perl5.10.0 -e 'print 19.95, qq;\n;'

19.95

$ perl5.10.0 -e 'print 20.0 - 0.05, qq;\n;'

19.95

$ perl5.10.0 -e 'print 9.975 * 2, qq;\n;'

19.95

$ perl5.10.0 -e 'print 39.9 / 2.0, qq;\n;'

19.95

I get the same results on each one of those command-lines on 5.6.1,

5.8.0, 5.8.2, and 5.8.8.

Curious:

$ perl5.10.0 -e 'print int(0.6/0.2-2), qq;\n;'

1

$ perl5.8.8 -e 'print int(0.6/0.2-2), qq;\n;'

1

But the older Perls I still have around for testing purposes, which have no

64-bit float or int support, all print zero:

$ perl5.8.2 -e 'print int(0.6/0.2-2), qq;\n;'

0

$ perl5.8.0 -e 'print int(0.6/0.2-2), qq;\n;'

0

$ perl5.6.1 -e 'print int(0.6/0.2-2), qq;\n;'

0

I take it this is because I compiled perl5.8.8 and perl5.10.0 with 64-bit

float (and int) support?

$ perl5.10.0 -e 'print 3.0, qq;\n;'

3

Or am I missing something?

$ perl5.10.0 -e 'print sprintf(qq;%.50f;, 3.0), qq;\n;'

3.00000000000000000000000000000000000000000000000000

I would of expected some rounding errors, but this works on all the Perl

5's on my system just the same (only 5.8.8 and 5.10.0 are compiled with

64 bit floats and ints.)

Again I am a bit tired from a long trip so it's entirely conceivable

that I'm missing something obvious here.

--

szr

## Re: FAQ 4.2 Why is int() broken?

You are not outputting with enough precision to see the difference.

Try:

perl5.10.0 -e 'printf "%30.20f\n", 19.95'

perl -e 'printf "%30.20f\n", (0.6/0.2)'

s/would of/would've/

or

s/would of/would have/

That could explain the multiple reveals seen recently.

--

Tad McClellan

email: perl -le "print scalar reverse qq/moc.noitatibaher0cmdat/"

## Re: FAQ 4.2 Why is int() broken?

Tad J McClellan wrote:

[...]

Indeed, this gives me: " 19.95000000000000000069".

Note that the "30" in "%30.20f\n" isn't needed. That just sets the

minimum field width.

It seems the results of that are not uniform:

$ perl5.10.0 -e 'printf "%30.20f\n", (0.6/0.2)'

3.00000000000000000000

$ perl5.8.8 -e 'printf "%30.20f\n", (0.6/0.2)'

3.00000000000000000000

$ perl5.8.2 -e 'printf "%30.20f\n", (0.6/0.2)'

2.99999999999999955591

$ perl5.8.0 -e 'printf "%30.20f\n", (0.6/0.2)'

2.99999999999999955591

$ perl5.6.1 -e 'printf "%30.20f\n", (0.6/0.2)'

2.99999999999999955591

What is it with you and typos? I was rather tired when composing that

post so excuse me for not having perfect grammar.

I'm not sure what you are getting at here. It's just a typing error, one

which is made daily by an awful lot of people.

--

szr


## Re: FAQ 4.2 Why is int() broken?

No, floats in perl are normally "double", i.e., 64 bit. But maybe you've

built perl with "long double" (80, 96, or 128 bit, depending on

platform) support?

% perl -V:nvtype -V:nvsize

nvtype='double';

nvsize='8';

% perl -e 'print int(0.6/0.2-2), qq;\n;'

0

For printing 3? I wouldn't. 3 is exactly representable in binary

(1 * 2^1 + 1 * 2^0). There is no rounding error. But neither 0.6 nor 0.2

are exactly representable (0.6 is 3/5, and you cannot represent 1/5 in

a finite number of binary digits, just like you cannot represent 1/3 in

a finite number of decimal digits), so when you write 0.6/0.2 in your

source code, perl will really compute

round_bin(0.6) / round_bin(0.2)

where round_bin stands for "round to nearest representable number". For

64 bit floats,

round_bin(0.6) = 0.59999999999999997779553950749686919152736663818359375

round_bin(0.2) = 0.200000000000000011102230246251565404236316680908203125

and since 0.6 has been rounded down and 0.2 has been rounded up, the

result of the division must be slightly smaller than 3.

But note that for a different number of binary digits, it is possible

that 0.6 is rounded up, and 0.2 is rounded down, and then the result

would be slightly larger than 3 and then int(0.6/0.2-2) is of course 1.

Only about 95 bazillion previous discussions of this topic in this group

;-).

hp

## Re: FAQ 4.2 Why is int() broken?

Peter J. Holzer wrote:

You are right. Built with 64-bit int, but 96-bit float, the size of a

long double in C:

$ perl5.10.0 -V:nvtype -V:nvsize

nvtype='long double';

nvsize='12';

$ perl5.8.8 -V:nvtype -V:nvsize

nvtype='long double';

nvsize='12';

Seems you need the extra precision that building Perl with "long double"

support (or Math::BigFloat) affords in order to get the expected

(mathematical) result.

Alas, I see the light on this now. Thank you.

Yes, I hate those sort of edge cases.

Good point, once again :-)

--

szr


## Re: FAQ 4.2 Why is int() broken?

No. The extra precision doesn't help. As I argued below, it's just

coincidence that the error isn't noticeable in this case. If you use other

numbers instead of 0.6 and 0.2, you will discover some where the result

is "wrong" even with 96 bits. Indeed you may find some where the result

is correct for 64 bits and wrong for 96 bits.

hp

## Re: FAQ 4.2 Why is int() broken?

Please get it into your head that extra precision *does not* solve

this problem. To express 1/10 in binary you need an infinite number of

digits, just like you need an infinite number of digits to express 1/3

in decimal.

If you need to express decimals exactly, use decimal numbers. If you

need to express rational numbers exactly, use rational numbers.

You mean "bc". I'm not sure what you mean by "moving" calculations. Bc

does decimal fixed point arithmetic with arbitrary precision. There are

a number of perl modules which do arbitrary precision: Math::BigInt,

Math::BigFloat and Math::BigRat.

hp

## Re: FAQ 4.2 Why is int() broken?

Peter J. Holzer wrote:

Maybe you meant to write something else, as 1/10 is 0.1 and does not

require an infinite number of digits; it needs just one decimal digit

:-)

Secondly, for expressions like 1/3, yes, the floating point (decimal)

form cannot be fully expressed in binary. However, in the form of a

rational number, it can be stored in an accurate binary form, as you're

really storing integers - the numerator and the denominator - although

I'm not sure if that would be faster or slower than actual floating

point calculations; I've never done the numbers, and I'm sure it may

vary depending on the processor, but I have seen libraries, such as in C

or C++, that use rational instead of floating point numbers.

If what you're after is accurate math, then representing your numbers as

rational (n/d) seems to be a rather trivial way of achieving that; the

math part most people learned in basic arithmetic. I would think the

hardest part would be writing an efficient reduction algorithm, to, for

example, turn 10/18 into 5/9.

Hey, there you go; what I just wrote summed up nicely :-)

Yep, that's the one.

Yeah that's what I meant by "moving", as in it just keeps on calculating

on and on and on given how many places.

--

szr


## Re: FAQ 4.2 Why is int() broken?

Yes, it does.

Irrelevant because your typical computer does not use decimal numbers

but binary numbers, just like Peter said.

No, it cannot. Neither as a decimal number nor as a binary number.

That is not a number but an expression. Mathematically they may be the

same. However, for actual computations they are not, as any introduction

to Computer Numerics will tell you.

Yep, there are special mathematical packages out there for symbolic

arithmetic. They can handle fractions and typically much more from

simple sums all the way up to integrals and difference quotients.

Smallest common denominator, rather simple. But any operation on such

symbolic numbers is time-consuming because it is not supported by

hardware but needs to be emulated in software.

Again, your typical computer does not use decimal numbers but binary

numbers. There have been a few CPUs with built-in support for decimal

calculations, but they weren't very successful.

I suppose you meant "use fractions".

Again, except for maybe very specialized high-end computers fractions

are not supported by the hardware.

jue

## Re: FAQ 4.2 Why is int() broken?

It doesn't, but I didn't write that. I wrote that it takes an infinite

number of *binary* digits.

In binary, 1/10 is 0.00011001100110011...

Actually, I didn't write what "a typical computer" uses, just what

happens when a binary system is used (which is what perl uses on most

(all?) platforms - COBOL normally uses decimal).

Actually it can, at least if you translate the mathematical definition

of a rational number into the straightforward implementation of storing

two integers: (1, 3) in this case.

It isn't much different from a floating point number, which is also

stored as two integers (m, e) where the value is m*2^(e-b).

There is also one in Perl and I already mentioned it: Math::BigRat resp.

bigrat. That almost certainly is slow, because it not only uses rational

numbers, it also uses arbitrary precision. (But I've never really used

it so I don't know).

Yup, was probably my third or fourth BASIC program. Used as an example

to introduce GOTO ;-).

hp

## Re: FAQ 4.2 Why is int() broken?

Peter J. Holzer wrote:

For some reason I thought that applied to decimal expansions that

repeat infinitely, like 1/3 => .3333..., not decimals with a fixed

number of digits, which 1/10 => 0.1 is:

$ perl -e 'my $x = 1/10; print unpack("b64", pack("d", $x)), "\n"'

0101100110011001100110011001100110011001100110011001110111111100

$ perl -e 'my $x = 1/2; print unpack("b64", pack("d", $x)), "\n"'

0000000000000000000000000000000000000000000000000000011111111100

$ perl -e 'my $x = 1/4; print unpack("b64", pack("d", $x)), "\n"'

0000000000000000000000000000000000000000000000000000101111111100

$ perl -e 'my $x = 1/8; print unpack("b64", pack("d", $x)), "\n"'

0000000000000000000000000000000000000000000000000000001111111100

I guess you never know when something you've overlooked or something

you thought you knew can surprise you. :-)

Even among different types of computers floating point calculations are

not all done the same. For instance, I have a graphing calculator, which

is essentially a tiny computer with a 6 MHz CPU, and yet it can do many

floating point calculations more accurately than my dual core cpu

desktop. Granted, it's a calculator, but I have often wondered why it

seems to be able to handle such calculations better than a CPU that's

over 400 times faster.

Yes this is what I meant.

Good to know.

Ah yes, I forgot about that one.

I said it would have been the hardest part, not that it was actually

difficult :-)

I think we've all had to write such algorithms at some point :-)

--

szr


## Re: FAQ 4.2 Why is int() broken?

<--><--><--><--><--><--><--><--><--><--><--><-->

Um, the repeating pattern is clearly visible here. The pattern is

broken on the left side because of rounding (you printed the pattern in

little endian, so the least significant digit is on the left and the,

er, binary point is just to the right of the rightmost ">" I made).

1/2, 1/4, 1/8 are all exactly representable in binary, because they are

integral multiples of powers of two. 1/10 is not, because it is not an

integral multiple of powers of two. 1/3 is not representable in decimal

because it is not an integral multiple of a power of 10. And 1/2 is not

representable in base-15 because it is not an integral multiple of a

power of 15 (but 1/3 and 1/5 are). For any given base, there are always

only a small (but infinite :-)) number of fractions which are

representable in that base.

I'm not up to date with calculators (the last one I bought was an HP-48

20 years ago), but frankly, I doubt that it is more accurate. It has

probably less than 15 digits of mantissa. It probably does its

computations in decimal, which makes them even less accurate, but the

errors *match your expectations*, so you don't notice them.

Different target. Calculators can be slow - they do only primitive

computations, and if they finish them in a short but noticeable time it's

still "fast enough". A modern CPU is supposed to be able to do hundreds

of millions such computations per second. Calculators are also rarely

used for computations where accuracy is much of an issue - 8 or 12

digits are enough. But they are used by people who expect 0.1 * 10 to be

1.0 but aren't overly surprised if 1/3*3 is 0.9999999.

In short it is able to "handle such calculations" because it has been

designed to do so. Floating point hardware has been designed to give the

most accurate result in a very short time. You can always implement

decimal arithmetic yourself or use a library[1] - it will still be a lot

faster than your calculator.

hp

[1] Incidentally, I know of one modern processor (the IBM Power6) which

implements full decimal floating point arithmetic in hardware.

## Re: FAQ 4.2 Why is int() broken?

Peter J. Holzer wrote:

I put that one in to show a comparison between the above and the ones

below. But you covered it pretty well in your following paragraph.

True enough.

My point was, on graphing calculators I've used (which has mainly

consisted of Texas Instrument TI-8*'s), the results are what you'd

expect them to be, math-wise, for the most part. That is, what you'd

expect if you were to do it using plain ol' pencil and paper.

Testing on a TI-89 (which does have a hefty amount of precision),

entering "1/3 * 3.0" yields "1." and so does "0.1 * 10", and Perl seems

to give the same results. On what hardware, language, or such, do you end

up getting 0.99999.. from 1/3*3.0 ?

As I expect it would be.

I remember back in the days of the 486 processor you had a separate

"Math Co-Processor". I wonder what would happen if they reinstituted

that idea, but on a more powerful/modern scale? I know the GPU on modern

graphics cards in some ways fills that role, because intense graphical

programs (like newer Games) make big use of FP calculations in real

time, but would it be so bad to have an extra math processor alongside

the CPU like in the days of yore, to give an overall boost in FP

efficiency?

--

szr


## Re: FAQ 4.2 Why is int() broken?

Please note the words you are using yourself: "expect" and "for the most

part". A calculator uses decimal arithmetic, just like you do when you use

pencil and paper. So it will produce the correct results in the same

cases and it will make the same errors - it will do what you expect. But

that doesn't mean it is more accurate in general - only for decimal

numbers.

For example on my HP-48 calculator. It uses 12 decimal places, so 1/3 is

0.333333333333. 0.333333333333 * 3 is clearly 0.999999999999.

I don't have my old TI-57 at hand (it's about 600 km away), but IIRC it

used 11 decimal digits and displayed only 8. So 1/3 would be

0.33333333333 (displayed as 0.3333333), multiplication with 3 would then

yield 0.99999999999 (displayed as 1). So, you see "1", but the error is

still there. If you subtract 1.0, you will get 1E-11.

(deja vu: I had almost exactly this discussion with my room mate 22

years ago: He claimed that his TI was more accurate than my HP because

1/3*3 *displayed* 1 instead of 0.999999999999. Changing the calculation

into 1/3*3-1 showed otherwise: He got 1E-11, I got 1E-12, which is

clearly the smaller error.)

For 64-bit binary floating point numbers, 1.0/3.0 is

0.333333333333333314829616256247390992939472198486328125

times 3 would be .999999999999999944488848768742172978818416595458984375

exactly, but that cannot be represented in 53 bits of mantissa, so it

needs to be rounded - up in this case, so the result is indeed 1.0

exactly. So here we have a case where a calculation in decimal cannot

possibly produce the correct result, but in binary it will (only by

chance, granted. But even if the result had been rounded down the error

would still be only ~ 1.1E-16, quite a bit lower than on the HP-48 or

TI-57).

You mean the days *before* the 486 processor.

The 486 was the first Intel processor which had an integrated FP

unit instead of a separate coprocessor.

The "math coprocessor" is still there. It's just on the same chip. And

the FP unit of a Pentium 4 is much, much more powerful than the 387 was.

hp

## Re: FAQ 4.2 Why is int() broken?

Peter J. Holzer wrote:

However, the 486SX didn't, unless you added a 487 (which was just a

sneaky way for Intel to sell spoiled 486s at a reduced price).

However, Intel is dragging its feet on adding decimal floating point,

per IEEE-754r.

--

John W. Kennedy

"Sweet, was Christ crucified to create this chat?"

-- Charles Williams. "Judgement at Chelmsford"


## Re: FAQ 4.2 Why is int() broken?

And z/Architecture.

The very existence of this thread (and the fact that it grows out of a

FAQ entry) is an illustration of the demand. At least since the first

release of MS BASIC, this has been a problem; mathematical naïfs believe

that reality is decimally quantized, and they always will, and, on the

other hand, even fairly sophisticated people, including programming-language

designers, have problems dealing with noninteger fixed-point. (The

original COBOL designers screwed the pooch so badly on this point that

it took until 2002 to come up with an /optional/ fix.)

--

John W. Kennedy

"Only an idiot fights a war on two fronts. Only the heir to the

throne of the kingdom of idiots would fight a war on twelve fronts"

-- J. Michael Straczynski. "Babylon 5", "Ceremonies of Light and Dark"

## Re: FAQ 4.2 Why is int() broken?

z/Architecture isn't exactly mass-market.

I think the demand cannot have been that great or it would have been

met. How many programming languages provide a native decimal type? Or

even a well-integrated library ("use decimal" in Perl similar to "use

bigint" etc. wouldn't be a big deal - but does it exist?). The 8-bit and

16-bit microprocessors of the 1970's often had special instructions to

help with decimal (BCD) arithmetic. Later designs didn't have them, or

if they kept them for binary compatibility (like the x86 series) they

were very slow and never used. (IEEE-754r may change that, but I

wouldn't hold my breath).

One of the reasons why decimal arithmetic never really caught on in

computing may be that the mathematically (or rather numerically) naive

not only have a problem with binary floating-point, they don't think

about these matters at all (if they did, they wouldn't be naive). So

they don't ask for decimal arithmetic (which would be easy to provide).

Those who do think about these matters conclude that either there's an

easy workaround (like scaling by a power of 10) or the problem wouldn't

actually become any simpler with decimal arithmetic, so they don't ask

for it either, much less implement it.

What's screwed about COBOL fixed point arithmetic?

hp
