
## Re: Perl bug ;

Yup. I learned that little trick from a tax advisor on my first

programming job, almost 30 years ago.

Yes, but this is irrelevant here, because:

Yes. Or rather, some integer values have exact representations in a

floating point format, just like some integers can be represented in a

fixed length int format (Since there are infinitely many integers, it is

impossible to represent all of them in a finite number of bits).

In both cases there is a contiguous range of representable integers. A

32 bit signed int can represent [-2147483648, 2147483647], a 32 bit

unsigned int can represent [0, 4294967295]. A 64 bit IEEE 754 FP number

can represent all integers in [-9007199254740992, +9007199254740992].

For larger ranges, there are holes: For [-2^54, +2^54], it can
represent all even integers, for [-2^55, +2^55] it can represent all
integers divisible by 4, and so on. And for smaller ranges, it can
represent non-integers: For [-1, +1] it can represent all multiples of
2^-53.
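The edge of that contiguous range is easy to see from Perl itself. A minimal sketch, assuming Perl's NV type is an IEEE 754 double (`2.0**53` is used rather than `2**53` to force floating-point rather than 64-bit integer arithmetic):

```perl
# At 2**53 the spacing between adjacent doubles becomes 2,
# so 2**53 + 1 rounds straight back down to 2**53.
my $limit = 2.0**53;    # 9007199254740992, forced to be an NV (double)

print "below the limit, +1 is distinguishable\n" if $limit - 1 != $limit;
print "at the limit, +1 is absorbed\n"           if $limit + 1 == $limit;
print "but +2 (an even integer) survives\n"      if $limit + 2 != $limit;
```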

hp

--

_ | Peter J. Holzer | Curse of electronic text processing:

|_|_) | | You keep revising your text until

| | | hjp@hjp.at | the parts of the sentence no longer

__/ | http://www.hjp.at/ | fit together. -- Ralph Babel

## Re: Perl bug ;

Devel::Peek::Dump will do this. What happens here type-wise is roughly
that $d starts out as a scalar with an integer value. It acquires a

floating-point value when the division is performed. Since the

following subtraction is again an integer operation, the floating point

flags of the scalar are cleared and the integer value updated. This

integer value is then again converted to a floating point value for the

division and so on.

Yes. That's not irrelevant here because the whole idea behind this is to

scale the values the calculation is performed with such that floating

point arithmetic can be avoided.

Integers also have an exact representation (within the limits imposed
by the number of bits) in floating-point format, because every integer
can be expressed as a sum of powers of 2, and a floating-point number is
encoded as 1 plus a sum of negative powers of 2, times some power of
two. Logically, this amounts to doing a left-shift of the bits in the
significand until they're all 'above the point' (x * (2 ** n) == x << n).
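For anyone who wants to watch those type flags flip, Devel::Peek ships with core Perl. A minimal sketch (the specific values are arbitrary, chosen only for illustration):

```perl
use Devel::Peek;

my $d = 10;     # starts life with an integer (IV) slot, IOK flag set
Dump($d);       # Devel::Peek prints the scalar's internals to STDERR

$d = $d / 4;    # 2.5: division fills the floating-point (NV) slot
Dump($d);       # now the NOK flag is set instead of IOK
```

The `Dump` output goes to STDERR, so it is visible on the terminal but does not interfere with whatever the script prints to STDOUT.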

## Re: Perl bug ;

Because it wouldn't solve anything. There are many more rational numbers

than floating point numbers and that is not going to change, no matter

how many digits you are using.

Example from the decimal world:

0.3333333333 + 0.3333333333 + 0.3333333333 = 0.9999999999

It doesn't matter if you are using 3 digits or 10 digits or 500 digits.

If you meant "use a specific precision", then I agree. That is the

standard solution.

Yes, this is the other standard solution for exact results.
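Both standard solutions fit in a few lines of Perl. A sketch, using `0.1 + 0.2` (which reliably differs from `0.3` in binary) for the rounding case, and the core `bigrat` pragma for exact rationals:

```perl
# Solution 1: round to a chosen precision before printing or comparing.
my $sum = 0.1 + 0.2;                 # actually 0.30000000000000004 as a double
printf "%.10f\n", $sum;              # prints 0.3000000000

# Solution 2: exact rational arithmetic via the core bigrat pragma
# (its constant overloading applies from the point of use).
{
    use bigrat;
    my $third = 1/3;                         # kept as the exact fraction 1/3
    print $third + $third + $third, "\n";    # exactly 1, no rounding at all
}
```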

Well, what do you expect when asking a question that _is_ asked
frequently?

And it should be very well-known anyway because it _REALLY_ has nothing to
do with Perl but is inherent to modern computers and applies to any
common programming language on any common computer using any common OS.

jue

## Re: Perl bug ;

Well, I expected more a response like long doubles "are slower" or
"take more space", but not these objections. The thing
is that a 32-bit computer using long doubles could be more
precise than a 64-bit computer. The better option ought to be the one used.

But thanks for the explanation.

--

http://www.telecable.es/personales/gamo/

## Re: Perl bug ;

On 15/08/14 at 01:00, gamo wrote:

More on reasons: standardization of results between versions and
machines. OK, it could all be debatable. But despite all that, the
advantage is clear: the error becomes less important. Take your example
with decimals.

case a) 2 digits of precision

1/3 + 1/3 + 1/3 = 0.33 * 3 = 0.99 -> error = 0.01

case b) 3 digits of precision

1/3 + 1/3 + 1/3 = 0.333 * 3 = 0.999 -> error = 0.001

That's a big deal: for every digit of precision we use, we get one digit less of error.

--

http://www.telecable.es/personales/gamo/


## Re: Perl bug ;

No, it is not. It is rather luring the nincompoops into a false sense of

safety. Wrong is still wrong.

You have to understand how floating point computations work in order to

use them safely. Otherwise the next question will be "Why do I get

'false' when comparing 1/3 + 1/3 + 1/3 with 1?"
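The usual answer to that next question is to compare against a tolerance instead of testing exact equality. A sketch, using `0.1 + 0.2` (which reliably differs from `0.3` in binary); the `1e-9` tolerance is an arbitrary choice for illustration, the right value is problem-dependent:

```perl
my $sum = 0.1 + 0.2;        # actually 0.30000000000000004 in a double

# Exact comparison fails:
print "not equal\n" if $sum != 0.3;

# Comparison within a tolerance succeeds:
my $eps = 1e-9;             # problem-dependent tolerance
print "equal within eps\n" if abs($sum - 0.3) < $eps;
```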

You have to know that in floating point arithmetic time-honoured

mathematical laws simply don't apply. Even something as simple as

Number1 + (a million times Number2) equals
Number1 + Number2 + Number2 + ... (a million separate additions)

is not correct any longer. With number1 being very large and number2

being very small this becomes obvious very fast.
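That effect is easy to reproduce. A sketch with a deliberately extreme pair of magnitudes (the specific values are chosen so the effect is total, not merely visible):

```perl
my $big   = 1e16;      # spacing between adjacent doubles here is 2
my $small = 0.0001;    # far below half that spacing

# Adding $small a million times, one at a time: every single addition
# rounds straight back to $big, so all million contributions vanish.
my $sum = $big;
$sum += $small for 1 .. 1_000_000;
print "one at a time: ", ($sum == $big ? "all lost" : "kept"), "\n";

# Adding the same total in one step survives: 100 is representable here.
print "in one step:   ", ($big + 1_000_000 * $small != $big ? "kept" : "lost"), "\n";
```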

http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems has a
short overview of some well-known inherent limitations of floating point
numbers. There are good reasons why Computer Numerics has its own
scientific standing, and you had better have at least a basic understanding,
or you shouldn't use floating point numbers to begin with.

Not to mention that there are extremely few cases where high precision

adds any value in real-world applications. Where do the values for your
floating point numbers come from? A number returned by some
measuring device? How many valid digits does that device generate? 4? 5?
Then why would you deceive your users by displaying 10 or 15 or 20
seemingly valid digits?

Do you really weigh exactly 1.10231117 avoirdupois pounds if the recipe

calls for 1/2kg? Do you really need to know the distance between 2

cities to an accuracy of 20 valid digits?

I am truly hard pressed to find any everyday application where you need

more than maybe 5 valid digits and even that is a bit of a stretch

already. Science may be different but those people better know how to

handle computer numerics anyway.

jue

## Re: Perl bug ;

You have an example at hand: it's called a clock. Any other
analog-to-digital conversion or simulation will need precision. There are
cases in which the error can be measured, and in those it must be 0.0.

For that, there are simple routines that calculate the EPS (epsilon)
of error in your language/compiler/machine combination, and the smaller
it is, the better the computation. As expected, long doubles have
better, smaller EPS errors.

I.e.

    my $eps = 1;
    do {
        $eps /= 2;
    } until (1 + $eps == 1);
    print "$eps\n";

Mine is EPS = 1.11022302462516e-16

with a 64 bit machine. Any hillbilly with perl compiled with long doubles
on a 32 bit machine will get a lower value than that.

Which reminds me to check what other options my distro-perl has.

Best regards.

--

http://www.telecable.es/personales/gamo/

## Re: Perl bug ;

That depends on what you want to measure. The time it takes me to get to

my office? 2 digits are sufficient for that. A downhill race? 5 digits

are sufficient, and one could argue that the 5th digit just adds

pseudo-precision.

The time between the epoch and the posting of this article with 1 second

precision? Yes, you need 10 digits for that, but:

* This time is not as precise as you would think: There have been a

number of leap seconds which aren't reflected in unix time.

* You very rarely need to determine a 40+ year time to 1 second

precision. You need that precision to get a common time scale

on which you can compare events which are much closer together.

Any measurement in the analog world has an inherent measurement error.

If the additional error introduced by rounding to a fp number is well

below this measurement error, any additional precision is pointless.

If you are measuring voltages in the ±1 V range and your voltmeter has

an accuracy of ±1 mV, single precision FP is more than enough. Sure, you

can't store 0.987 exactly, but only 0.986999988555908203125, but the

real value is somewhere between 0.986 and 0.988 and

0.986999988555908203125 is well within that range (this is something

that even electrical engineers, who should know better, sometimes get

wrong).

Yes, although you usually need that extra precision to keep the

accumulated error small, not for the result. This is the case where

higher precision fp numbers really shine. Add a billion values and you
may have just multiplied your rounding error by a billion. But if you are
computing to 15 significant digits, you still have 6 digits left.
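A sketch of that accumulation effect, on a smaller scale so it runs quickly (a million additions rather than a billion; 0.1 is used because it has no exact binary representation):

```perl
# Summing 0.1 a million times accumulates rounding error in the low
# digits, while the high digits stay perfectly usable.
my $sum = 0;
$sum += 0.1 for 1 .. 1_000_000;

my $err = abs($sum - 100_000);
printf "sum = %.10f, error = %.3g\n", $sum, $err;
print "high digits intact\n" if $err < 1e-4;
```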

If you have to measure (as opposed to count) something, the error will

never be 0.0 because there will always be a measurement error. You

cannot measure anything in the real world to infinite precision.

hp


## Re: Perl bug ;

On 16/08/14 at 10:22, Peter J. Holzer wrote:

I don't get it. If I sum up a constant float number one billion times
(American billions), what I get is a pretty random number.
I can't see what digits are left. This is another argument in favor
of long doubles.

--

http://www.telecable.es/personales/gamo/


## Re: Perl bug ;

That depends on the number of digits in the float number, or ...

... using long doubles wouldn't help.

A double precision fp number has about 15 significant digits, so if you

lose the last 9 to rounding errors, your result is still accurate to 6

digits. An 80 bit "long double" has a 64 bit significand, about 19
significant digits, so if you lose the last 9, your result is accurate
to 10 digits. Now ask yourself: Are 6 digits enough or do you need 10?
Or do you need even more? That depends entirely on the problem.

Of course a single precision fp number has only 6 significant digits, so

if you lose 9 digits, your result is entirely random.

(Losing 9 digits from summing up 1 billion numbers is of course just a

ballpark number: That only happens if the result is of the same order as

the terms and all the errors point in the same direction. OTOH you can

lose all significant digits even with 3 terms: E.g. 1 + 1E30 - 1E30.)

Yes, I wrote that:

But note that while 80 bit fp numbers (almost) always give more precise

results than 64 bit fp numbers, 128 bit fp numbers would give even more

precise results. So should we always use 128 bit fp numbers? Or 256 bit

fp numbers, since those give even better results? Where do we stop?

In the end for any generic solution (such as hardware support in a

processor or the generic "number" type in a weakly typed language like

Perl or Javascript) you have to compromise between accuracy and cost.

hp


## Re: Perl bug ;

We need to stop where technology stops us. The processor can handle
well a certain number of bits in an fp number. Beyond that amount,
it can only process a number with considerable overhead.

What? Perl is programmed in C, ergo I don't see a good reason why
it needs to be more imprecise in addition to being slower. This kind of
reasoning adds another brick to the wall around Perl as a decent language.

And it is a decent one.

--

http://www.telecable.es/personales/gamo/

## Re: Perl bug ;

Or you could simply say: if you are using floating point numbers then

better learn how to use floating point numbers. It is an urban myth that

they are easy to deal with. And no, the number of bits for FP numbers

doesn't matter(*) because the issues are inherent in the very concept of

floating point numbers.

*: doesn't matter any more today where ~15 digits are standard. Of

course in the distant past with shorter data types it was a different

story.

Still waiting for your real-world application that demands more than 15

valid digits.

jue

## Re: Perl bug ;

On 16/08/14 at 19:41, jurgenex@hotmail.com wrote:

Time is continuous, a real variable. A realistic film, made of an
integer number of frames, can serve some purposes, but not others.

Call it laziness, but if you are scheduling in minutes, seconds are its
decimal digits, which include milliseconds and so on. I don't know if
it's easy to expand the limited integer capacity and express everything
in its integer version, case by case. I doubt it.

--

http://www.telecable.es/personales/gamo/


## Re: Perl bug ;

That sentence doesn't parse.

With double precision floating point numbers you can schedule any job

over a year with 10 ns precision. Did you really have to schedule them

more precisely? Was the system even able to keep time with such

precision (normally system clocks are only accurate to a few

milliseconds)? Alternatively, you could schedule jobs over 100 years to
1 µs. If you schedule a job for 2114, will it matter whether it starts

at 2114-08-17T09:36:35.1234567 or 2114-08-17T09:36:35.1234562?
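The one-year figure is easy to verify from Perl. A sketch, assuming IEEE 754 doubles (near one year's worth of seconds, the spacing between adjacent doubles is 2^-28 s, roughly 3.7 ns):

```perl
# Resolution of a double near t = one year, measured in seconds.
my $year = 365.25 * 24 * 3600;        # 31557600, exactly representable

print "10 ns step is representable\n" if $year + 1e-8 != $year;
print "1 ns step is rounded away\n"   if $year + 1e-9 == $year;
```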

hp


## Re: Perl bug ;

I can't remember the problem in detail, but it was related to simulated
time, that is, adding 1/60 many times. The problem was solved both by
using sprintf() to round and by using bignum with a certain precision.
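That failure mode and the sprintf() workaround can be sketched in a few lines (1/60 stands in for one tick of simulated time; the one-hour span and the %.6f precision are arbitrary choices for illustration):

```perl
# 1/60 is not exactly representable in binary, so accumulating it
# drifts slightly away from the mathematically exact total.
my $t = 0;
$t += 1/60 for 1 .. 60 * 60;          # one simulated "hour" at 60 ticks/s

printf "raw accumulated value: %.17g\n", $t;   # very close to, but
                                               # usually not exactly, 60

# The sprintf() workaround: round to the precision you actually need
# before comparing or displaying.
my $rounded = sprintf "%.6f", $t;
print "rounded: $rounded\n";                   # 60.000000
```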

--

http://www.telecable.es/personales/gamo/

## Re: Perl bug ;

On 17/08/14 at 10:31, gamo wrote:

Ah! And the problem does not appear if only one task is scheduled;
it arises when whether two tasks tie (or not) depends, incorrectly,
on the fp comparison rather than on a more correct criterion
(e.g. FIFO).

--

http://www.telecable.es/personales/gamo/

