
## Re: how do I get more numbers past the decimal?

The exponent is not signed. It ranges from 0..255 with 0 reserved for

subnormals and 0.0 and 255 for Inf and NaN. The 'normal' range is 1..254.

True. I am explaining the 32-bit float. The 64-bit double has much more

precision with a sign bit, 11 exponent bits and 52 stored mantissa bits (53 significant bits counting the implicit leading bit).

You got me there.

--

Joe Wright

"Memory is the second thing to go. I forget what the first is."

## Re: how do I get more numbers past the decimal?

Uh, I look at 16,777,215 and I count eight decimal digits, not nine.

Since that number is 16,777,215 and not 99,999,999, that means we can really only count on seven decimal digits of accuracy.

Question: you said nine digits. Show me how to represent .987654321 in

the 23 bits available. That is, we would need to represent 987,654,321,

which would be the mantissa, yet the largest number available for the

mantissa is 16,777,215.

The way I learned things, the other eight bits are used for the exponent

in decimal format. One of those bits is used to represent the sign.

Thus the largest (absolute value) number would have an exponent of 128

and the smallest (absolute value) number would have an exponent of -128.

Those exponents are in decimal notation (E128, E-128).

## Re: how do I get more numbers past the decimal?

I didn't say a float could hold any 9-digit number but that any float

can be represented by 9 digits.

The sign of the float value is the msb or b31 of the representation.

No accounting for memory. The exponent is not signed. Its range is from

0..255 with zero reserved for subnormals and 0.0 and 255 for Inf and

NaN. The 'normal' range is therefore 1..254 as below.

FLT_MAX

01111111 01111111 11111111 11111111

Exp = 254 (unbiased: 128 = 10000000)

Man = .11111111 11111111 11111111

3.40282347e+38

FLT_MIN

00000000 10000000 00000000 00000000

Exp = 1 (unbiased: -125 = 10000011)

Man = .10000000 00000000 00000000

1.17549435e-38

--

Joe Wright

"Memory is the second thing to go. I forget what the first is."

## Re: how do I get more numbers past the decimal?

Joe Wright wrote:

Strictly speaking, no, the exponent is not signed. But it is offset by

127 - IOW, you need to subtract 127 from the exponent value to get the

"real" exponent. See http://en.wikipedia.org/wiki/Binary32 for details.

--

==================

Remove the "x" from my email address

Jerry Stuckle

JDS Computer Training Corp.

jstucklex@attglobal.net

==================


## Re: how do I get more numbers past the decimal?

It depends on how you look at it, where you think the binary point is.

In my world the binary point is to the left of b23 such that the

mantissa is always less than 1. In this case I assume a bias of 126. If

you assume the imaginary binary point at b22 then the bias becomes 127.

00111111 10000000 00000000 00000000

Exp = 127 (unbiased: 1 = 00000001)

Man = .10000000 00000000 00000000

1.00000000e+00

--

Joe Wright

"Memory is the second thing to go. I forget what the first is."

## Re: how do I get more numbers past the decimal?

There is a subtle, but important, distinction between "floating point numbers

on a 32 bit machine" and "32 bit floating point numbers". The IBM System/360

and 370 mainframes were 32-bit machines, yet they supported both 32- and

64-bit floating point numbers.

## Re: how do I get more numbers past the decimal?

Doug Miller wrote:

And, in fact, they supported binary coded decimal, with values of

arbitrary length (up to 255 digits, IIRC).

--

==================

Remove the "x" from my email address

Jerry Stuckle

JDS Computer Training Corp.

jstucklex@attglobal.net

==================


## Re: how do I get more numbers past the decimal?

That's one format. Other common (IEEE standard) formats are 64

bits and 80 bits. Yes, on a 32-bit machine, with Intel and AMD x86

architectures being prominent examples. In fact the same floating-point

formats are used on an (ancient) 8086/8087 combination, which is

generally referred to as a

16-bit processor. There's also the

(ancient) 8088/8087 combination, which by one attribute, the width

of the external data bus, makes it an

8-bit processor (but still

with 32-bit, 64-bit, and 80-bit floating-point formats).

PHP implementations tend to use IEEE doubles on machines that have

IEEE doubles.

32-bit floating point numbers tend to have not enough precision to

do much with them, especially after taking into account cascading

errors in a complicated calculation. Ok, maybe you could calculate

(for some USA states) sales tax on something cheap, but not on a

company jet or high-end luxury car, without having to resort to

multi-precision arithmetic.

It probably doesn't matter.

Did you know it's possible to do arbitrary-precision math (a dozen

to thousands of digits) on an

8-bit machine like the Intel 8080?

Ok, it's not done in hardware, and it may be as slow as an old desk

calculator (partly because such a processor or something even wimpier

(e.g.

4 bits) might be used in an old desk calculator). The

bit-ness of a machine is more a marketing attribute than something

you could calculate from the chip schematic.

The bit-ness of a machine is often determined from the following

attributes (which are sometimes arguable either way), but rarely

will they all agree with each other:

- Width of the general registers

- Width of the internal data bus

- Width of the external data bus (to access memory)

- Width of the internal address bus

- Width of the external address bus (some Pentiums are

36 bits by this rule)

## Re: how do I get more numbers past the decimal?

gordonb.v6334@burditt.org (Gordon Burditt) wrote:

[snip]

The only one of these that is relevant is the width of the general

registers. Change any of the others, and the programmer won't notice.

And it's the programmer that's important in this context. The effect of

changing the others is to change the apparent speed of the machine (if

the data bus is 8 bit rather than 16, more cycles are needed to fetch

something), or the amount of memory that might be addressed.

--

Tim

"That excessive bail ought not to be required, nor excessive fines imposed,

nor cruel and unusual punishments inflicted" -- Bill of Rights 1689


## Re: how do I get more numbers past the decimal?

Tim Streater wrote:

Depends on the programmer.

Many would notice speed or RAM space issues, and register size is usually

hidden by the language you are writing in. The whole point of a language

is to conceal the hardware from the programmer, including its native

integer size.


## Re: how do I get more numbers past the decimal?

As I said.

Unless it's assembler, which is what I was referring to. Obviously, for

a higher level language,

all of the above is hidden, broadly speaking.

--

Tim

"That excessive bail ought not to be required, nor excessive fines imposed,

nor cruel and unusual punishments inflicted" -- Bill of Rights 1689

## Re: how do I get more numbers past the decimal?

wrote:

But the different languages reveal more or less of the underlying

mechanics, yes? C reveals more than Java which reveals more than PHP?

I'm curious if I should pursue this in PHP, or try to switch to

something else? My Java is rusty, but this might be a reason to dust

it off and try again? I recall that Java's BigDecimal class is well

documented and works the same on all machines. Would it be any more

reliable than PHP? I tend to reach for PHP first, whenever I want to

write a script, because it is what I work in most, though I realize

that mathematical equations are not its specialty. For now, I'm playing

with very simple equations, for instance:

x[next] = rx(1-x)

but I need to figure out how to get reliable numbers, in terms of the

precision past the decimal, otherwise these explorations are

meaningless.

That particular equation I copied out of James Gleick's book on Chaos,

an excerpt of which I just posted to my blog, which I repeat here in

case the context is interesting to anyone.

--------------------------------------

An ecologist imagining real fish in a real pond had to find a function

that matched the crude realities of life - for example, the reality of

hunger, or competition. When the fish proliferate, they start to run

out of food. A small fish population will grow rapidly. An overly

large population will dwindle (they will starve).

...In the Malthusian scenario of unrestrained growth, the linear

growth function rises forever upward. For a more realistic scenario,

an ecologist needs an equation with some extra term that restrains

growth when the population becomes large. The most natural function to

choose would rise steeply when the population is small, reduce growth

to near zero at intermediate values, and crash downward when the

population is very large. By repeating the process, an ecologist can

watch a population settle into its long-term behavior - presumably

reaching some steady state. A successful foray into mathematics for an

ecologist would let them say something like this: Here's an equation;

here's a variable representing the reproductive rate; here's a

variable representing the natural death rate; here's a variable

representing the additional death rate from starvation or predation;

and look - the population will rise at this speed until it reaches

that level of equilibrium.

How do you find such a function? Many different equations might work,

and possibly the simplest modification of the linear, Malthusian

version is this:

x[next] = rx(1-x)

x[next] is the population next year.

Again, the parameter r represents a rate of growth that can be set

higher or lower. The new term, 1-x, keeps the growth within bounds,

since as x rises, 1-x falls. Anyone with a calculator could pick some

starting value, pick some growth rate, and carry out the arithmetic to

derive next year's population.

For convenience, in this highly abstract model, "population" is

expressed as a fraction between zero and one, zero representing

extinction, one representing the greatest possible population of the

pond.

So begin: Choose an arbitrary value for r, say, 2.7, and a starting
population of .02. One minus .02 is .98. Multiply by 0.02 and you get
.0196. Multiply that by 2.7 and you get .0529. The very small starting
population has more than doubled. Repeat the process, using the new
population as the seed, and you get .1353. The population rises to
.3159, then .5835, then .6562 - the rate of increase is slowing. Then,
as starvation overtakes reproduction, .6092. Then .6428, then .6199,
then .6362, then .6249. The numbers seem to be bouncing back and
forth, but closing in on a fixed number: .6328, .6273, .6312, .6285,
.6304, .6291, .6300, .6294, .6299, .6295, .6297, .6296, .6296, .6296,
.6296, .6296, .6296. Success!

[skipping to page 69]

Robert May was a biologist. His interests at first tended toward the

abstract problems of stability and complexity, mathematical

explanations of what enables competitors to coexist. But he soon began

to focus on the simplest ecological questions of how single

populations behave over time.

...Once, in fact, on a corridor blackboard he wrote the equation out

as a problem for the graduate students. It was starting to annoy him.

"What the Christ happens when lambda gets bigger than the point of

accumulation?" What happened, that is, when a population's rate of

growth, its tendency toward boom and bust, passed a critical point. By

trying different values of this nonlinear parameter, May found that he

could dramatically change the system's character. Raising the

parameter meant raising the degree of nonlinearity, and that changed

not just the quantity of the outcome, but also its quality. It affected

not just the final population at equilibrium, but also whether the

population would reach equilibrium at all.

When the parameter was low, May's simple model settled at a steady

rate. When the parameter was higher, the steady state would break

apart, and the population would oscillate between two alternating

values. When the parameter was very high, the system - the very same

system - seemed to behave unpredictably. Why? What exactly happened at

the boundaries between the different kinds of behavior? May couldn't

figure it out. (Nor could the graduate students.)

May carried out a program of intense numerical exploration into the

behavior of this simplest of equations... It seemed incredible that

its possibilities for creating order and disorder had not long since

been exhausted. But they had not. He investigated hundreds of

different values of the parameter, setting the feedback loop in motion

and watching to see where - and whether - the string of numbers would

settle down to a fixed point. He focused more and more closely on the

critical boundary between steadiness and oscillation. It was as if he

had his own fish pond, where he could wield fine mastery over the

"boom-and-bustiness" of the fish. Still using the logistic equation
x[next] = rx(1-x), May increased the parameter as slowly as he could.

If the parameter was 2.7, then the population would be .6292. As the

parameter rose, the final population rose slightly too, making a line

that rose slightly as it moved left to right on the graph.

Suddenly, though, as the parameter passed 3, the line broke in two.

May's imaginary fish population refused to settle down to a single

value, but oscillated between 2 points in alternating years. Starting

at a low number, the population would rise and then fluctuate until it

was steadily flipping back and forth. Turning up the knob a bit more -

raising the parameter a bit more - would split the oscillation again,

producing a string of numbers that settled down to four different

values, each returning every fourth year. Now the population rose and

fell on a regular four-year schedule. The cycle had doubled again -

first from yearly to every two years, and now to four. Once again, the

resulting cyclical behavior was stable; different starting values for

the population would converge on the same four year cycle.

With parameters of 3.5, say, and a starting value of .4, then May

would see a string of numbers like this:

.4000, .8400, .4704, .8719,

.3908, .8332, .4862, .8743,

.3846, .8284, .4976, .8750,

.3829, .8270, .4976, .8750,

.3829, .8270, .5008, .8750,

.3828, .8269, .5009, .8750,

.3828, .8269, .5009, .8750,

.3828, .8269, .5009, .8750.

As the parameter rose further, the number of points doubled again,

then again, then again. It was dumbfounding - such a complex behavior,

and yet so tantalizingly regular. "The snake in the mathematical

grass," as May put it. The doublings themselves were bifurcations, and

each bifurcation meant that the pattern of repetition was breaking

down a step further. A population that had been stable would alternate

between different levels every other year. A population that had been

alternating on a two year cycle would now vary on the third and fourth

years, thus switching to period 4.

These bifurcations would come faster and faster - 4, 8, 16, 32... -

and suddenly break off. Beyond a certain point, the "point of

accumulation," periodicity gives way to chaos, fluctuations that

never settle down at all. Whole regions of the graph are completely

blacked in. If you were following an animal population governed by

this simplest of nonlinear equations, you would think the changes from

year to year were absolutely random, as though blown about by

environmental noise. Yet in the middle of this complexity, stable

cycles suddenly return. Even though the parameter is still rising,

meaning that the nonlinearity is driving the system harder and harder,

a window will suddenly appear with a regular period: an odd period,

like 3 or 7. The pattern of changing population repeats itself on a

three year or seven year cycle. Then period doubling bifurcations

begin again, at a faster rate, rapidly passing through cycles of 3, 6,

12... or 7, 14, 28..., and then breaking off once again to renewed

chaos.


## Re: how do I get more numbers past the decimal?

Look at the printf() function (in PHP or C) if you want to know how
to PRINT numbers to different precisions.

Decimal calculations are not necessarily more precise, but they

fit human expectations more when they are applied to things

that come in decimal, like dollars-and-cents amounts.

From a hardware point of view, many machines use IEEE doubles which

provide 15 significant digits. C, PHP, and Java will likely all

use the hardware.

If you need more than 15 significant digits, I'll suggest that your

problem is so unstable that your model is meaningless. One fish

getting struck by lightning could send the results in an entirely

different direction. Using infinite-precision math might fix the

problem with examining the MATH, but it won't do much good in
studying the fish.

This sort of resembles a problem taught in engineering school. But

you need to extend the model. The engineering school problem

involved deer and wolves. You model both the populations of deer

and of wolves. I don't recall the exact constants used. Your model

would need to model the food supply also, unless you're saying it's

so huge it doesn't matter.

Deer in the absence of wolves have a population growth of, say, 10%

per year. Wolves in the absence of deer are starving and the

population declines 20% per year. The variables D and W represent

the number of deer and wolves (in multiples of healthy adult deer

and wolves, so a D = 0.5 represents a young deer or a starving adult

deer). The model conveniently ignores the issue that you need two

deer of opposite sexes to reproduce.

D[next] = 1.10 * D;

W[next] = 0.80 * W;

The rate of deer being eaten by wolves is proportional to both the number

of deer and the number of wolves.

D[next] = 1.10 * D - K1 * D * W;

W[next] = 0.80 * W + K2 * D * W;

K1 and K2 depend on how good wolves are at hunting deer, and how

good deer are at evading wolves. The relationship of the two might

be based on how much deer biomass in deer-weights is lost when a

wolf kills a deer, and how much wolf biomass in wolf-weights is

gained when a wolf kills (and eats) a deer.

You can get anomalies from this model (the model, not the number

calculations). The number of deer or wolves can go negative in

extreme boom-and-bust cycles. Iterating the model over a year is

probably too long a time for accurate prediction. If deer or wolves

are not adults and able to reproduce after a year, you probably

have to track age bands.

However, I doubt it's an accurate model.

Is there any evidence that this behavior actually occurs in

real

biological systems? (Sure, boom-and-bust happens, but on fixed

cycles?) I'd suspect that a lot of this type of behavior is a defect

in the model, which doesn't represent reality close enough.

## Re: how do I get more numbers past the decimal?

On Aug 5, 10:27 pm, gordonb.vb...@burditt.org (Gordon Burditt) wrote:

My friend Lark Davis pointed me to the book Chaos, A Very Short

Introduction, by Leonard Smith. It is a good introduction, though it

has some technical points that I didn't get. I thought I might take a

step backwards, so I went and read James Gleick's book, Chaos: Making
a New Science. Gleick's book was aimed at a popular audience and, as

such, it is less technical. The above example is from Gleick's book. I

believe it is offered as an example of the absolutely simplest kind of

equation that can lead to deterministic disorder. It is not offered as

an accurate model of the boom-and-bust cycle of fish in a pond, but

rather, it is offered as an example of how simple an equation can be,

yet still produce chaotic results. I assume Gleick published it

because his book was for the general public, and the equation is easy

enough that anyone can follow it.

My main concern about the precision of the numbers I got out of my PHP

script was about the possibility of small errors compounding. If I

run an equation 1,000 times, in a loop, and the end result becomes the

starting result of the next iteration, then I worry that even small

amounts of rounding can throw the results off in a big way, over the

course of 1,000 iterations. In other types of equations, the rounding

should be harmless, because the equations move toward some kind of

steady state, and the effects of the rounding should balance out over

the course of enough iterations. But, my understanding is, equations

that lead to chaotic results are different, in that tiny differences

can lead off in very different directions. As The Natural Philosopher

mentioned upthread, it was errors arising from rounding that first

aroused an awareness that these equations were sensitive to very small

changes in input. This is from page 16 of Gleick's book:

--------

One day in the winter of 1961, wanting to examine one sequence at

greater length, Lorenz took a shortcut. Instead of starting the whole

run over, he started midway through. To give the machine its initial

conditions, he typed the numbers straight from the earlier printout.

Then he walked down the hall to get a cup of coffee. When he returned

an hour later, he saw something unexpected, something that planted a

seed for a new science.

This new run should have duplicated the old. Lorenz had copied the

numbers into the machine himself. The program had not changed. Yet as

he stared at the new printout, Lorenz saw his weather diverging so

rapidly from the pattern of the last run that, within a few

(simulated) months, all resemblance had disappeared. He looked at one

set of numbers, then back at the other. He might as well have chosen

two random weathers out of a hat. His first thought was that another

vacuum tube had gone bad.

Suddenly he realized the truth.There had been no malfunction. The

problem lay in the numbers he had typed. In the computer's memory, six

decimal places were stored: .506127. On the printout, to save space,

just 3 appeared: .506. Lorenz had entered the shorter, rounded-off

numbers, assuming that the difference - one part in a thousand - was

inconsequential.

Lorenz could have assumed something was wrong with his particular

machine or his particular model - probably should have assumed. It was

not as though he had mixed sodium and chlorine and gotten gold. But

for reasons of mathematical intuition that his colleagues would begin

to understand only later, Lorenz felt a jolt: something was

philosophically out of joint. Although his equations were gross

parodies of the earth's weather, he had a faith that they captured the

essence of the real atmosphere.

My friend Lark Davis pointed me to the book Chaos, A Very Short

Introduction, by Leonard Smith. It is a good introduction, though it

has some technical points that I didn't get. I thought I might take a

step backwards, so I went and read James Gleick's book, Chaos: Making a

New Science. Gleick's book was aimed at a popular audience and, as

such, it is less technical. The above example is from Gleick's book. I

believe it is offered as an example of the absolutely simplest kind of

equation that can lead to deterministic disorder. It is not offered as

an accurate model of the boom-and-bust cycle of fish in a pond, but

rather, it is offered as an example of how simple an equation can be,

yet still produce chaotic results. I assume Gleick published it

because his book was for the general public, and the equation is easy

enough that anyone can follow it.

My main concern about the precision of the numbers I got out of my PHP

script was about the possibility of small errors compounding. If I

run an equation 1,000 times, in a loop, and the end result becomes the

starting result of the next iteration, then I worry that even small

amounts of rounding can throw the results off in a big way, over the

course of 1,000 iterations. In other types of equations, the rounding

should be harmless, because the equations move toward some kind of

steady state, and the effects of the rounding should balance out over

the course of enough iterations. But, my understanding is, equations

that lead to chaotic results are different, in that tiny differences

can lead off in very different directions. As The Natural Philosopher

mentioned upthread, it was errors arising from rounding that first

aroused an awareness that these equations were sensitive to very small

changes in input. This is from page 16 of Gleick's book:

--------

One day in the winter of 1961, wanting to examine one sequence at

greater length, Lorenz took a shortcut. Instead of starting the whole

run over, he started midway through. To give the machine its initial

conditions, he typed the numbers straight from the earlier printout.

Then he walked down the hall to get a cup of coffee. When he returned

an hour later, he saw something unexpected, something that planted a

seed for a new science.

This new run should have duplicated the old. Lorenz had copied the

numbers into the machine himself. The program had not changed. Yet as

he stared at the new printout, Lorenz saw his weather diverging so

rapidly from the pattern of the last run that, within a few

(simulated) months, all resemblance had disappeared. He looked at one

set of numbers, then back at the other. He might as well have chosen

two random weathers out of a hat. His first thought was that another

vacuum tube had gone bad.

Suddenly he realized the truth. There had been no malfunction. The

problem lay in the numbers he had typed. In the computer's memory, six

decimal places were stored: .506127. On the printout, to save space,

just three appeared: .506. Lorenz had entered the shorter, rounded-off

numbers, assuming that the difference - one part in a thousand - was

inconsequential.

Lorenz could have assumed something was wrong with his particular

machine or his particular model - probably should have assumed. It was

not as though he had mixed sodium and chlorine and gotten gold. But

for reasons of mathematical intuition that his colleagues would begin

to understand only later, Lorenz felt a jolt: something was

philosophically out of joint. Although his equations were gross

parodies of the earth's weather, he had a faith that they captured the

essence of the real atmosphere.

--------
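The compounding described above can be sketched in a few lines of PHP. This is a hypothetical illustration, not code from the thread: it iterates the logistic map x = r * x * (1 - x) from Lorenz's two seeds, .506127 and the rounded .506, and watches the trajectories part ways. The growth rate r = 3.9 (a value in the map's chaotic regime) and the iteration count of 60 are my assumptions.

```php
<?php
// Hypothetical sketch: iterate the logistic map from two nearby
// seeds to show how a rounding difference like Lorenz's
// (.506127 vs .506) compounds under a chaotic map.
$r = 3.9;        // growth rate in the chaotic regime (assumed value)
$a = 0.506127;   // full-precision seed, as stored in memory
$b = 0.506;      // rounded-off seed, as printed and retyped

for ($i = 1; $i <= 60; $i++) {
    $a = $r * $a * (1 - $a);
    $b = $r * $b * (1 - $b);
}

// After 60 iterations the two trajectories typically no longer
// resemble each other, even though the seeds differed by only
// about one part in a thousand.
printf("a = %.6f, b = %.6f, |a-b| = %.6f\n", $a, $b, abs($a - $b));
```

In a non-chaotic equation the two runs would settle toward the same steady state; here the gap is stretched on every pass through the loop, which is exactly the sensitivity Lorenz stumbled onto.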
