[personal profile] fivemack
The simplex method is a recipe which instructs an N-dimensional triangular amoeba in how to move to the position in N-dimensional space that it likes best. Thinking in N dimensions is difficult, but all you have to know is that an N-dimensional triangular amoeba has N+1 corners, and that if you look at all but one of the corners they define something which behaves a bit like the face of a solid polyhedron.

The recipe is:

Find the corner that you like least; call it W

Figure out the direction which takes W from its current position through the mid-point of the 'face' defined by all the other, better corners. Move along this direction; when you're half-way to the mid-point, call your position C. When you're at the mid-point, call it M, and say that you've moved 1 unit. When you've moved 2 units, call your position R; when you've moved four, call your position E.

If you liked R better than you liked W, check if you liked E even more; if so, move corner W to E, otherwise move corner W to R.

If R was inferior to W, check whether C is nonetheless better than W; if so, move W to C.

And if both R and C are inferior to W, scrunch up by moving every one of your corners half-way towards your current absolute favourite corner G.

Repeat, and continue repeating until all your corners are immersed in equally high-quality happiness.
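
For the concretely-minded, a minimal sketch of that recipe in C++ (treating 'likes best' as 'smallest value of f'; the names W, M, C, R, E match the description above, and the fixed iteration count stands in for a proper stopping test):

    #include <algorithm>
    #include <functional>
    #include <vector>

    using Point = std::vector<double>;

    // Position after moving 'units' from W towards M, where M itself
    // is exactly 1 unit away.
    Point along(const Point& w, const Point& m, double units) {
        Point p(w.size());
        for (size_t i = 0; i < w.size(); ++i)
            p[i] = w[i] + units * (m[i] - w[i]);
        return p;
    }

    // 'corner' must hold n+1 points of dimension n.
    Point amoeba(std::vector<Point> corner,
                 const std::function<double(const Point&)>& f,
                 int max_iter = 10000) {
        const size_t n = corner[0].size();
        std::vector<double> val(corner.size());
        for (size_t i = 0; i < corner.size(); ++i) val[i] = f(corner[i]);

        for (int iter = 0; iter < max_iter; ++iter) {
            // The corner we like least (W) and our absolute favourite (G).
            size_t W = 0, G = 0;
            for (size_t i = 1; i < corner.size(); ++i) {
                if (val[i] > val[W]) W = i;
                if (val[i] < val[G]) G = i;
            }
            // M: mid-point of the 'face' made by all the better corners.
            Point M(n, 0.0);
            for (size_t i = 0; i < corner.size(); ++i)
                if (i != W)
                    for (size_t j = 0; j < n; ++j)
                        M[j] += corner[i][j] / n;

            Point R = along(corner[W], M, 2.0);       // 2 units: reflect
            double fR = f(R);
            if (fR < val[W]) {
                Point E = along(corner[W], M, 4.0);   // 4 units: expand
                double fE = f(E);
                if (fE < fR) { corner[W] = E; val[W] = fE; }
                else         { corner[W] = R; val[W] = fR; }
            } else {
                Point C = along(corner[W], M, 0.5);   // half-way: contract
                double fC = f(C);
                if (fC < val[W]) { corner[W] = C; val[W] = fC; }
                else {
                    // Scrunch: every corner half-way towards G.
                    for (size_t i = 0; i < corner.size(); ++i) {
                        if (i == G) continue;
                        for (size_t j = 0; j < n; ++j)
                            corner[i][j] = 0.5 * (corner[i][j] + corner[G][j]);
                        val[i] = f(corner[i]);
                    }
                }
            }
        }
        return corner[std::min_element(val.begin(), val.end()) - val.begin()];
    }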

For the last month or so I've been wrestling with a particular problem at work, trying stratagem after stratagem and cunning plan after cunning plan to find the best possible solution.

On Wednesday I gave up and let the amoeba crawl away at the problem starting from the best solution I'd had to date. By lunchtime it had managed to find a significantly better one, but by Thursday morning it had got stuck; so I started it off again, from the position that I'd started at back in September, and by teatime it had crawled to a position significantly better even than Wednesday's. It is still crawling; the simplex method is not very fast.

So, the simplex method is very useful, but it's slightly humiliating to be out-thought by so straightforwardly programmed an amoeba.

Date: 2006-10-19 05:17 pm (UTC)
From: [identity profile] del-c.livejournal.com
How many dimensions does your problem have?

Date: 2006-10-19 05:33 pm (UTC)
From: [identity profile] fivemack.livejournal.com
It's got 100 dimensions (in fact, it's image-processing so in some sense has millions of dimensions, but I can get down to a hundred which summarise the image sufficiently, which is the interesting part of the amoeba-farming problem); the evaluation function is a matter of running a program which itself solves quite a large optimisation problem, so there are no derivatives available for fancier optimisers to get a handle on.

Date: 2006-10-19 05:35 pm (UTC)
From: [identity profile] rysmiel.livejournal.com
It would seem to me that this method should be relatively congenial in terms of keeping a record of its path, and thereby possibly providing indicators as to the shape of the underlying distribution of your... ow. Half my brain said "objective function" and the other said "metric of eudemony", depending on how metaphorical your usage of "happiness" is.

The one-unit/two-unit/four-unit measurements act as a protection against local minima, yes? But this algorithm has no inherent "memory" beyond that?

Date: 2006-10-19 05:59 pm (UTC)
From: [identity profile] purpletigron.livejournal.com
I used this for my PhD.

Date: 2006-10-19 11:09 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
So, the simplex method is very useful, but it's slightly humiliating to be out-thought by so straightforwardly programmed an amoeba.

But the amoeba is merely a more subtle expression of your will... Would you say the Wright brothers were outflown by their plane? Don't answer that :)

The simplex method as interpretive dance

Date: 2006-10-19 11:19 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
A: So, I, like want to find the top? So I should go up, right?
B: Yes. So, by up, you mean...?
A: Uh... the line of greatest dgoodness/dvariable?
A: *shades eyes in looking gesture, turns around examining ground, chooses a direction and marches confidently on the spot*
B: Right. Is it this way? *holds arm out straight* Or this way? *Holds arm out at right angles* Or this way? *holds first arm straight up* Or this? Or this? Or this? *points arms in all directions, simulating n-space*
A: Uh...
B: You are in n dimensions. You have n *basis* directions.
A: Ooh! Pick one of *them*! Pick one of them!
B: OK, right. Which?
A: The best one?
B: By which you mean?
A: Uhhh... the worst one?
B: What?
A: Uhhhh *makes a seesaw motion* the direction with greatest positive gradient, but assume d2 is approx constant over small distances, so equally well the opposite of the one with the greatest negative.
B: Good! So you're going to compare all these...
A: Wait.
B: Ooh, what have you spotted?
A: I don't want to test *all* of them.
B: Correct! If you change one at a time, you can work out the new ones from the old ones.
A: Cool. *bounces*
B: You've invented the simplex method *hyperpyramid gesture*

Apologies -- I'm rather tired and have mangled the method beyond all recognition. But I think that's the point, kinda...

At work we have Verilog compiler/optimisers that work iteratively. You'd think they'd never make things *worse*, wouldn't you?

Date: 2006-10-28 07:43 pm (UTC)
From: [identity profile] ipsi-digit.livejournal.com
Are you using the Nelder-Mead amoeba heuristic from Numerical Recipes?

Date: 2006-10-28 08:10 pm (UTC)
From: [identity profile] fivemack.livejournal.com
I know I coded up the algorithm from a description on a Web page; I think it was

http://mathews.ecs.fullerton.edu/n2003/NelderMeadMod.html

I think I decided that working from that description was easier than translating the Fortran from the copy of Numerical Recipes in the office into C++ mindlessly.

Date: 2006-10-28 08:52 pm (UTC)
From: [identity profile] ipsi-digit.livejournal.com
Ah, that's what I did in 1988, coded in C from the Fortran listing. If I remember correctly, I found the search for the optimum went fastest when I squared the result of each evaluation of the function. I didn't have that many variables, only four I think, so the overhead wasn't too severe. My problem was finding a custom optimum temperature compensation network for each crystal oscillator in a production environment. It worked rather well.

I originally did the coding at home on a Commodore C64 and the results were good. A typical run took only 145 seconds. On the 4.7MHz PC at work, it took 296 seconds. A few years later on my 12 MHz '286' it took about 90 seconds. When I added in a 287 math coprocessor it took only 15 seconds. Today, on my dinky 1.4GHz Athlon, it's less than 8 ms. I really don't have a good way to measure anything this fast so it may be faster.

With your 100+ variables, the squaring might add a little computing time, but the convergence to optimum may go faster.

I also found setting hard limits on the variable values drastically affected convergence time or prevented convergence altogether. It was much better to add in a squared error term for each 'out of range' variable to the function results. I'd have to go back to the source code to see exactly how I handled this, though.
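
In code, those two tricks might look something like this sketch (raw_f, lo and hi are made-up names for the raw evaluation and the soft limits on each variable):

    #include <vector>

    double penalised(const std::vector<double>& x,
                     const std::vector<double>& lo,
                     const std::vector<double>& hi,
                     double (*raw_f)(const std::vector<double>&)) {
        double r = raw_f(x);
        double total = r * r;                    // square the raw result
        for (size_t i = 0; i < x.size(); ++i) {  // squared out-of-range terms
            if (x[i] < lo[i]) { double d = lo[i] - x[i]; total += d * d; }
            if (x[i] > hi[i]) { double d = x[i] - hi[i]; total += d * d; }
        }
        return total;
    }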

Anyway, thanks for the reminder!

Date: 2006-10-28 09:12 pm (UTC)
From: [identity profile] fivemack.livejournal.com
I like the pictures on your journal, particularly the anemone; I'm just slightly wondering how you came to find my page, and whether I know you in the physical world; feel free not to answer if you'd rather not, or to reply to tom@womack.net if you'd rather not in public.

I'm now using about 250 variables, and with the objective function being itself a rather complicated optimisation (I'm trying to calibrate a detector, by the naively-ideal process of optimising the calibration values so that a mismatch measure given by a crystallographic program is minimised) the compute times are several days on a 2.4GHz dual-core Opteron; a dozen or so thousand iterations at 30 seconds each.

At least it gives me a nice feeling to come in in the morning and have freshly-baked minima-so-far delivered to my email; though the temptation to check every twenty minutes on the state of a job that you know won't complete until Monday is surprisingly hard to resist. Small incremental rewards, the conditioning technique of choice ...

Date: 2006-10-28 10:43 pm (UTC)
From: [identity profile] ipsi-digit.livejournal.com
Oh, I saw your entry about the sparkly sparks on the 'latest posts' (http://www.livejournal.com/stats/latest.bml) link. I've been an electronics geek for 45 years and so I zero in on electrical stuff. Liked what I read and checked your other journal entries. The Simplex entry caught my attention, tickling memories.

Yikes! 250 variables is a lot for NM. I wonder if one of the other NL optimization methods would work better. There isn't that much performance difference between the conjugate gradient methods. They can be used even on non-differentiable functions if finite differences are used. Each iteration takes longer but far, far fewer iterations are needed to converge, overall.
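
For what it's worth, the finite-difference gradient such a method would need is simple enough to sketch, though each one costs 2n extra evaluations of f (painful when a single evaluation takes 30 seconds):

    #include <functional>
    #include <vector>

    std::vector<double> fd_gradient(
            const std::function<double(const std::vector<double>&)>& f,
            std::vector<double> x, double h = 1e-5) {
        std::vector<double> g(x.size());
        for (size_t i = 0; i < x.size(); ++i) {
            const double xi = x[i];
            x[i] = xi + h; const double fplus  = f(x);
            x[i] = xi - h; const double fminus = f(x);
            x[i] = xi;                            // restore the coordinate
            g[i] = (fplus - fminus) / (2 * h);    // central difference
        }
        return g;
    }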

I've been watching the folding@home news. Seems they are starting to use the super number crunching power of the best video cards to accelerate their research. I can remember 25 years ago when Texas Instruments came out with a graphics card using their DSP chips to speed up graphics. The card had many times the power of the PC's CPU. Seems they were way ahead of their time.

Thanks about the graphics. I just use this account for posting some of the doodles I make in my retirement.
