APL code for LOWESS or LOESS regression
Forum rules
This forum is for discussing APL-related issues. If you think that the subject is off-topic, then the Chat forum is probably a better place for your thoughts !
- Posts: 17
- Joined: Tue Apr 26, 2011 1:03 pm
APL code for LOWESS or LOESS regression
Anyone know of a workspace that performs Locally Re-weighted Regression? (Variously known as LOWESS or LOESS)
Re: APL code for LOWESS or LOESS regression
It's this small function:
The left argument is the point at which you'd like to calculate the smoothed value.
The right argument is a nested vector of 1) the x values of the process, 2) the y values, and 3) the smoothing parameter b.
So, to get the smoothed value (with the LOWESS smoother) at 3.62 with parameter b=0.5, you call:
3.62 lowess x y 0.5
If you want to smooth at many points, you call, for instance:
(⍳99) lowess¨⊂x y 0.5
To draw the smoothed line, pass (⍳99) as x and (⍳99) lowess¨⊂x y 0.5 as y to a plot function.
Hope this helps,
Sasha.
z←x1 lowess x_y_b;x;y;b;w
(x y b)←x_y_b      ⍝ unpack: x values, y values, bandwidth b
w←(x-x1)*2         ⍝ squared distances from the query point x1
w←*-w÷b*2          ⍝ Gaussian weights: exp(-(x-x1)²÷b²)
z←(+/w×y)÷+/w      ⍝ weighted average of y
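For readers without an APL interpreter, the function above translates line for line into Python with NumPy. This is only a sketch; the sample data at the bottom are made up.

```python
import numpy as np

def lowess(x1, x, y, b):
    """Smoothed value at x1: a Gaussian-weighted average of y,
    mirroring the APL function above line for line."""
    w = (x - x1) ** 2          # squared distances from x1
    w = np.exp(-w / b ** 2)    # Gaussian weights exp(-(x-x1)^2 / b^2)
    return np.sum(w * y) / np.sum(w)

# Smooth at many points, as in the (⍳99) example:
x = np.arange(1, 100, dtype=float)
y = np.sin(x / 10) + 0.1 * np.random.default_rng(0).normal(size=99)
smoothed = [lowess(p, x, y, 0.5) for p in x]
```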
Re: APL code for LOWESS or LOESS regression
Sasha,
Did you just explain LOWESS/LOESS to me in a much more intuitive manner than the second chapter of William Cleveland's "Visualizing Data"?
Cleveland explains it as: every point "P" of the smoothed curve at observation "O" is a weighted linear regression of the "N" points from O(t-N) to O(t+N) around it, weighted by the bisquare function.
Or did I completely misunderstand that chapter? Or am I completely misunderstanding you?
thanks for taking the time,
Tony
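The sliding-window formulation Tony describes can be sketched in Python. This is a simplified illustration using tricube distance weights (the bisquare enters in Cleveland's robustness iterations, which are omitted here); the function name is made up.

```python
import numpy as np

def local_linear(x1, x, y, k):
    """One smoothed point in the sliding-window style: a weighted
    linear regression over the k nearest observations to x1.
    (Simplified sketch; name and details are illustrative.)"""
    d = np.abs(x - x1)
    idx = np.argsort(d)[:k]                       # k nearest neighbours
    w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
    A = np.column_stack([np.ones(k), x[idx]])     # design matrix [1, x]
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
    return beta[0] + beta[1] * x1                 # fitted line at x1
```

On data that is exactly linear, the local fit reproduces the line; on curved data it tracks the local trend rather than just the local level, which is where the linear fit earns its keep.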
Re: APL code for LOWESS or LOESS regression
Tony,
> Did you just explain LOWESS/LOESS to me in a much more intuitive manner than the second chapter of William Cleveland's "Visualizing Data"?
I sent you a working function I use a lot.
Sorry, I didn't mention two points. The first is about the smoothing window. You cite:
"Cleveland explains it as: every point "P" of the smoothed curve at observation "O" is a weighted linear regression of the "N" points from O(t-N) to O(t+N) around it, weighted by the bisquare function."
That's true. I use a "dirty trick": the whole data set is used to calculate every smoothed point, and the window is defined by the exponential weights instead. That's bad for a very large data set, but works fine for everyday needs.
The second point: I use a local constant fit, i.e. a zero-degree polynomial, which is just a weighted average. In my experience there is no need for a linear or quadratic fit; all the game is played by the parameter b in my function.
Sasha.
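The role of b is easy to see numerically. A quick sketch (Python, with made-up step data) of the same Gaussian-weighted average:

```python
import numpy as np

def lowess(x1, x, y, b):
    # the same Gaussian-weighted average as the APL function
    w = np.exp(-((x - x1) ** 2) / b ** 2)
    return np.sum(w * y) / np.sum(w)

x = np.arange(1, 100, dtype=float)
y = (x >= 50).astype(float)   # a step: 0 below x=50, 1 from 50 on
for b in (0.5, 5.0, 50.0):
    print(b, lowess(45.0, x, y, b))
# small b: the value stays near the local level (0);
# large b: the window widens and the value is pulled
# toward the global mean of y
```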