Advanced Astrophotography
Steps when taking the photos:
Pertti's method:
1. Take RAW images (light, dark and flat frames),
2. Develop all the RAWs with ImagesPlus, linear mode, daylight white
balance, into 16-bit TIFF files,
3. Average combine darks into a master dark and flats into a master flat
(I don't use flats very often but sometimes they are crucial),
4. Calibrate light frames using the master dark and master flat,
5. Align calibrated light frames,
6. Combine aligned frames (adaptive addition or some of the variations
of average combination),
7. Enhance the images (Digital Development, Levels, iterative
restoration, sharpening, etc.)
What happens here is that the master dark and master flat remove all the constant problems in the image. Calibrating with the
master dark does increase the S/N ratio, but too little to be noticeable, because it only removes constant hot pixels. The master
flat does not change the S/N ratio; instead it balances the brightness of the center against the corners and removes dust effects.
Stacking improves the S/N ratio the most and makes it possible to perform more drastic processing later on without making the noise
visible.
Besides, if you (manually) enhance images before stacking,
you will have to do it once per frame. It is much easier
to do after stacking, because then there is only one image!
ImagesPlus does a nice job with its Adaptive Addition where overflow is
automatically avoided.
When the signal level is high enough to begin with, I use averaging or some of its variations, but when the signal level
is low, like for nebulae, I use adaptive addition. In fact, I use it a lot, because even for globulars and open clusters
it helps to boost the dimmest parts of the image.
Adaptive Addition in ImagesPlus does not just add; it boosts the dim parts of the image proportionally more.
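To make steps 3, 4 and 6 of Pertti's list concrete, here is a minimal numpy sketch of the calibrate-and-stack arithmetic (the frame loader and file format are hypothetical stand-ins, the light frames are assumed to be already aligned, and this is not ImagesPlus' actual code):

    import numpy as np

    def load_frames(paths):
        # Hypothetical loader: returns each frame as a float32 2-D array.
        # Real RAW/FITS loading would use a library such as rawpy or astropy.
        return [np.load(p).astype(np.float32) for p in paths]

    def calibrate_and_stack(lights, darks, flats):
        master_dark = np.mean(load_frames(darks), axis=0)    # average-combine darks
        master_flat = np.mean(load_frames(flats), axis=0) - master_dark
        master_flat /= np.mean(master_flat)                  # normalize flat to mean 1.0
        calibrated = [(f - master_dark) / master_flat        # remove constant defects
                      for f in load_frames(lights)]
        # Average-combining N aligned frames improves S/N by roughly sqrt(N),
        # which is why stacking helps the most.
        return np.mean(calibrated, axis=0)
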
Image Processing Nomenclature Abbreviations
R: Raw image
DS: Dark Subtracted
S(T/#): multiple Stacked images (Type/# of images)
The "T" in parentheses should be
replaced by a word to indicate the type of stacking done. Additive stacking
is the most common type, but other types include Averaging and Median
stacking. The "#" should be replaced with the number of images in the stack.
LS: Light subtraction
Also referred to as stray light subtraction
or scattered light subtraction. A form of background
compensation or flattening technique.
FFC: Flat Field Correction
SSF: Smoothness/Sharpening Functions
Filtering techniques such as sharpening,
softening, unsharp masking, high-pass or low-pass filtering, Gaussian
blurs, etc. Yes, I know
that there is overlap among these terms, but
different individuals have their own preferred terminology.
HM: Histogram Manipulation
Histogram equalization, stretching,
clipping, histogram curve modification, etc. Histogram
manipulations may be done manually or
automatically. For instance, filters
for brightness enhancement, contrast enhancement, color/tone
enhancement, and the like work by histogram modification.
MT: Manual Touchup: localized manual fixes, such as hot pixel
removal
DSP: Digital Signal Processing techniques: these include
convolution, deconvolution, and Fast Fourier Transforms
DDP: Digital Development Process
A technique that compresses the dynamic
luminance range of a digital image to be more like that of
conventional film photography. The middle range of luminance is kept linear,
but the high and low ends of the luminance are made nonlinear to avoid
saturation at the high end and to enhance detail at the low end. (A
sketch of this curve follows the list.)
C: Composite
An image created by combining pieces of
multiple images into a single image.
M: Mosaic
A mosaic is a specific type of composite
image. In general, a composite image does not necessarily represent a real
scene. A mosaic is a composite image that is intended to
represent a true scene by stitching together an assemblage of images
(generally overlapping ones). A mosaic may be created to
provide a larger or more detailed continuous image than a single shot could
have provided, or to provide detail for a portion of an image that would
not have been possible in a single image.
O: Other
Anything not covered by the above.
Helpful to let people know that additional processing was done beyond what
is specified by the
abbreviations that accompany the image.
Also helpful as feedback regarding this list; if "Other"
appears very often, it would
indicate that some commonly-used techniques
do not appear on this list and should be added.
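The DDP curve mentioned above is commonly written as y = x / (x + a), with the break point a placed near the sky level. Here is a hedged numpy sketch (the parameter choice is a heuristic, and this is not any particular program's implementation):

    import numpy as np

    def ddp(image, a=None):
        # Hyperbolic tone curve: roughly linear for x << a, so faint detail
        # is lifted, and saturating for x >> a, so highlights are compressed.
        x = image.astype(np.float64)
        if a is None:
            a = max(np.median(x), 1e-6)   # heuristic: break point near the sky level
        y = x / (x + a)
        return y / max(y.max(), 1e-12)    # rescale to [0, 1]
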
Using Mira to remove background gradients:
As the software engineer explained it to me on the phone, Mira Pro is
able to detect a gradient (which he referred to as a slope of linear
data) and remove it with some complicated math that he tried to explain.
I lost him halfway into it...
You are able to specifically define regions that should not be used in
the computation of the background data.
Now, how do you tell if you removed the gradient, or where the gradient
is in the first place?
Mira has a powerful feature that lets you assign a color palette to a
grayscale image so that you can look at your data in different color
spectra. By fine-tuning the histogram stretch and the color palette
contrast and gamma, you can get views of your data that reveal gradients
in stunning color.
For instance, let's say you are looking at your full-screen nebula
region. You would apply an aggressive stretch to the grayscale image,
then assign a color palette to the image, do a little tweaking, and lo
and behold, one corner of the image is bright red fading to bright blue
at the opposite corner of the image.
Now you use the gradient removal tool (which they call fit background)
by selecting regions to exclude from the calculation, set your math
options, tell it to maintain the value intensity in the central part of
the image, and click OK. You can then look at the false color data and
determine whether the field is perfectly flat. If it is not, you can undo
and do a little more tweaking of the fit background dialog until you get
it right.
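For the curious, the plane fit the engineer described can be sketched in a few lines of numpy (a hedged approximation only; Mira's actual Fit Background math is surely more sophisticated):

    import numpy as np

    def remove_linear_gradient(image, exclude_mask):
        # exclude_mask is True over the regions (nebula, bright stars) that
        # should not be used in the background computation, as described above.
        h, w = image.shape
        yy, xx = np.mgrid[0:h, 0:w]
        bg = ~exclude_mask
        A = np.column_stack([xx[bg], yy[bg], np.ones(bg.sum())]).astype(np.float64)
        coeffs, *_ = np.linalg.lstsq(A, image[bg].astype(np.float64), rcond=None)
        plane = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
        # Subtract the tilt but keep the mean level, echoing the
        # "maintain the value intensity" option mentioned above.
        return image - plane + plane.mean()
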
At one point I was looking at M20 which filled the entire field. M20 was
bright green, the background was bright blue, with a bright red gradient
running through the background of the image. I selected the central
brightest portion of M20 and clicked on the fit background tool, and
it perfectly removed the gradient, creating a perfect blue background
with no red, although all the dim nebulosity stayed bright green (and
yellow) at the brightest points. I did this on all four channels. It was
magic. That's the only way to explain it.
I also tried an image from the FSQ-106 and STL11000. A huge nebulous star
field. The gradient popped out in living color. In grayscale I could not
see it even with the most aggressive histogram stretch.
In addition to that, you can load all four channels into an image set,
where you can flip through the four images at 1-30 frames per second as
an animation, make real-time histogram stretches and color palette
adjustments that apply to all four channels at once, and easily see the
gradient differences in each channel.
I could go on forever about what a great tool this is for *the
perfectionist*...but I only know about 10% of the program so far. ;)
$50,000 for imaging equipment...crummy pictures from gradients.
$50,000 + $1300 for software, and your images come out nicer and more
accurate with easy tools to repair data due to light pollution.
Now it's not all that easy, but we spend less than $1300 on three
emission line filters. ;) There is a lot more to image processing than
gradient removal, but I will tell you, without gradients, image
processing becomes much easier.
The more time I spend in astrophotography, the more I realize that
capturing the raw data is the easy part. It's what you do with it once
you have it that counts.
This is where software comes in. People have a hard time justifying
expensive software purchases because software does not weigh 47 lbs and
break your back going into your trunk...but if you think about it,
image processing software has a far greater impact on the final product
of all your labors than the equipment you use.
Give me an 8" LX200 and a ST7XME and put me up against a 14" RC with a
ST10XME. If the owner of the 14" RC cant image process, then I produce
the finer image.
My point: think of software as being as valuable as your imaging
equipment, and then you start getting the proper perspective on value.
I would rather own a 130mm refractor and lots of great software (and
RAM) than a 180mm refractor and Paint Shop Pro.
rb from Mt Ewell Observatory July 2004
I agree one hundred percent with your philosophy. It is a lot like
buying a telescope. Many who are new to this hobby and just starting out buy such
things as an LX200GPS-12. (Yours truly included.) They think that the big pretty
OTA will allow them to see Hubble-type vistas right in the eyepiece. The
mount is only that part of the system that holds the telescope up so you can
look through it! I now tell anyone who is thinking about getting into this
hobby to buy a GOOD mount first, then look for an OTA. If I were doing it again,
I would have bought something like a Losmandy GM-11 and put my old C-8 OTA on it
until I could afford a larger OTA. (It would have been less expensive
too.) Buying software is much the same. I used Paint Shop Pro for a
long time (and I still like some of its tools), but PS-CS allows me to do things
that PSP simply does not have the power to do. This is what the extra few
hundred dollars in purchase price buys you. I suppose the same is
true of Mira. You can use lower-end programs and still come out with
very good images, but the higher-end tools produce those images with a lot less
aggravation.
Don Waid
I have seen so many people plan their hardware budget carefully to get a
mount, scope, pier, reducers, focusers, etc., and have no clue that they will
eventually spend another $2000 or more on software to process their images.
They also have little insight into the processing workflow and how many disparate
programs they will need.
For example,
1. Acquire your data with CCDSoft and do image links with TheSky to find the
guide star. Get FocusMax software to focus your electronic focuser for
sharp stars.
2. Use reduction groups in CCDSoft (now Maxim, too) to reduce and possibly align
your images. If you don't like their alignment quality, get other programs like
Registar or MIRA.
3. Combine the reduced images in such programs as Sigma or Russ Croman's RC
Control Panel to take advantage of more sophisticated rejection methods (a
bare-bones sketch of sigma clipping follows this list).
4. Deconvolve the luminance in programs like CCDSharp for even sharper stars.
5. Bring the R, G, B into Maxim, normalize the background, and apply the color
weights after aligning. Oh, you may have used alignment in CCDSoft or gone
out and used programs like Registar. Create a 16-bit RGB TIFF file in Maxim for
import into yet another program, Photoshop. If you don't own it yet,
prepare to give up a 2" Nagler eyepiece and buy Photoshop, and if you don't
have Photoshop CS to work in 16-bit mode, better plan to upgrade $$.
6. You may still go back into Registar to align the RGB TIFF with the FITS
luminance.
7. Import into Photoshop. Oh, wait, the FITS file is not importing. Get
Eddie Trimarchi's free FITS plug-in for Photoshop. If you are working with
32-bit real (IEEE) files in such programs as MIRA, then you need to buy Eddie's
commercial program.
8. Spend 3 years learning Photoshop (Total Training's DVD set is wonderful in
this regard $$), and possibly buy Grain Surgery to smooth backgrounds.
9. Make sure you budget for a laptop or desktop computer with the largest hard
drive you can find and a fast (e.g. 3GHz) processor, and don't forget that
large-screen calibrated monitor.
10. And.........
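For reference, the "more sophisticated rejection methods" in step 3 are variations on sigma clipping. A bare-bones, single-pass numpy sketch (real tools iterate and offer many more options):

    import numpy as np

    def sigma_clip_combine(stack, kappa=3.0):
        # stack: N aligned frames as an (N, H, W) array.
        data = np.asarray(stack, dtype=np.float64)
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        # Reject pixels more than kappa standard deviations from the mean
        # (cosmic rays, satellite trails), then average what remains.
        good = np.abs(data - mean) <= kappa * std
        return np.where(good, data, 0).sum(axis=0) / np.maximum(good.sum(axis=0), 1)
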
I know this is a generalized example (rant) just to make a point, and is nowhere
near exact. There are many other programs out there, such as ImagesPlus, MIRA,
AIP4WIN, Picture Window Pro, AstroArt, etc., just to confuse your selection. Have
you ever tried flowcharting this? Anyway, I was one who did a good job of
budgeting for the hardware with no clue as to the software. As Richard points
out, it is your processing skills that will bring your raw data to life, but you
have to figure out your process flow first.
Don Goldman
www.astrodon.com
Another method to remove gradients:
The information needed for gradient removal is in the picture itself, when it is
a "many stars, much sky" type of picture.
My backgrounds were effectively removed by applying a wide-range median filter
to my original picture and subtracting the result from the original.
Of course, in many cases there are structured foregrounds that make this
procedure difficult, and artistic talent is
necessary to end up with a structureless background.
For hi-res pictures, it is not necessary to run the median on the full-scale
picture (which took ages, especially on a 33 MHz 486). Resampling the picture
to 10% of its size, median filtering it, and resampling back to the original size
was just as effective.
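A hedged Python sketch of this method, including the downsampling speed trick (scipy and scikit-image stand in for whatever software Siebren actually used; the filter size is an illustrative choice):

    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.transform import resize

    def subtract_median_background(image, scale=0.1, size=15):
        img = image.astype(np.float64)
        small_shape = (max(1, int(img.shape[0] * scale)),
                       max(1, int(img.shape[1] * scale)))
        small = resize(img, small_shape)            # work on a 10%-size copy for speed
        bg_small = median_filter(small, size=size)  # wide-range median = smooth background
        bg = resize(bg_small, img.shape)            # back to the original size
        return img - bg + np.median(bg)             # subtract, keeping the overall level
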
Siebren Klein
s.s.klein@tue.nl
http://www.geocities.com/siebren2001/index.html
Manual normalisation:
To balance the color in an RGB image you need to have all the images
"normalized". Roughly speaking, and this may not be technically
correct, you need to set the background level of the images to the same value.
As you can see from my post, the background ADU counts for my R, G, and B were
all very high, and they were different. This is primarily due to the high level
of light pollution I was imaging under. This light pollution is not even across
the spectrum, hence the different levels of background intensity. It of course
also affects the main part of the image, but there you have no standard to go
by. The standard to balance to is a dark-sky location. In effect, the
normalization process removes as much of the light pollution (sky glow) effect
as possible. This is not to be confused with correcting a gradient problem;
that is a different process. To normalize the R, G, and B frames I use pixel
math in MaxIm. Note that the sub-exposures have been reduced and combined into
three R, G, B master frames. If any gradient removal plug-ins are used, I do
that before normalization. Now for the normalization steps.
First, open all three R, G, B master frames in MaxIm and open the image
information window (View/Information). Select a master frame to work on. Choose
three locations on the frame where you know the sky should be dark; these will
be our standardization reference points. (Avoid any nebulae, stars, etc.) I set
the aperture radius of my cursor to 10 pixels. (Right-click on the image and
choose "Set Aperture Radius".)
Move the cursor over the three areas you chose and read the average ADU count
displayed in the Information Window for each location. Get out your calculator
and average these readings. This becomes the background count for the frame. My
R frame had a count of about 9,200. (Very high; I hope yours are not that bad.)
Now go to pixel math (Process/Pixel Math). I try to bring the background count
down to about 125 to 150. To do this, set the parameters in the Pixel Math
Window to: Scale Factor % = 100, Operation = None, and Add Constant to the
amount you want to subtract. In my case it is -9,050, which should give me a
background of 150 when the operation is complete. Click OK and the operation
completes.
You can now check by moving the cursor over the three reference areas and
seeing whether they are in the ballpark of 125 to 175. If not, simply go to
Edit/Undo Pixel Math and redo the operation with a revised "Add Constant"
amount. Do this for all three of the R, G, B frames. Just be sure you use the
same three locations on every frame when taking your background counts. After
you normalize your R, G, B frames you can combine them into a master RGB image.
I combine this RGB image with my Luminance image in PS layers.
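In numpy terms, the pixel-math step above amounts to measuring the mean ADU in the same spots on each channel and subtracting a constant. A minimal sketch (the spot coordinates, radius, and target are illustrative, not Don's exact numbers):

    import numpy as np

    def normalize_channel(channel, spots, radius=10, target=150):
        yy, xx = np.mgrid[0:channel.shape[0], 0:channel.shape[1]]
        readings = []
        for (cx, cy) in spots:   # use the same dark-sky spots on every channel
            aperture = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
            readings.append(channel[aperture].mean())
        background = np.mean(readings)          # e.g. about 9,200 for the R frame above
        return channel + (target - background)  # the "Add Constant", e.g. -9,050

Running this with the same spots list on R, G, and B leaves all three backgrounds at the same target value, which is the whole point of normalizing.
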
I know this is long, and I am no expert in image processing. Some on this group
may wish to educate me as to a better method of doing this. This is just what I
use, and I learned a lot by trial and error. (It seems like more error than
anything else.) Use it, and if you like it, all the better. If it doesn't work
for you, disregard or modify it. If you find something that works better, please
let me know so I can use it.
Don Waid
Here ya go:
1. Open all three R,G and B master FITS frames in Maxim.
2. Start with the Red frame.
3. Open up the information window. VIEW > INFORMATION or Ctrl-I.
4. Set mode to APERTURE.
5. Right click on the image and set APERTURE RADIUS to 10 pixels.
6. Now grab your notebook or a piece of paper.
7. Find three spots on the image that are most obviously background areas,
largely unaffected by your object and stars. You may need to adjust your
APERTURE RADIUS to accommodate a very busy image.
8. Note in the information box the average ADU count for those three
areas. You don't have to use three, you could just use two or even one.
What I normally do is just scan the image for all the background areas
and get a 'feel' for the background ADU count and then decide on a
number that I think represents an accurate ADU background number. I
think our brains can do a better job than the computer on figuring what
is background and what is not based on what we see with our own eyes.
Remember to stay away from obvious gradients and hot pixels and dark
areas while doing this. Once you get the hang of it, you will be a
background ADU expert. ;)
9. Pick a number and write it down in your notebook. Round up to the
nearest 50. So let's say you look around three areas of the image and
they all hover around 5012-5055. Write down 5050 as the background ADU
for that image.
10. Next open up pixel math. PROCESS > PIXEL MATH.
11. Image A should be the Red frame we are working on.
12. Image B should also be the Red frame.
13. Operation is add.
14. Add Constant should be the negative of our ADU number minus 100, so
that the background lands at 100. If we came up with 5050, the add
constant should be -4950.
15. Select OK.
16. Now go to the corrected (normalized) image and confirm that the areas
you were studying all hover around 100 ADU. +/- 20% is expected.
17. Now repeat these steps for the Green and Blue images using the same
areas you measured ADU in the red frame. You don't want to use new areas
in these images as that would defeat the purpose of getting all three
images normalized.
18. Now you have three manually normalized RGB images.
19. Combine those using Mr. Goldman's fine RGB ratios (don't forget to
uncheck normalize images when combining), and you will find that the
color balance comes out very well.
20. When you save the image, make sure to save it as a TIFF file and that
the file is stretched for 16 bit. (Under SAVE AS, select STRETCH, select
LINEAR ONLY, INPUT RANGE as MAX PIXEL, OUTPUT RANGE, 16 BIT.) A sketch of
this stretch follows the list.
21. Import TIFF into Photoshop and use levels and curves to bring out
the image details.
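As a rough numpy equivalent of the linear-only stretch in step 20 (a hedged sketch of what such a stretch does, not MaxIm's exact behavior):

    import numpy as np

    def stretch_to_16bit(img):
        x = img.astype(np.float64)
        x -= x.min()                  # input range: min pixel to max pixel
        if x.max() > 0:
            x *= 65535.0 / x.max()    # output range: 16 bit
        return x.astype(np.uint16)    # ready to save as a 16-bit TIFF
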
rb
Richard A. Bennion
Managing Director
Ewell Observatory
http://www.ewellobservatory.com
One way to manually normalize (per Ron's first book) is to use pixel
math. Say that your red channel has a background ADU count of 1000,
your green is 1500, and your blue is 800. Then assume you want to
bring down the value so _all_ three channels are at 100. In Maxim,
choose pixel math and select the "add constant" feature. Then plug
in a negative value of -900 for the red, -1400 for the green, and
-700 for the blue. When you have done this, the background ADU of all
three channels will be around 100, and the background should look
neutral when you combine the three channels (light pollution gradients
aside). Actually, what I do is use software on each channel to take
care of light pollution gradients first, then do the pixel math.
Another way...the way I think Ron now uses and is in his new book,
is just to do the RGB combine and select (click on) normalize
background. Then Maxim will do a reasonable job of creating equal
background counts for each channel. You would then bring this into
PS and do final tweaks on the black point of the histogram of each
channel so that the space between the left point and the starting
point of each channel's histogram is equal. I prefer this method
because it's a hard-core assurance that the background will be
neutral. (I have problems distinguishing between dark colors, so I
rely on the histogram routine to make sure things are "right".)
Once the background is "neutral", color balance on the "target"
becomes easier, since you are now dealing with just the color
balance tool (minor tweaks, if your RGB combine ratios were correct
for your system), or you can adjust the "target" color through
histogram changes in the midpoint and white point "pointers" for one
or more channels. (After doing either of the above, you may need to
go back and tweak the background black points again to maintain a
neutral background after the tweaks.)
Hope this makes sense...if not, please feel free to ask more
questions...maybe I or someone else can explain it better.
Incidentally, the reason that both Richard and I are shooting for a
100 ADU background is that we don't want to go below the "pedestal"
set by SBIG and others, which is usually 100 ADU. If you normalize to
a number below the 100 ADU pedestal, the resulting histogram will look
clipped (no space between the left point and the starting point of the
histogram).
Randy Nulman
Computer needs:
2 GB of RAM vs. 1 GB of RAM makes a huge improvement when dealing
with these large files.
Let's do some math:
***********
STL11000 file = 20 MB
When loaded into Maxim, the 16-bit file gets converted into a 32-bit
working space, so the file now takes up 40 MB of RAM.
Load 20 Bias frames to perform a mean sigma clip combine.
20 Images X 40 MB = 800 MB of RAM.
Then load on top of that about another 300 - 400 MB of RAM for OS, Apps,
etc.
Total RAM = 1.2 GB (over physical RAM limit).
Now do the mean sigma clip combine = more RAM.
Now you are hitting the hard drive to do memory swaps.
Total time to combine = 10 minutes (if lucky)
***********
Now add another GB of RAM. You are now well under the 'hit the hard
drive to swap memory' limit.
Total time to combine = 1 minute.
***********
Now in my case, I will have Maxim, Mira, PS CS, Registar,
and Outlook XP all running at the same time.
2 GB = Not enough RAM = Time for a new machine.
rb