I’m still learning; in fact, I hope I always will be. However, I think this may be a good point in my journey to put down some details so that others starting out on this fascinating path can pick up a few pointers & ideas. There are some excellent tutorials out there and I’m not going to compete with them; rather, I’m going to tell the story of my recent M31 Andromeda image, with a few extra details.
Imaging Aims – My aim when taking this image was to produce a sharp image with a realistic hue. The image should show the extent & detail of the galaxy, with dark dust lanes, star clouds & globular clusters. The core of the galaxy should not be too burnt out, and the image should print successfully at A4+ for framing. Here’s a reduced-size version of the final image (in case you didn’t see my Andromeda post):
Equipment – Everything should start with a solid basis, and never is that truer than in astrophotography. A solid tripod or pier with a decent mount atop is the prerequisite for any imaging of DSOs (Deep Sky Objects). The mount will need to ‘track’ the sky, i.e. counteract the rotation of our planet. On this mount will be your imaging system, a camera & lens combination. The camera may be a simple compact digital affair, a webcam, an SLR or a specialised astro CCD. The lens system may be any of a variety of telescopes or even quality camera lenses. Computerised control systems can also be added, as can guidance systems that greatly improve your ability to take long exposures. Discussions about equipment can be found elsewhere; suffice to say that I mainly use a Celestron AS-GT mount, a Canon 20D camera and either a Celestron 6 inch SCT or a selection of Canon lenses. I have no guiding system at present. For this image I used the 20D with a Canon 100-400 L IS lens set at 300mm f/5.6 and mounted directly on the AS-GT.
Mount, Scope & Camera with Lens
Technique – The basic technique employed here, & for many astrophotos, is to take many short sub-exposures and then stack them together to make the final detailed image. For an interesting discussion on short sub-exposures visit Samir Kharusi’s website. Put simply, the benefit of this is to drastically increase the signal-to-noise ratio of the final image whilst reducing the demands on tracking / guiding. To illustrate the point I have done a simple unregistered luminance add of my 1.5 hrs’ worth of subs; this is what the image would look like without using a stacking procedure, see below …
Not a pretty sight
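To put a rough number on the signal-to-noise benefit: averaging N subs reduces the random noise by about √N. Here is a minimal numerical sketch in Python (entirely synthetic data, not my actual frames) showing the effect for 36 subs:

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0   # true pixel value (arbitrary units)
noise = 20.0     # random noise per sub-exposure
n_subs = 36      # number of sub-exposures taken

# Simulate n_subs noisy readings of 1000 pixels that all share the same signal
subs = signal + rng.normal(0.0, noise, size=(n_subs, 1000))

single_snr = signal / subs[0].std()
stacked = subs.mean(axis=0)        # a simple average stack
stacked_snr = signal / stacked.std()

print(f"single sub SNR ~ {single_snr:.1f}")
print(f"stacked SNR    ~ {stacked_snr:.1f}  (about sqrt(36) = 6x better)")
```

Real frames also carry fixed-pattern signal, which is why the darks & flats described later matter, but the √N improvement in random noise is the heart of the stacking argument.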
Planning – Next we need to select our target, check that it will be visible for long enough and calculate what focal length will best suit the object; may I suggest AstroPlanner as an excellent piece of software for this purpose. To help general visualisation I would also suggest a planetarium program such as the excellent freeware Cartes du Ciel.
Set-up – As a matter of routine before imaging we should have aligned our mount to the celestial pole and adjusted the balance so that it is almost perfect but just a little heavy on the side pointing eastwards (this keeps the gears loaded and encourages the tracking to be slightly tighter).
Now we should pre-focus the camera on a suitable bright star; I’ve come to know Vega quite well this summer. There are various automated computer programs to help, but I currently use a visual diffraction focusing technique, always followed by taking a test shot. This done, we can now find & frame the object that we wish to image. The AS-GT mount has a very useful feature called ‘Precise Align’; if you have this mount then I recommend you use the function. With the object aligned, take a test shot; this will allow you to double-check the framing of the object & to estimate the required camera settings.
Camera Settings – So what total exposure time do you need? What length sub-exposures? And what ISO setting? Through trial & error I have found that ISO 1600 on the Canon 20D appears to suit my imaging best; ISO 3200 gives a lot of noise, while ISO 800 gains me no quality improvement over 1600. Total exposure time – well, looking at other astrophotos will give a good guide, experience helps too, and don’t forget to check the magnitude of the object if it’s available in a sky catalogue. As for sub-exposure time, I’m a little more scientific about that … Your sub needs to be long enough that the fine details are recorded above the background noise level but not so long that your mount runs into significant tracking errors. Take a test exposure and review its histogram on either the camera back or a PC.
The histogram for one of my M31 subs
If the histogram looks similar to the one above then your exposure should be fine; in general, aim for the left-hand edge of the spike to be 1/3 to 1/2 of the way from the left of the histogram – this lifts the detail out of the noise zone without blowing the highlights. Now is the time to check for tracking issues: are the stars nicely round? If so, then we’re ready to go!
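If you are capturing to a PC, that rule of thumb is easy to automate. A hypothetical helper (the function name and the 1st-percentile approximation of the spike’s left edge are my own, not a feature of any capture software):

```python
import numpy as np

def spike_left_edge(pixels, bit_depth=8):
    """Approximate the left edge of the histogram spike as a fraction 0-1,
    using the 1st percentile of the pixel values as a proxy."""
    return np.percentile(pixels, 1) / (2 ** bit_depth - 1)

# Simulated sky-limited test frame: background centred near 47% of full scale
rng = np.random.default_rng(1)
frame = np.clip(rng.normal(120, 10, size=10_000), 0, 255)

edge = spike_left_edge(frame)
if edge < 1 / 3:
    print("spike too far left: lengthen the sub or raise ISO")
elif edge > 1 / 2:
    print("spike too far right: shorten the sub or lower ISO")
else:
    print("exposure looks fine")
```

On a real frame you would feed in the luminance values from the test shot rather than this simulated array.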
Image Capture – We can now set the camera to work and allow it to accumulate the ‘lights’ that will make up your full exposure time. I find it useful to use Canon’s TC-80N3 programmable remote control: just tell the camera to take x many exposures y seconds apart and let it go. Now’s the time that you can go and have a cup of coffee or two, but do watch out for rain or any other problems. Alternatively, if you’re capturing to PC then you could give AstroScopius a go and watch as your image integrates in front of your very eyes!
As well as lights you need to capture dark frames & flats. Darks are simple: just put the lens cap on and shoot a few frames at all the same settings, including temperature, as the lights. These darks will be used to correct for hot pixels & noise. The flats should be taken at the lowest ISO setting possible and should be of an evenly illuminated white object; I have successfully used the dawn sky, a lightbox and a magic whiteboard with fluorescent strip light illumination. The histogram for flats should show a spike as far to the right as possible without overexposing anything.
Histogram for the flat field
Looks like I had room to expose this flat a little further, but it worked fine anyway. The flats will be used to correct for errors in the imaging train such as vignetting & dust bunnies. You will also hear of bias or offset images; these are used to correct for permanent sensor pattern issues, but I have found it unnecessary to use more than a synthetic bias file, created at the processing stage.
So it’s now dawn, you’ve been up all night, and what do you have to show for it? A bunch of light, dark & flat files, none of which look like you’d wish the finished image to be. I had 45 lights of 2.5 minutes each (though I chose to discard 8 of them), 10 darks, 5 flats & 5 flat darks. Examples below (Light, Flat, Dark) …
Click on the light file (left) to see an enlarged version.
What to do now, my advice – get some sleep before starting processing 🙂
Processing – There are many different methods & programs to process your raw images; here I shall give an overview of what I did for this image:
I have tried several different programs for the scientific initial processing, some paid, some free, and have found none better than Iris. Whatever program you use, good image organisation is useful; with Iris it is vital. The program has a working directory where it will save all files, including a copious number of working files, probably several gigabytes’ worth. I use a striped drive (for speed) with a separate working folder for each project, as per the screen capture on the left.
The next few stages are all done in Iris; if you are going to learn it then may I suggest reading the tutorials on Christian Buil’s site and for a detailed walk-through you will find Jim Solomon’s Cookbook an excellent resource. Here is the basic process:
- Discard any dodgy lights due to clouds, airplanes, etc.
- Import good RAW files
- Create master dark & flat files plus synthetic bias & cosmetic files
- Correct the lights by applying the darks, flats & bias to them.
- Register the files, so that stars are all aligned
- Crop out the edge overlap caused by aligning the files
- Normalise the files to equalise their background
- Stack the files to produce 1 file with all your data in it
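The arithmetic behind the correction, normalisation & stacking steps can be sketched in a few lines of Python. Everything below is synthetic, illustrative data (registration is a geometric alignment step and is omitted, as is the flat-dark correction, for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 64, 64

# Stand-ins for the real frames; in practice these come from the RAW files
lights = rng.normal(1000.0, 50.0, size=(37, H, W))  # the kept light frames
darks = rng.normal(100.0, 5.0, size=(10, H, W))     # matching dark frames
flats = rng.normal(5000.0, 100.0, size=(5, H, W))   # flat-field frames

# Master calibration frames: median-combine to suppress outliers
master_dark = np.median(darks, axis=0)
master_flat = np.median(flats, axis=0)
master_flat /= master_flat.mean()   # normalise the flat to unity gain

# Correct each light: subtract the dark, divide by the flat
calibrated = (lights - master_dark) / master_flat

# Normalise: shift each frame so all backgrounds sit at a common level
bg = np.median(calibrated, axis=(1, 2), keepdims=True)
normalised = calibrated - (bg - bg.mean())

# Stack: one frame holding all your data
stacked = normalised.mean(axis=0)
```

Iris does all of this (and the registration) for you; the sketch is just to show why the darks remove the fixed offset while the flats divide out vignetting & dust shadows.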
At this point you could move on to another program or do some more processing in Iris. I did the latter, first correcting the white balance, using sub-1 ratios to avoid burning out any highlights; the ratios are typically 0.98R 0.50G 0.63B when I’ve used the 20D. I then experimented with both dynamic & colour stretches, finally deciding upon a relatively moderate asinh colour stretch. The final things in Iris were to adjust the visualisation and save my output to file – in this case a full-depth bitmap file.
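The point of keeping the white-balance ratios below 1 is that scaling a channel down can never push a pixel past full scale. A toy illustration (synthetic image; the ratios are the 20D values quoted above):

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.uniform(0.0, 1.0, size=(32, 32, 3))  # normalised RGB image

ratios = np.array([0.98, 0.50, 0.63])  # R, G, B, all below 1
balanced = img * ratios                # per-channel scaling

# No channel can exceed full scale, so no highlight gets burnt out
assert balanced.max() <= 1.0
```

Multiplying any channel by a factor greater than 1 would risk clipping stars that were already near saturation, which is exactly what the sub-1 convention avoids.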
Next I moved into PixInsight which is convenient because it reads the full depth of an image produced by Iris without needing to adjust the levels as would be required in Photoshop. There is a significant overlap in functionality between Iris & PixInsight and while I prefer to use Iris for most things, there are a few functions that I am, so far, better at using in PixInsight. These include the two that I used here:
- HDRWavelet Transform to extract high dynamic range data within the galaxy; in this case it took two transforms, one of 4 iterations & the next a single iteration. The output was rather too harsh for my liking, so it was blended with the original in Photoshop.
- Finally a morphological transform to control the distracting starfield between us and the target. Again this was layered and blended using a mask in Photoshop so as to maintain clarity of the globular clusters within Andromeda.
The resulting layer blends were finally flattened in Photoshop before export as a 16-bit Photoshop file. The final stage for all my images, astronomical or not, is in Lightroom. Clarity, vibrance, curves, colour temperature and more can all be fine-tuned losslessly in Lightroom, and it is efficient at outputting to 8-bit JPEG for posting on the web. In this case I needed very few final tweaks (minimal clarity & sharpening) and the image is as at the head of this post.
I should point out that this is an overview of the processing that I did – in reality the image & processing evolved over the course of a fortnight as I tried out new ideas. Sometimes I’d leave the image open on my desktop so that I could glance at it & think “hmm, I’ve not quite got x right” and so forth. I think that’s the thing: this should be a fun adventure and one of learning too – no rush, no competition, just fun learning.
I hope that this post will help a few people who are starting out to take a step forward with their imaging; it has definitely been useful to write.