Colocalization and confocal images and imageJ Matlab

The laser scanning confocal microscope (LSCM) generates images of samples labelled with multiple fluorescent markers, and colocalization of those fluorescent labels is frequently examined.

Colocalization is usually evaluated by visual inspection of signal overlap or by using commercially available software tools, but there are limited possibilities to automate the analysis of large amounts of data.



– Colocalization image processing: ImageJ

Colocalisation analysis is a subject plagued with errors and contention. The literature is full of different methods for colocalisation analysis, which probably reflects the fact that no single approach fits all circumstances.

Analysis can be considered qualitative or quantitative. However, opinions differ as to which category the different approaches fall into!

Qualitative analysis can be thought of as “highlighting overlapping pixels”. Although the result is often reported as a number (“percentage overlap”), suggesting quantification, the qualitative aspect arises when the user has to define what counts as “overlapping”. A threshold is set for each of the two channels, and any area where they overlap is considered “colocalised”. Qualitative analysis has the benefit of being readily understood with little expert knowledge, but it suffers from the intrinsic user bias of setting the threshold. There are algorithms available that automate the thresholding without user intervention, but these rely on analysis of the image’s histogram, which is itself subject to user intervention during acquisition.
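As an illustration, a thresholded-overlap measure of this kind might be computed as follows. This is a minimal numpy sketch, not any plugin's implementation; the thresholds are user-chosen, which is exactly where the subjectivity enters.

```python
import numpy as np

def overlap_percentage(ch1, ch2, thresh1, thresh2):
    """Percentage of above-threshold channel-1 pixels that are also
    above threshold in channel 2 (a simple qualitative overlap measure).
    thresh1/thresh2 are the user-chosen thresholds for each channel."""
    mask1 = ch1 > thresh1
    mask2 = ch2 > thresh2
    if mask1.sum() == 0:
        return 0.0
    return 100.0 * np.logical_and(mask1, mask2).sum() / mask1.sum()
```

Moving either threshold changes the reported "percentage overlap", which is why the number alone should not be mistaken for an unbiased quantification.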

Quantitative analysis removes user bias by analysing all pixels based on their intensity (it must be noted that some authors consider this a drawback rather than an advantage, owing to the intrinsic uncertainty of pixel intensity; see Lachmanovich et al. (2003) J. Microscopy, 212, 122-131). There are a number of coefficients detailed in the literature that can be calculated using ImageJ; each coefficient has its strengths and weaknesses and should be thoroughly researched before being used. This requirement for the coefficient to be fully understood is a disadvantage when trying to convey information to research peers who are experts in biology, and not necessarily mathematics.

One key issue that can confound colocalisation analysis is bleed-through. Colocalisation typically involves determining how much the green and red signals overlap, so it is essential that the green-emitting dye does not contribute to the red signal (typically, red dyes do not emit green fluorescence, but this needs to be verified experimentally). One possible way to reduce bleed-through is to acquire the red and green images sequentially, rather than simultaneously (as with normal dual-channel confocal imaging), and to use narrow-band emission filters. Single-labelled and unlabelled controls must be used to assess bleed-through.

Intensity Correlation Analysis

This plugin generates Manders’ coefficients (see below) as well as performing Intensity Correlation Analysis as described by Li et al. To fully understand this analysis you should read:
Li, Qi, Lau, Anthony, Morris, Terence J., Guo, Lin, Fordyce, Christopher B., and Stanley, Elise F. (2004). A Syntaxin 1, G{alpha}o, and N-Type Calcium Channel Complex at a Presynaptic Nerve Terminal: Analysis by Quantitative Immunocolocalization. Journal of Neuroscience 24, 4070-4081.

It is bundled with WCIF ImageJ and can be downloaded alone here.
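The core quantity behind Li et al.'s analysis, the Intensity Correlation Quotient (ICQ), can be sketched as follows. This is a minimal numpy illustration of the published definition, not the plugin's code: for each pixel, the product of the two channels' mean-subtracted intensities is formed; the ICQ is the fraction of pixels with a positive product, minus 0.5, so channels varying in synchrony score +0.5 and fully segregated staining scores -0.5.

```python
import numpy as np

def icq(ch1, ch2):
    """Intensity Correlation Quotient (Li et al. 2004): fraction of
    pixels whose intensities co-vary (both above or both below their
    channel means), minus 0.5. Range is -0.5 (segregated) to +0.5
    (synchronous); random staining gives roughly 0."""
    a = ch1.astype(float) - ch1.mean()
    b = ch2.astype(float) - ch2.mean()
    return np.mean((a * b) > 0) - 0.5
```

Note that this measures covariance of the two signals, not overlap of thresholded areas, which is what distinguishes it from the qualitative approaches above.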


Manders’ Coefficient (formerly the “Image Correlator plus” (Wayne Rasband, Tony Collins) and “Red-Green Correlator” plugins)

This plugin generates various colocalisation coefficients for two 8 or 16-bit images or stacks.

The plugins generate a scatter plot plus correlation coefficients. In each scatter plot, the first image (channel 1) is represented along the x-axis and the second image (channel 2) along the y-axis. The intensity of a given pixel in the first image is used as the x-coordinate of the scatter-plot point, and the intensity of the corresponding pixel in the second image as the y-coordinate.

The intensities of each pixel in the “Correlation Plot” image represent the frequency of pixels that display those particular red/green values. Since most of your image will probably be background, the highest frequency of pixels will have low intensities, so the brightest pixels in the scatter plot are in the bottom left-hand corner, i.e. x ≈ 0, y ≈ 0. The intensities in the “Red-Green correlation plot” image represent the actual colour of the pixels in the image.
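The “Correlation Plot” frequency image is essentially a 2-D histogram of paired pixel intensities. A minimal numpy sketch (assuming 8-bit images; not the plugin's actual code):

```python
import numpy as np

def correlation_plot(ch1, ch2, bins=256):
    """2-D histogram of paired intensities: entry [x, y] counts the
    pixels with intensity x in channel 1 and y in channel 2.
    For a mostly-background image, the largest counts sit near [0, 0]."""
    hist, _, _ = np.histogram2d(ch1.ravel(), ch2.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    return hist
```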

Mito-DsRed; ER-EGFP

Pearson’s correlation (Rr)=0.34

Overlap coefficient (R)=0.40

Nred ÷ Ngreen pixels=0.66

Colocalisation coefficient for red (Mred)=0.96

Colocalisation coefficient for green (Mgreen)=0.49

TMRE (red) plus Mito-pericam (Green)

Pearson’s correlation Rr=0.93

Overlap coefficient R=0.94

Nred ÷ Ngreen pixels=0.93

Colocalisation coefficient (red) Mred=0.99

Colocalisation coefficient (green) Mgreen=0.98

Both plugins generate various colocalisation coefficients: Pearson’s (Rr), Overlap (R) and Colocalisation (M1, M2). See Manders, E.M.M., Verbeek, F.J. & Aten, J.A. (1993) ‘Measurement of co-localisation of objects in dual-colour confocal images’, J. Microscopy, 169, 375-382, and the tutorial sheet ‘Colocalisation’ for details. The threshold is also reported (0,0 means no threshold was used).
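Using the definitions from Manders et al. (1993), these coefficients can be sketched in numpy as follows. This is an illustrative implementation, not the plugins' own code; the thresholds are assumed to be user-supplied (0 meaning no threshold, as above).

```python
import numpy as np

def coloc_coefficients(red, green, t_red=0, t_green=0):
    """Pearson's Rr, overlap R, and Manders' M1/M2 for two channels.
    A pixel is 'present' in a channel when above that channel's threshold."""
    r = red.astype(float).ravel()
    g = green.astype(float).ravel()
    rr = np.corrcoef(r, g)[0, 1]                     # Pearson's Rr
    overlap = (r * g).sum() / np.sqrt((r ** 2).sum() * (g ** 2).sum())
    m1 = r[g > t_green].sum() / r.sum()  # fraction of red signal over green
    m2 = g[r > t_red].sum() / g.sum()    # fraction of green signal over red
    return rr, overlap, m1, m2
```

Note the asymmetry of M1 and M2: in the Mito-DsRed/ER-EGFP example above, most of the red signal sits over green (Mred=0.96) while only half the green sits over red (Mgreen=0.49), which a single symmetric coefficient would hide.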

Colocalisation Test

When a coefficient is calculated for two images, it is often unclear quite what this means, in particular for intermediate values. This raises the following question: how does this value compare with what would be expected by chance alone?

There are several approaches that can be used to compare an observed coefficient with the coefficients of randomly generated images. Van Steensel (3) adopted an approach where the observed colocalisation between channel 1 and channel 2 was compared to the colocalisation between channel 1 and a number of channel 2 images that had been translated (i.e. displaced by a number of pixels) in increments along the image’s X-axis. Fay et al. (4) extended this approach by translating channel 2 in 5-pixel increments along the X- and Y-axes (i.e., –10, –5, 0, 5, and 10) and ±1 slice in the Z-axis. This results in 74 randomisations (plus the one original channel 2). The observed correlation was compared to these 74 and considered significant if it was greater than 95% of them.
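A single-slice sketch of this translation test in numpy (an assumption of this illustration: `np.roll`'s wrap-around translation stands in for true displacement, and the Z-axis shifts are omitted):

```python
import numpy as np

def pearson(a, b):
    """Pearson's correlation coefficient of two equal-sized images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def translation_test(ch1, ch2, shifts=(-10, -5, 0, 5, 10)):
    """Fay-style randomisation (single slice): compare the observed
    Pearson's r with the r values obtained after translating channel 2
    by every combination of x/y shifts (excluding the null shift)."""
    r_obs = pearson(ch1, ch2)
    r_rand = [pearson(ch1, np.roll(ch2, (dy, dx), axis=(0, 1)))
              for dy in shifts for dx in shifts if (dx, dy) != (0, 0)]
    # significant if the observed r beats 95% of the translated ones
    significant = np.mean([r_obs > r for r in r_rand]) > 0.95
    return r_obs, significant
```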

Costes et al. (5) subsequently adopted a different approach, based on “scrambling” channel 2. The original channel 1 image was compared to 200 “scrambled” channel 2 images; the observed correlations between channel 1 and channel 2 were considered significant if they were greater than 95% of the correlations between channel 1 and scrambled channel 2s.

Costes’ scrambled images were generated by randomly rearranging blocks of the channel-2 image. The size of these blocks was chosen to equal the point spread function (PSF) of the image.

An approximation of Costes’ approach is used by Bitplane’s Imaris and also by the Colocalisation Test plugin. In Imaris, a white-noise image is smoothed with a Gaussian filter the width of the image’s PSF. The Colocalisation Test plugin generates a randomised image by taking random pixels from the channel-2 image, then smooths it with a Gaussian filter, again the width of the image’s PSF.
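A rough numpy sketch of a Costes-style significance test using the original block-scrambling idea (the block size stands in for the PSF width, and the image dimensions are assumed divisible by it; the plugin itself uses random pixels plus Gaussian smoothing instead):

```python
import numpy as np

def costes_scramble(ch2, block=8, rng=None):
    """Cut channel 2 into PSF-sized blocks and rearrange them randomly.
    Assumes both image dimensions are divisible by `block`."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = ch2.shape
    blocks = [ch2[y:y + block, x:x + block]
              for y in range(0, h, block) for x in range(0, w, block)]
    order = rng.permutation(len(blocks))
    per_row = w // block
    rows = [np.hstack([blocks[j] for j in order[i:i + per_row]])
            for i in range(0, len(blocks), per_row)]
    return np.vstack(rows)

def costes_test(ch1, ch2, n=200, block=8, rng=None):
    """Observed Pearson's r is significant if it exceeds 95% of the
    r values computed against n scrambled channel-2 images."""
    rng = np.random.default_rng() if rng is None else rng
    r_obs = np.corrcoef(ch1.ravel(), ch2.ravel())[0, 1]
    r_rand = [np.corrcoef(ch1.ravel(),
                          costes_scramble(ch2, block, rng).ravel())[0, 1]
              for _ in range(n)]
    return r_obs, np.mean([r_obs > r for r in r_rand]) > 0.95
```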

The Colocalisation Test plugin calculates Pearson’s correlation coefficient for the two selected channels (Robs) and compares this to Pearson’s coefficients for channel 1 against a number of randomized channel-2 images (Rrand).

– Colocalization image processing: Matlab


Automated high through-put colocalization analysis of multichannel confocal images

M. Kreft , I. Milisav , M. Potokar and R. Zorec

Lab. Neuroendocrinology-Molecular Cell Physiology, Inst. Pathophysiology, Medical Faculty, Zaloska 4, 1000 Ljubljana and Celica Biomed. Sciences Center, Stegne 21, 1000, Ljubljana, Slovenia

accepted 20 April 2003.

Available online 15 July 2003.

We developed a simple tool using Matlab to automate the colocalization procedure and to exclude the biased estimates resulting from visual inspection of images. The script, in Matlab language code, automatically imports confocal images and converts them into arrays. The contrast of all images is uniformly set by linearly reassigning the values of pixel intensities to use the full 8-bit range (0–255). Images are binarized at several threshold levels. The area above a given threshold level is summed for each channel of the image and for colocalized regions. As a result, the count of pixels above several threshold levels in any number of images is saved in an ASCII file. In addition, Pearson’s r correlation coefficient is calculated for the fluorescence intensities of both confocal channels. Using this approach, quick quantitative analysis of colocalization across hundreds of images is possible. Moreover, such an automated procedure is not biased by the examiner’s subjective visual inspection.
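The pipeline the abstract describes could be sketched in Python/numpy rather than Matlab. The function names here are illustrative, not from the paper's script; the contrast stretch assumes a non-constant image.

```python
import numpy as np

def threshold_counts(ch1, ch2, thresholds=(64, 128, 192)):
    """Sketch of the Kreft et al. pipeline: stretch each channel to the
    full 8-bit range (0-255), then for each threshold count the
    above-threshold pixels per channel and in the colocalised
    (both-above-threshold) region."""
    def stretch(im):
        im = im.astype(float)
        return 255 * (im - im.min()) / (im.max() - im.min())
    a, b = stretch(ch1), stretch(ch2)
    results = []
    for t in thresholds:
        m1, m2 = a > t, b > t
        results.append((t, int(m1.sum()), int(m2.sum()),
                        int((m1 & m2).sum())))
    return results  # rows: threshold, ch1 count, ch2 count, coloc count

def pearson_r(ch1, ch2):
    """Pearson's r for the raw channel intensities, as in the paper."""
    return np.corrcoef(ch1.ravel(), ch2.ravel())[0, 1]
```

Looping these two functions over a directory of image pairs and writing the rows to a text file reproduces the batch behaviour described above (counts at several thresholds plus Pearson's r per image).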


1 Response to “Colocalization and confocal images and imageJ Matlab”

  1. Elise Stanley, 16 March 2011 at 10:12

    Thank you for your interest in the ICA/ICQ analysis method. However, the explanation above is not entirely accurate. This is NOT a correlation method in the strict sense; indeed it does NOT look for colocalization at all but for covariance. Thus, a covariance is detected only if the pixels vary in synchrony.

    The importance of this is twofold: first, if every pixel was positive for both dyes with a similar value, the ICQ value would be 0, not 1.

    The other factor is that, with staining, the opposite of covariance is NOT negative variance but segregation (the stains are in different areas of the image). This is what I originally invented the ICQ method for – thus perfect covariance gives a value of +0.5 (the maximum). A similar (relative) value would be obtained with a Pearson’s correlation. The opposite, totally segregated staining, yields an ICQ value of -0.5 (Pearson’s would be 0). Random staining gives a value of 0, whereas Pearson’s would be some low positive value. Thus, the ICQ is much more meaningful for direct interpretation of immunostaining than a plain Pearson’s.

    The other neat thing about the ICA analysis is that you can detect sub-regions within your image where the two proteins are covarying or segregated, even if the general distribution is random. These show up as positive spikes in the ICA plot and can be seen in the generated images.

    Thank you for your attention
