Archive for the 'imageJ' Category



ImageJ plug-in, Montpellier


MRI Cell Image Analyzer


ImageJ is a free, public-domain program for image analysis and processing. Building on ImageJ, we have developed at the Montpellier RIO Imaging facility a visual interface for the rapid development of image-analysis applications. This interface extends the capabilities of ImageJ: by dragging and dropping from a list of existing operations, sequences of operations can be assembled quickly and easily, either for interactive image analysis or to automate the analysis of large numbers of images. The interface has been used for a wide variety of applications, and both the interface and these applications are freely available under the GNU General Public License.

MRI-CIA was first presented to the ImageJ community at the "ImageJ User and Developer Conference 2006". If you use MRI-CIA, please cite the following article:

For more information:

Presentations

  • MRI Cell Image Analyzer – Automatic analysis of microscopy images (pdf | view online)
    Presentation of MRI-CIA, in English, by Volker Bäcker, 13.12.2005 at the CRBM, Montpellier, France.

Workshops

    • To follow the workshop, you need to download these images: images.zip

    • You will find the macros developed during the workshop here.

Links

  • Publications using MRI Cell Image Analyzer
  • Applications developed with MRI-CIA
  • MRI Object Modeling Workbench
  • MRI-CIA on the ImageJ documentation wiki

Colocalization in confocal images: ImageJ and Matlab

The laser scanning confocal microscope (LSCM) generates images of multiple labelled fluorescent samples. Colocalization of fluorescent labels is frequently examined.

Colocalization is usually evaluated by visual inspection of signal overlap or by using commercially available software tools, but there are limited possibilities to automate the analysis of large amounts of data.

– Training

www.picin.u-bordeaux2.fr/Cours/formation_2006/cerner_la_colocalisation_2006.pdf

—————————————————————–

– Colocalization image processing with ImageJ

Colocalisation analysis is a subject plagued with errors and contention. The literature is full of different methods for colocalisation analysis, which probably reflects the fact that one approach does not necessarily fit all circumstances.

Analysis can be considered qualitative or quantitative. However, opinions differ as to which category the different approaches fall into.

Qualitative analysis can be thought of as "highlighting overlapping pixels". Although the result is often given as a number ("percentage overlap"), suggesting quantification, the qualitative aspect arises when the user has to define what is considered "overlapping". A threshold is set for each of the two channels, and any areas where they overlap are considered "colocalised". Qualitative analysis has the benefit of being readily understood with little expert knowledge, but suffers from the intrinsic user bias of "setting the threshold". There are algorithms available which will automate the thresholding without user intervention, but these rely on analysis of the image's histogram, which is itself subject to user intervention during acquisition.
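
To make the thresholding step concrete, here is a minimal Java sketch of the "highlighting overlapping pixels" idea, written against the ImageJ API. The thresholds t1 and t2 stand in for the user-chosen (and therefore bias-prone) settings discussed above; this is an illustration, not any particular plugin's code.

import ij.process.ImageProcessor;

public class OverlapSketch {
    // Fraction (as a percentage) of channel-1 pixels above the user-chosen t1
    // that also exceed the user-chosen t2 in channel 2; one common reading
    // of "percentage overlap".
    public static double percentOverlap(ImageProcessor ch1, ImageProcessor ch2,
                                        int t1, int t2) {
        int above1 = 0, both = 0;
        for (int y = 0; y < ch1.getHeight(); y++) {
            for (int x = 0; x < ch1.getWidth(); x++) {
                boolean a = ch1.getPixel(x, y) > t1;
                boolean b = ch2.getPixel(x, y) > t2;
                if (a) above1++;
                if (a && b) both++;
            }
        }
        return above1 == 0 ? 0 : 100.0 * both / above1;
    }
}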

Quantitative analysis removes user bias by analysing all the pixels based on their intensity (it must be noted that some authors consider this a drawback rather than an advantage, due to the intrinsic uncertainty of pixel intensity; see Lachmanovich et al. (2003) J. Microscopy, 212, 122-131). There are a number of coefficients detailed in the literature which can be calculated using ImageJ; each coefficient has its strengths and weaknesses and should be thoroughly researched before being used. It is this requirement for the coefficient to be fully understood which is a disadvantage when trying to convey information to research peers who are experts in biology, and not necessarily mathematics.

One key issue that can confound colocalisation analysis is bleed-through. Colocalisation typically involves determining how much the green and red colours overlap, so it is essential that the green-emitting dye does not contribute to the red signal (typically, red dyes do not emit green fluorescence, but this needs to be experimentally verified). One possible way to avoid bleed-through is to acquire the red and green images sequentially rather than simultaneously (as in normal dual-channel confocal imaging), and to use narrow-band emission filters. Single-labelled and unlabelled controls must be used to assess bleed-through.

Intensity Correlation Analysis

This plugin generates Manders' coefficients (see below) as well as performing Intensity Correlation Analysis as described by Li et al. To fully understand this analysis you should read:
Li, Qi, Lau, Anthony, Morris, Terence J., Guo, Lin, Fordyce, Christopher B., and Stanley, Elise F. (2004). A Syntaxin 1, Gαo, and N-Type Calcium Channel Complex at a Presynaptic Nerve Terminal: Analysis by Quantitative Immunocolocalization. Journal of Neuroscience 24, 4070-4081.

It is bundled with WCIF ImageJ and can be downloaded alone here.

reference:

http://www.uhnresearch.ca/facilities/wcif/imagej/colour_analysis.htm

Manders' Coefficient (formerly the "Image Correlator plus" and "Red-Green Correlator" plugins)

This plugin generates various colocalisation coefficients for two 8 or 16-bit images or stacks.

The plugins generate a scatter plot plus correlation coefficients. In each scatter plot, the first image (channel 1) is represented along the x-axis and the second image (channel 2) along the y-axis. The intensity of a given pixel in the first image is used as the x-coordinate of the scatter-plot point, and the intensity of the corresponding pixel in the second image as the y-coordinate.

The intensities of each pixel in the "Correlation Plot" image represent the frequency of pixels that display that particular red/green combination of values. Since most of your image will probably be background, the highest-frequency pixels will have low intensities, so the brightest pixels in the scatter plot are in the bottom left-hand corner, i.e. x ≈ 0, y ≈ 0. The intensities in the "Red-Green correlation plot" image represent the actual colour of the pixels in the image.
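
As an illustration, such a frequency ("Correlation Plot") image can be built as a 256×256 two-dimensional histogram of the paired intensities. The sketch below assumes two 8-bit input channels and is not the plugin's actual source.

import ij.ImagePlus;
import ij.process.FloatProcessor;
import ij.process.ImageProcessor;

public class ScatterPlotSketch {
    public static ImagePlus correlationPlot(ImageProcessor ch1, ImageProcessor ch2) {
        float[] counts = new float[256 * 256];
        for (int y = 0; y < ch1.getHeight(); y++) {
            for (int x = 0; x < ch1.getWidth(); x++) {
                int i = ch1.getPixel(x, y) & 0xff;  // x-axis: channel 1 intensity
                int j = ch2.getPixel(x, y) & 0xff;  // y-axis: channel 2 intensity
                counts[(255 - j) * 256 + i]++;      // flip y so the origin is bottom-left
            }
        }
        return new ImagePlus("Correlation Plot",
                new FloatProcessor(256, 256, counts, null));
    }
}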

Mito-DsRed; ER-EGFP

Pearson's correlation (Rr) = 0.34
Overlap coefficient (R) = 0.40
Nred ÷ Ngreen pixels = 0.66
Colocalisation coefficient for red (Mred) = 0.96
Colocalisation coefficient for green (Mgreen) = 0.49

TMRE (red) plus Mito-pericam (green)

Pearson's correlation (Rr) = 0.93
Overlap coefficient (R) = 0.94
Nred ÷ Ngreen pixels = 0.93
Colocalisation coefficient for red (Mred) = 0.99
Colocalisation coefficient for green (Mgreen) = 0.98

Both plugins generate various colocalisation coefficients: Pearson's (Rr), Overlap (R) and Colocalisation (M1, M2). See Manders, E.M.M., Verbeek, F.J. & Aten, J.A. (1993) 'Measurement of co-localization of objects in dual-colour confocal images', J. Microscopy, 169, 375-382, and the 'Colocalisation' tutorial sheet for details. The threshold is also reported (0,0 means no threshold was used).
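
As a rough guide to what these numbers measure, here is a hedged Java sketch of Pearson's Rr and a Manders coefficient, following the formulas in Manders et al. (1993); it is an illustration, not the plugins' own code.

public class CoefficientsSketch {
    // r[i] and g[i] are the intensities of corresponding pixels in the two channels.
    public static double pearson(int[] r, int[] g) {
        double mr = 0, mg = 0;
        for (int i = 0; i < r.length; i++) { mr += r[i]; mg += g[i]; }
        mr /= r.length;
        mg /= g.length;
        double num = 0, dr = 0, dg = 0;
        for (int i = 0; i < r.length; i++) {
            num += (r[i] - mr) * (g[i] - mg);
            dr  += (r[i] - mr) * (r[i] - mr);
            dg  += (g[i] - mg) * (g[i] - mg);
        }
        return num / Math.sqrt(dr * dg);
    }

    // Manders M1: the fraction of total channel-1 intensity found in pixels
    // where channel 2 is above zero. Swap the arguments to obtain M2.
    public static double manders(int[] r, int[] g) {
        double coloc = 0, total = 0;
        for (int i = 0; i < r.length; i++) {
            total += r[i];
            if (g[i] > 0) coloc += r[i];
        }
        return coloc / total;
    }
}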

Colocalisation Test

When a coefficient is calculated for two images, it is often unclear quite what this means, in particular for intermediate values. This raises the following question: how does this value compare with what would be expected by chance alone?

There are several approaches that can be used to compare an observed coefficient with the coefficients of randomly generated images. Van Steensel (3) adopted an approach where the observed colocalisation between channel 1 and channel 2 was compared to the colocalisation between channel 1 and a number of channel-2 images that had been translated (i.e. displaced by a number of pixels) in increments along the image's X-axis. Fay et al. (4) extended this approach by translating channel 2 in 5-pixel increments along the X- and Y-axes (i.e. –10, –5, 0, 5, and 10) and by ±1 slice in the Z-axis. This results in 74 randomisations (plus the one original channel 2). The observed correlation was compared to these 74 and considered significant if it was greater than 95% of them.

Costes et al. (5) subsequently adopted a different approach, based on "scrambling" channel 2. The original channel-1 image was compared to 200 "scrambled" channel-2 images; the observed correlation between channel 1 and channel 2 was considered significant if it was greater than 95% of the correlations between channel 1 and the scrambled channel 2s.

Costes’ scrambled images were generated by randomly rearranging blocks of the channel-2 image. The size of these blocks was chosen to equal the point spread function (PSF) of the image.

An approximation of Costes' approach is used by Bitplane's Imaris and also by the Colocalisation Test plugin. In Imaris, a white-noise image is smoothed with a Gaussian filter the width of the image's PSF. The Colocalisation Test plugin generates a randomized image by taking random pixels from the channel-2 image and then smoothing it with a Gaussian filter, again the width of the image's PSF.

The Colocalisation Test plugin calculates Pearson’s correlation coefficient for the two selected channels (Robs) and compares this to Pearson’s coefficients for channel 1 against a number of randomized channel-2 images (Rrand).
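The logic of such a randomisation test can be sketched as follows. This is an approximation written against the ImageJ API, not the plugin's source; sigmaPsf (the PSF width in pixels) is an assumed input that must come from your own system.

import ij.plugin.filter.GaussianBlur;
import ij.process.ImageProcessor;
import java.util.Random;

public class CostesTestSketch {
    // Pearson's correlation over two grayscale processors of equal size.
    static double pearson(ImageProcessor a, ImageProcessor b) {
        int n = a.getWidth() * a.getHeight();
        double ma = 0, mb = 0;
        for (int i = 0; i < n; i++) { ma += a.getf(i); mb += b.getf(i); }
        ma /= n;
        mb /= n;
        double num = 0, da = 0, db = 0;
        for (int i = 0; i < n; i++) {
            double x = a.getf(i) - ma, y = b.getf(i) - mb;
            num += x * y; da += x * x; db += y * y;
        }
        return num / Math.sqrt(da * db);
    }

    // One randomisation round: scramble the channel-2 pixels, then smooth
    // with a Gaussian the width of the image's PSF (sigmaPsf is assumed).
    static ImageProcessor scramble(ImageProcessor ch2, double sigmaPsf, Random rng) {
        ImageProcessor r = ch2.duplicate();
        int n = r.getWidth() * r.getHeight();
        for (int i = n - 1; i > 0; i--) {          // Fisher-Yates shuffle
            int j = rng.nextInt(i + 1);
            float t = r.getf(i);
            r.setf(i, r.getf(j));
            r.setf(j, t);
        }
        new GaussianBlur().blurGaussian(r, sigmaPsf, sigmaPsf, 0.002);
        return r;
    }

    // Fraction of randomised images whose correlation reaches the observed
    // Robs; the observed value is deemed significant when this is below 0.05.
    public static double fractionAtLeastObserved(ImageProcessor ch1, ImageProcessor ch2,
                                                 double sigmaPsf, int rounds) {
        double robs = pearson(ch1, ch2);
        Random rng = new Random();
        int atLeast = 0;
        for (int k = 0; k < rounds; k++)
            if (pearson(ch1, scramble(ch2, sigmaPsf, rng)) >= robs) atLeast++;
        return (double) atLeast / rounds;
    }
}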

– Colocalization image processing with Matlab:

doi:10.1016/S0169-2607(03)00071-3

Automated high through-put colocalization analysis of multichannel confocal images

M. Kreft , I. Milisav , M. Potokar and R. Zorec

Lab. Neuroendocrinology-Molecular Cell Physiology, Inst. Pathophysiology, Medical Faculty, Zaloska 4, 1000 Ljubljana and Celica Biomed. Sciences Center, Stegne 21, 1000, Ljubljana, Slovenia

accepted 20 April 2003.

Available online 15 July 2003.

We developed a simple tool using Matlab to automate the colocalization procedure and to exclude the biased estimations resulting from visual inspection of images. The script, in Matlab language code, automatically imports confocal images and converts them into arrays. The contrast of all images is uniformly set by linearly reassigning the values of pixel intensities to use the full 8-bit range (0–255). Images are binarized at several threshold levels. The area above a given threshold level is summed for each channel of the image and for the colocalized regions. As a result, the count of pixels above several threshold levels in any number of images is saved in an ASCII file. In addition, Pearson's r correlation coefficient is calculated for the fluorescence intensities of both confocal channels. Using this approach, quick quantitative analysis of the colocalization of hundreds of images is possible. Moreover, such an automated procedure is not biased by the examiner's subjective visual assessment.
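
The abstract describes a simple, scriptable pipeline: stretch each channel to the full 8-bit range, binarize at several thresholds, and count above-threshold pixels per channel and in the overlap. A rough Java transcription of those steps (not the authors' Matlab code) might look like this:

public class KreftPipelineSketch {
    // Linearly rescale intensities to the full 8-bit range (0-255);
    // assumes the image is not perfectly flat (max > min).
    static double[] stretch(double[] px) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (double v : px) { min = Math.min(min, v); max = Math.max(max, v); }
        double[] out = new double[px.length];
        for (int i = 0; i < px.length; i++)
            out[i] = 255.0 * (px[i] - min) / (max - min);
        return out;
    }

    // For each threshold level, count pixels above it in each channel and
    // in both ("colocalized"); one row per threshold, ready for an ASCII file.
    static long[][] countsAboveThresholds(double[] ch1, double[] ch2, int[] levels) {
        long[][] counts = new long[levels.length][3];   // {ch1, ch2, coloc}
        for (int t = 0; t < levels.length; t++) {
            for (int i = 0; i < ch1.length; i++) {
                boolean a = ch1[i] > levels[t], b = ch2[i] > levels[t];
                if (a) counts[t][0]++;
                if (b) counts[t][1]++;
                if (a && b) counts[t][2]++;
            }
        }
        return counts;
    }
}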

kreft-2004-colocalization

ImageJ import/export file formats

2. Importing Image Files

ImageJ primarily uses TIFF as the image file format. The menu command “File/Save” will save in TIFF format. The menu command “File/Open” will open TIFF files and import a number of other common file formats (e.g. JPEG, GIF, BMP, PGM, PNG). These natively supported files can also be opened by drag-and-dropping the file on to the ImageJ toolbar. MetaMorph *.STK files can also be opened directly.

Several more file formats can be imported via ImageJ plugins (e.g. Biorad, Noran, Zeiss, Leica). When you subsequently save these files within ImageJ they will no longer be in their native format. Bear this in mind; ensure you do not overwrite original data.
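
A simple way to honour that advice is to save a working copy under a new name immediately after import; a small sketch (the paths here are hypothetical):

import ij.IJ;
import ij.ImagePlus;
import ij.io.FileSaver;

public class SaveCopySketch {
    public static void main(String[] args) {
        // Hypothetical path; IJ.openImage uses whichever reader ImageJ has
        // for the file type.
        ImagePlus imp = IJ.openImage("/data/experiment1/cells.pic");
        // Save as TIFF under a new name so the original file stays untouched.
        new FileSaver(imp).saveAsTiff("/data/experiment1/cells_copy.tif");
    }
}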

There are further file formats, such as PNG, PSD (Photoshop), ICO (Windows icon) and PICT, which can be imported via the menu command "File/Import/*.PNG…" (the Jimi Reader plugin).

2.1 Importing Zeiss LSM files

Files acquired on the Zeiss confocal can be opened directly (with the "Handle Extra File Types" plugin installed) via the "File/Open" menu command, or by dropping them on the ImageJ toolbar. They can also be imported via the "Zeiss LSM Import Panel", which is activated by the menu command "File/Import/*.LSM". This plugin has the advantage of being able to access extra image information stored with the LSM file, but it takes an extra mouse click.

Images are opened as 8-bit colour images with the "no-palette" pseudocolour (!) from the LSM acquisition software. Each channel is imported as a separate image/stack. Lambda stacks are therefore imported as multiple images, not a single stack. They can be converted to a stack with the menu command "Image/Stacks/Convert Images to Stack".

Once opened, the file information can be accessed, and the z/t/lambda information can be irreversibly stamped into the images or exported to a text file.

2.2 Importing Zeiss ZVI files

ZVI files can be imported via the menu command "File/Import/*.ZVI". The files are opened as a single stack with the different channels interleaved. The channels can be separated with the "Plugins/Stacks-Shuffling/DeInterleave" plugin.
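
De-interleaving itself is simple to express in code; a minimal sketch for a stack whose slices alternate between two channels (an illustration against the ImageJ API, not the plugin's source):

import ij.ImagePlus;
import ij.ImageStack;

public class DeinterleaveSketch {
    // Split a stack whose slices alternate ch1, ch2, ch1, ch2, ...
    public static ImagePlus[] deinterleave(ImagePlus imp) {
        ImageStack in = imp.getStack();
        ImageStack c1 = new ImageStack(imp.getWidth(), imp.getHeight());
        ImageStack c2 = new ImageStack(imp.getWidth(), imp.getHeight());
        for (int i = 1; i <= in.getSize(); i++)        // stack slices are 1-based
            (i % 2 == 1 ? c1 : c2).addSlice("", in.getProcessor(i));
        return new ImagePlus[] { new ImagePlus("ch1", c1), new ImagePlus("ch2", c2) };
    }
}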

2.3 Importing Noran SGI files

Noran movies can be opened in several ways:

  • File/Import/Noran movie… opens the entire movie as an image stack.
  • File/Import/Noran Selection… allows you to specify a range of frames to be opened as a stack.

The Noran SGI plugins are not bundled with the ImageJ package. To receive them, please contact tonyc@uhnresearch.ca or their author, Greg Joss, so he can keep track of users. Greg Joss gjoss AT bio.mq.edu.au is in the Dept of Biology, Macquarie University, Sydney, Australia.

2.4 Importing Biorad PIC files

Biorad PIC files can now be imported directly via the menu command "File/Open". Experimental information, calibration and other useful information can be accessed via "Image/Show Info". Biorad PIC files can also be opened by drag-and-dropping the file onto the ImageJ toolbar. The PIC file is opened with the same LUT with which it was saved in the original acquisition software.

2.5 Importing multiple files from folder

Each time point of an experiment acquired with software such as Perkin Elmer’s UltraVIEW or Scion Image’s time lapse macro is saved by the acquisition software as a single TIF file. The experimental sequence can be imported to ImageJ via the menu command “File/Import/Image Sequence…”.

Locate the directory, click on the first image in the sequence and OK all dialogs. (You may get a couple of error messages while ImageJ tries to open any non-image files in the experimental directory.) The stack will "interleave" the multiple channels you recorded; it can be de-interleaved via "Plugins/Stacks – Shuffling/DeInterleave".

Selected images that are not the same size can be imported as individual image windows using "File/Import/Selected files to open…" or as a stack with "File/Import/Selected files for stack…". Unlike the "File/Import/Image Sequence…" function, the images need not be of the same dimensions. If memory is limited, stacks can be opened as virtual stacks, with most of the stack remaining on disk until it is required ("File/Import/Disk based stack").

2.6 Importing Multi-RAW sequence from folder

To form an image, ImageJ needs to know the image dimensions, the bit depth, the number of slices per file and any extraneous information in the file format (offset and header size). Usually all you really need to determine is the image dimension in x and y; these values should be obtainable from the software in which the images were acquired. Armed with this information, follow these steps:

1. File/Import/Raw…

2. Select experimental directory.

3. Typical values for the dialog box are:

  • Image type = 16-bit Unsigned (or, typically, 8-bit)
  • Width and height as determined earlier
  • Offset = 0, number of images = 1, gap = 0, 'White is zero' = off
  • 'Little-endian byte order' = on, 'Open all files in folder' = on (to open all files in the folder)

Non-image files will also be opened and may appear as blank images; these need deleting ("Image/Stacks/Delete Slice"). The stack will "interleave" the multiple channels you recorded; it can be de-interleaved via "Plugins/Stacks – Shuffling/DeInterleave".
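
If you prefer to script this import, a bare-bones reader for a single 16-bit unsigned, little-endian raw file can be written directly against the ImageJ classes. This is a sketch: width, height and offset must match the values from your acquisition software, as above.

import ij.ImagePlus;
import ij.process.ShortProcessor;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class RawReaderSketch {
    public static ImagePlus readRaw16(String path, int width, int height,
                                      int offset) throws Exception {
        byte[] buf = new byte[width * height * 2];
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            in.skipBytes(offset);                      // skip any header
            in.readFully(buf);
        }
        short[] pixels = new short[width * height];
        ByteBuffer.wrap(buf).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(pixels);
        return new ImagePlus(path, new ShortProcessor(width, height, pixels, null));
    }
}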

2.7 Importing AVI and MOV files

There are two plugins which can open uncompressed AVIs and some types of MOV file.

For opening (and writing) QuickTime you need a custom installation of QuickTime to include QT for Java (see section 1.3). QuickTime movies are then opened via “File/Import/*.MOV ”.

Uncompressed AVIs can be opened via “File/Import/*.AVI ”.

2.8 Importing Scanalytics IPLab IPL files

IPLab files can be imported directly via "File/Import/*.IPL". This allows Windows IPLab files to be opened directly with the "File/Open" menu command, or by drag-and-drop.

The spatial calibration should be imported correctly from your IPLab software.

2.9 Importing Leica SP2 LEI series

Leica SP2 experiments are saved as multiple TIFF files in a single folder, which can contain many different series acquired during the one experiment. Along with the many TIFFs, the folder also contains a text description in a *.TXT file and a Leica proprietary *.LEI file.

Double-clicking, drag-and-dropping or "File/Open"ing the *.LEI file should run the Leica TIFF Sequence plugin. Alternatively, run the menu command "File/Import/*.TXT Leica SP2 series" and select the experiment's TXT file in the open dialog.

A second dialog will open, listing the names of the series in the folder; the user can then select those that are to be opened. The appropriate spatial calibration should be read from the TXT file and applied to the image. Leica 'Snapshots' do not have spatial calibration saved with them. The entry in the TXT file for the series is written to the 'Notes' for the image and can be accessed by the menu command "Image/Show Info…".

Folders containing large numbers of series could potentially generate a dialog so large that some names are "off screen". The maximum number of series names per column can be set by running the plugin with the alt key down.

2.10 Other Import functions

These import plugins import the image data as well as meta-data.

Leica SP – Leica multi-colour images are TIFFs. They can be opened as multiple files into a single stack. Each channel can be imported separately by adding 'c1', 'c2', etc. as the import string. Alternatively, they can all be imported into one stack and then separated by de-interleaving ("Plugins/Stacks – Shuffling/DeInterleave").

Olympus Fluoview – available from http://rsb.info.nih.gov/ij/plugins/ucsd.html. Not bundled with the current download.

Animated GIF – This plugin opens an animated GIF file as an RGB stack. It also opens single GIF images.

File/Import/ICS,IDS – Image Cytometry Standard file format, from Nico Stuurman.

File/Import/*.DV – *.DV files generated on a DeltaVision system, from Fabrice Cordelieres.

File/Import/*.ND – *.ND files created with MetaMorph's 'Multidimensional Acquisition', from Fabrice Cordelieres.

Leica SP2 TIFF sequence – resources

formats:

  • tiff
  • volume tiff
  • raw
  • avi

———————————–

ImageJ plug-in

This plugin opens multi-TIFF series acquired with a Leica SP2 confocal.

Run the plugin, select the TXT file associated with the TIFF series, and choose the series to be opened. This information is then passed to ImageJ's native "Import Sequence" function.

Optionally, the channels can be split.

The plugin should apply the spatial calibration found in the TXT file.


plug-in download:

http://rsbweb.nih.gov/ij/plugins/leica-tiff.html

2006/02/16: First version
2006/03/02: Fixed error arising from series with similar names containing spaces; errors arising from images with Gray LUT.
2006/03/20: Filenames listed in multiple columns (max number of rows per column can be set by running the plugin with the alt key down).

—– Author

Author: Tony Collins (tonyc at uhnresearch.ca)
Wright Cell Imaging Facility, Toronto, Canada

http://www.uhnresearch.ca/facilities/wcif/software/Plugins/LeicaTIFF.html

Save to the plugins folder; compile and run the plugin.
The compiled version is bundled with the latest WCIF ImageJ bundle, along with a modified HandleExtraFileTypes.class (courtesy of Greg Jefferis) to allow double-clicking the *.LEI file to open the sequence.

Code for HandleExtraFileTypes.java from Greg:

//  Leica SP confocal .lei file handler
//  (a fragment for insertion into HandleExtraFileTypes: 'name', 'directory',
//  'path', 'width' and IMAGE_OPENED are fields/constants of that class)
        if (name.endsWith(".lei")) {
            int dotIndex = name.lastIndexOf(".");
            if (dotIndex>=0)
                name = name.substring(0, dotIndex);
            path = directory+name+".txt";
            File f = new File(path);
            if(!f.exists()){
                IJ.error("Cannot find the Leica information file: "+path);
                return null;
            }
            IJ.runPlugIn("Leica_TIFF_sequence", path);
            width = IMAGE_OPENED;
            return null;
        }

———————- Huygens Software reads and writes

The Huygens Software reads and writes (among other file formats) TIFF series with Leica-style numbering when there are more channels (different wavelengths), slices or frames (in a time series) than in a simple numbered TIFF series.

An image of four slices and two frames is named with Leica-style numbering as follows:

c_t00_z000.tif
c_t00_z001.tif
c_t00_z002.tif
c_t00_z003.tif
c_t01_z000.tif
c_t01_z001.tif
c_t01_z002.tif
c_t01_z003.tif

And an image named sTCh, with four slices, three frames and two channels:

sTCh_t00_z000_ch00.tif
sTCh_t00_z000_ch01.tif
sTCh_t00_z001_ch00.tif
sTCh_t00_z001_ch01.tif
sTCh_t00_z002_ch00.tif
sTCh_t00_z002_ch01.tif
sTCh_t00_z003_ch00.tif
sTCh_t00_z003_ch01.tif
sTCh_t01_z000_ch00.tif
sTCh_t01_z000_ch01.tif
sTCh_t01_z001_ch00.tif
sTCh_t01_z001_ch01.tif
sTCh_t01_z002_ch00.tif
sTCh_t01_z002_ch01.tif
sTCh_t01_z003_ch00.tif
sTCh_t01_z003_ch01.tif
sTCh_t02_z000_ch00.tif
sTCh_t02_z000_ch01.tif
sTCh_t02_z001_ch00.tif
sTCh_t02_z001_ch01.tif
sTCh_t02_z002_ch00.tif
sTCh_t02_z002_ch01.tif
sTCh_t02_z003_ch00.tif
sTCh_t02_z003_ch01.tif
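
Parsing such names back into frame/slice/channel indices is straightforward; a small sketch (the regular expression is simply an assumption based on the patterns shown above):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LeicaNameSketch {
    // Matches names like "sTCh_t01_z003_ch00.tif"; the channel part is
    // optional, as in "c_t00_z002.tif".
    static final Pattern P = Pattern.compile("_t(\\d+)_z(\\d+)(?:_ch(\\d+))?\\.tif$");

    // Returns {frame, slice, channel} (channel 0 if absent), or null.
    public static int[] parse(String name) {
        Matcher m = P.matcher(name);
        if (!m.find()) return null;
        int ch = m.group(3) == null ? 0 : Integer.parseInt(m.group(3));
        return new int[] { Integer.parseInt(m.group(1)),
                           Integer.parseInt(m.group(2)), ch };
    }
}
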
------------

----------------------- some resources from imaging centres
http://www-ijpb.versailles.inra.fr/fr/lcc/fichiers/equip-sp2.htm
http://www-ijpb.versailles.inra.fr/fr/lcc/fichiers/pdf/instruction-utilisation-SP2.pdf

http://www.itg.uiuc.edu/ms/equipment/microscopes/lscm.htm
Leica LCS Lite

http://microscopy.unc.edu/How-to/leicasp2/default-viewing.html

http://ijm2.ijm.jussieu.fr/imagerie/fichiers

MRIcro and other freeware for surface or volume rendering

Introduction

This page describes how to create volume renderings using my free MRIcro software. You can learn a lot about the brain by viewing axial, coronal and sagittal slices. However, it is often useful to display lesion location on the rendered surface of the brain. Just as each individual has unique fingerprints, each brain has a unique sulcal pattern. SPM's spatial normalization adjusts the size and alignment of the MRI scan, but it does not deliver a precise sulcal match; such an algorithm would create many local distortions (just as an algorithm that attempted to normalize fingerprints from individuals with very different patterns would require many distortions). Volume rendering allows the viewer to grasp the sulcal pattern of the brain and see lesions in relation to common landmarks. A second advantage of displaying a lesion on a rendered image is that you can specify how 'deep' to search beneath the surface to display a lesion, and in this way you can show the cortical damage without the underlying damage to the white matter. Note that showing only lesions near the brain's surface can also be misleading, as it can hide deeper damage. Therefore, it is often best to present surface renderings in conjunction with stereotactically aligned slices.

There are two popular ways to render objects in 3D. Surface rendering treats the object as having a surface of a uniform colour. In surface rendering, shading is used to show the location of a light source – with some regions illuminated while other regions are darker due to shadows. VolumeJ is an excellent example of a surface renderer. The benefit of surface rendering is that it is generally very fast – you only need to manipulate the points on the surface rather than every single voxel.

On the other hand, Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set.

A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner. Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel represented by a single value that is obtained by sampling the immediate area surrounding the voxel.

To render a 2D projection of the 3D data set, one first needs to define a camera in space relative to the volume. Also, one needs to define the opacity and color of every voxel. This is usually defined using an RGBA (for red, green, blue, alpha) transfer function that defines the RGBA value for every possible voxel value.
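As a concrete illustration of those two ingredients, the sketch below builds a simple RGBA transfer function for 8-bit voxel values and composites the samples met along one ray, front to back. The colour ramp is arbitrary, chosen only for the example.

public class TransferFunctionSketch {
    // One RGBA entry per possible 8-bit voxel value; dim voxels fade out.
    public static float[][] rampTransferFunction() {
        float[][] rgba = new float[256][4];
        for (int v = 0; v < 256; v++) {
            float f = v / 255f;
            rgba[v][0] = f;           // red rises with intensity
            rgba[v][1] = f * 0.8f;    // green
            rgba[v][2] = f * 0.5f;    // blue
            rgba[v][3] = f * f;       // opacity: quadratic ramp hides dim voxels
        }
        return rgba;
    }

    // Front-to-back alpha compositing of the voxel values along one ray.
    public static float[] compositeRay(int[] samples, float[][] tf) {
        float r = 0, g = 0, b = 0, alpha = 0;
        for (int s : samples) {
            float[] c = tf[s];
            float w = (1 - alpha) * c[3];   // remaining transparency times opacity
            r += w * c[0];
            g += w * c[1];
            b += w * c[2];
            alpha += w;
            if (alpha > 0.99f) break;       // early termination: ray is opaque
        }
        return new float[] { r, g, b, alpha };
    }
}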

A volume may be viewed by extracting surfaces of equal values from the volume and rendering them as polygonal meshes or by rendering the volume directly as a block of data. The Marching Cubes algorithm is a common technique for extracting a surface from volume data.

Volume rendering, in contrast, examines the intensity of the objects, so darker tissues (e.g. sulci) appear darker than brighter tissue. Finally, hybrid rendering allows the user to combine these two techniques, first computing a volume rendering and then highlighting regions based on illumination. For example, my software can create volume, surface or combined volume-and-surface renderings. The images created by MRIcro below illustrate these techniques: the left-most image is a volume rendering, the middle image is a surface rendering, and the image on the right is a hybrid.

Surface rendering is by far the most popular approach to rendering objects. One reason for this is that surface rendering can be much quicker than volume rendering (only the vertices need to be recomputed following a rotation, while in volume rendering every voxel must be recomputed). However, surface rendering typically has several disadvantages compared to volume rendering:

  • It requires a high-quality scan and excellent skull extraction to show clean edges.
  • Surface color does not reflect underlying tissue (unless texture maps are used).
  • To get sharp edges, the gray matter is typically eroded, creating an inaccurate image of brain size.

The third point is illustrated by the image on the right. Note that a high air/surface threshold is required to show nice sulcal definition in the surface renderings; however, at these high values the gray matter has been stripped from the image. In contrast, low-signal information enhances the definition of sulci in volume renderings.

Most rendering tools add perspective: closer items appear larger than more distant items. Perspective is a strong monocular cue for depth, so this technique creates a powerful illusion of depth. However, in some situations it is helpful to have perspective-free images ('orthographic rendering'). Orthographic rendering has a couple of advantages: it can be quicker, and it can allow a region to remain a constant size in different views. MRIcro is an orthographic renderer.

One word of caution: be careful about judging left and right in rendered views. My software retains the left and right of the image. This is particularly confusing in the coronal view when the head is facing toward you. When we see people facing us, we expect their left to be on our right. With my viewer, their left is on your left.

Installing MRIcro

The main MRIcro manual describes how to download and install MRIcro. The software comes with a sample image of the brain.

Creating a rendered image

Rendering an image with MRIcro is very straightforward. Load the image you wish to view (use the 'Open' command in the File menu), then press the button labelled '3D'. You will see a new window that allows you to adjust the air/surface threshold (which selects the minimum brightness that will be counted as part of your volume) as well as the surface depth (how many voxels beneath the surface are averaged to determine the surface intensity).

Most rendering tools require very high quality scans (such as the MRI scan above, which came from a single individual who was scanned 27 times). As noted earlier, surface rendering tools in particular have problems with the low-resolution or average-contrast images that are typically found in the clinic. Fortunately, MRIcro works very well with clinical-quality scans. The figures on the right come from single fast scans on a clinical scanner (the whole MPRAGE sequence required 6 minutes with a Siemens 1.5T). The image on the left shows a stroke patient.

The images above on the right show a 'free rotation' with a cutaway through the skull. When you select the free-rotation view, a new set of controls appears that allows you to select the azimuth and elevation of your viewpoint. A 3D cube illustrates the selected viewpoint, with a crosshair depicting the position of the nose. You can also use the mouse to drag the cube to your desired viewpoint. Finally, the 'free rotation' selection allows you to select a 'cutout' region so you can view inside the surface of the image.

Yoking Images

MRIcro is specifically designed to help the user locate and identify the ridges and folds (gyri and sulci) of the human brain. These are often difficult to identify given 2D slices of the brain. By running two copies of my software, you can select a landmark on a rendered view and see the corresponding location on a 3D image (as shown below, note you need to have ‘Yoke’ checked [highlighted in green]).

Overlays

MRIcro can also display Overlay images – the figures below illustrate overlays. Typically, these are statistical maps generated by SPM, VoxBo or Activ2000 which show functional regions (computed from PET, SPECT or fMRI scans). I have a web page dedicated to loading overlays with MRIcro.

For volume rendering, you must then make sure the ‘Overlay ROI/Depth’ checkbox is checked. The number next to this check box allows you to specify how deep beneath the surface the software will look for a ROI or Overlay (in voxels). A small value means you will only see surface cortical activations/lesions, while a large value will allow you to see deep activations. For example, in the image on the left below, a skull-stripped brain image was loaded and then a functional map was overlaid.

Left: functional results can be overlaid. Adjusting the depth value allows you to visualise surface or deep activity/lesions.

It is important to mention that MRIcro’s renderings of objects below the brain’s surface are viewpoint dependent. This is illustrated in the figures below. Both regions of interest (lesions) and overlays are mapped based on the viewer’s line of sight. This is very different from mri3dX, which computes the location of objects based on the surface normal (essentially, mri3dX computes a line of sight perpendicular to the plane of the surface). Each of these approaches has its benefits and costs – both are correct, but lead to different results. Note that the location of subcortical objects appears to move when viewpoint changes in MRIcro. On the other hand, deep objects will appear greatly magnified with mri3dX. The rendered image of a brain (above, right) illustrates this difference. Here MRIcro is showing a very deep lesion near the center of the brain. The lesion appears at a different location in each image (SPMers call this a ‘glass brain’ view). A very deep object like this would appear much larger in mri3dX.

Brain Extraction

In order to create high-quality images of the brain's surface, you need to strip away the scalp. This is a challenging problem. My software comes with Steve Smith's automated Brain Extraction Tool [BET] (for citations: Smith, SM (2002) Fast robust automated brain extraction, Human Brain Mapping, 17, 143-155). BET is usually able to extract brain images accurately and effectively. To use BET, you simply click on 'Skull strip image [for rendering]' from the 'Etc' menu. You then select the image you want to convert and give a name for the new stripped image. If you have trouble brain-stripping an image, try these techniques:

  • You can adjust BET’s fractional intensity threshold. The default is 0.50. Lower numbers lead to smoother brains and a larger estimate of brain size.
  • BET starts stripping an image from the center of "gravity" of the volume (think of intensity as mass). If the COG of your head image is not near the centre of the brain, you may have a problem. This is particularly a problem with clinical images that show large portions of the neck. To clip excess slices from an image, choose MRIcro’s ‘Save as [clipped/format]‘ function and set the number of low and high slices you wish to clip (clipping is described in stage 1, Step 1 of my normalization tutorial).

Object Extraction

The Brain Extraction Tool described in the previous section is useful for removing a brain from the surrounding scalp. But what about removing other objects? For example, BET will not work if you wish to extract the image of a torso from surrounding speckle noise. Most scans show a bit of 'speckle' in the air surrounding an object (also known as 'salt and pepper' noise). Most of the time, you can simply adjust MRIcro's 'air/surface' threshold to eliminate noise. However, sometimes spikes of noise are impossible to eliminate without eroding the surface of the object you want to image. To help, MRIcro (1.36 or later) includes a tool to eliminate air speckles.

  • Load the image and adjust the contrast (often choosing ‘Contrast autobalance’ [Fn5] does a good job, but this often does not work for CT scans).
  • Choose ‘Remove air speckles’ from the ‘Etc’ menu.
  • Adjust the 'Air/Surface threshold' so that most noise appears as isolated pixels, while the object you want to extract is virtually all green.
  • Set the Erode and Dilate cycles – usually values of 2-3 for each are about right.
  • Press ‘Go’ and name your new file.

To understand the settings of this command, consider the stages MRIcro uses to despeckle an image. First, consider an image with a few air speckles (figure A below; note a few red speckles in the air). MRIcro first smooths the image by finding the mean of each voxel and the 6 voxels that share a surface with it (B). This usually attenuates any speckles (a median filter would be better, but is much slower). Second, only voxels brighter than the user-specified threshold are included in a mask (C). Third, a number of erosion passes are conducted (D). During each pass, a voxel is eroded if 3 or more of its immediate neighbors are not part of the mask. Fourth, the mask is grown for the number of dilate cycles (E). During each dilation pass, a voxel is grown if any of its immediate neighbors is part of the mask. Note that any cluster of voxels completely eliminated during erosion does not regrow, thus eliminating most noise speckles. Finally, the voxels from the original image are inserted into the masked region (F).
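
The erosion and dilation passes follow simple neighbour rules; here is a sketch of those rules as stated above (an illustration, not MRIcro's source code):

public class DespeckleSketch {
    static final int[][] FACE = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};

    // Count how many of the 6 face-neighbours of (x,y,z) lie inside the mask.
    static int neighbours(boolean[][][] m, int x, int y, int z) {
        int n = 0;
        for (int[] d : FACE) {
            int px = x + d[0], py = y + d[1], pz = z + d[2];
            if (px >= 0 && px < m.length && py >= 0 && py < m[0].length
                    && pz >= 0 && pz < m[0][0].length && m[px][py][pz]) n++;
        }
        return n;
    }

    // erode = true:  drop a voxel when 3 or more face-neighbours are outside.
    // erode = false: add a voxel when any face-neighbour is inside (dilation).
    static boolean[][][] pass(boolean[][][] m, boolean erode) {
        boolean[][][] out = new boolean[m.length][m[0].length][m[0][0].length];
        for (int x = 0; x < m.length; x++)
            for (int y = 0; y < m[0].length; y++)
                for (int z = 0; z < m[0][0].length; z++) {
                    int n = neighbours(m, x, y, z);
                    out[x][y][z] = erode ? (m[x][y][z] && 6 - n < 3)
                                         : (m[x][y][z] || n > 0);
                }
        return out;
    }
}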

The images to the right show how this technique can be used to extract bone effectively from a CT of an ankle. The left panel shows a standard rendering of the image, using an air/surface threshold that shows the bone and hides the surrounding tissue; note that the ends of the bones are not visible. On the right we see the same image after object extraction (threshold 110, 2 erode cycles, 2 dilate cycles). The extracted rendering appears much clearer than the original.

Maximum Intensity Projections [MIP]

So far this web page has described MRIcro's two primary rendering techniques: volume and surface rendering. However, MRIcro 1.37 and later add a third technique: the maximum intensity projection. This technique simply plots the brightest voxel in the path of a ray traversing the image. The result is a flat-looking image that looks a bit like a 2D plain-film X-ray. This technique is typically a poor choice for most images; the exception is some CT scans and angiograms, where the MIP is able to identify bright objects embedded inside another object. The images at the right show an MR angiogram of my brain (an axial view of my circle of Willis) and a CT scan of a wrist (note the bright metal pin). To create a MIP, simply press the 'MIP' button in MRIcro's render window.
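
The core computation is tiny: for each (x, y) position, keep the maximum along z. Below is a sketch for an axial projection of a grayscale ImageJ stack; note that ImageJ's built-in "Image/Stacks/Z Project…" command with "Max Intensity" does the same job.

import ij.ImagePlus;
import ij.ImageStack;
import ij.process.FloatProcessor;
import ij.process.ImageProcessor;

public class MipSketch {
    // Maximum intensity projection along the z-axis of a grayscale stack.
    public static ImagePlus axialMip(ImagePlus imp) {
        ImageStack st = imp.getStack();
        int w = imp.getWidth(), h = imp.getHeight();
        float[] max = new float[w * h];
        for (int z = 1; z <= st.getSize(); z++) {      // slices are 1-based
            ImageProcessor ip = st.getProcessor(z);
            for (int i = 0; i < w * h; i++)
                max[i] = Math.max(max[i], ip.getf(i));
        }
        return new ImagePlus("MIP", new FloatProcessor(w, h, max, null));
    }
}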

Sample Datasets

Here are some large sample datasets. The CT scan is perfect for surface renderings and allows you to change the air/surface threshold to see either the skin or the bone as the image surface. This image demonstrates that MRIcro's rendering can be equally effective with CT scans. Additional images can be found on the web. Some nice datasets are a knee, a skull (with EEG leads attached), a bonsai tree, an aneurysm, a foot, a lobster, a high-resolution skull, a fish, and a teddy bear. All the images I provide for download here have had the voxels outside the object (typically air) set to zero (this improves file compression).

This CT scan is from the Chapel Hill Volume Rendering Test Dataset. The data is from a General Electric CT scanner at the North Carolina Memorial Hospital. I cropped the image to 208x256x225 voxels. Shift+click here to download (2.7 Mb).
A clinical-quality MRI scan of a healthy individual (1.5T Siemens MPRAGE flip-angle=12-degrees, TR=9.7ms, TE=4.0ms, 1x1x1mm). This image has 180x256x213 voxels. Shift+click here to download (3.4 Mb). Courtesy of Paul Morgan.
This is a 256x256x110 voxel CT of an engine. This is a popular public domain image. Shift+click here to download (1.5 Mb).
As well as CT and MRI scans, MRIcro can also render high resolution images from laser scanning confocal microscopy. The latest versions of MRIcro automatically read BioRad PIC and Zeiss TIF images. Shift+click to download this image of a daisy pollen granule from Olaf Ronneberger (0.5 Mb).
Shift+click to download this MRI scan from a mouse embryo taken 13.5 days post conception (0.5 Mb). For more details, visit the home page for this project.
Shift+click to download this MRI arterial angiogram of my brain's circle of Willis (0.5 Mb). This scan was acquired with a 0.2×0.2×0.5mm resolution, then object-extracted and rescaled to 0.4mm isotropic (to reduce download time). (3.0T Philips FFE flip-angle=20-degrees, TR=32.7ms, TE=3.4ms, '3DI INFL HR'). Courtesy of Paul Morgan (see his web page for movies of angiograms and other MRI scans). Amazingly, there is no artificial contrast. Also, note that the veins have been suppressed.

Acknowledgements

Tom Womack devised the rapid surface-shading algorithm and gave me a lot of tips (he also compiled the version of BET that I distribute). Krish Singh's brilliant mri3dX inspired me; its ability to show viewpoint-independent functional data makes it a complementary tool for visualising the brain. Steve Smith's BET usually turns skull stripping from a tedious effort into an automated process. Earl F. Glynn developed the code for computing a matrix based on azimuth and elevation. This elegant tool allows the user to change their viewpoint without having to worry about gimbal-lock problems or confusing controls. MRIcro does not use or require DirectX or OpenGL; for great articles on 3D graphics, visit www.delphi3d.net.

Other renderers

In addition to MRIcro, a number of freeware programs are available that can display rendered images of the brain (either surface rendering, which treats the surface as a uniform color, or volume rendering, which takes into account the brightness of the material beneath the surface). Both MRIcro and mri3dX include Steve Smith's BET software for extracting the brain from the rest of the image. For the other programs, you will first need to skull-strip your brain images (e.g. with BET or BSE).

  • Activ2000 (Windows) – Activ2000 for Windows can show functional activity on a surface rendering.
  • AMIDE (Linux) – This Linux software can read Analyze, DICOM and ECAT images.
  • ImageJ with the Volume Rendering plugin (Macintosh, Unix, Windows) – Michael Abramoff has added volume rendering features to Wayne Rasband's popular Java-based ImageJ software. ImageJ can read/write Analyze format using Guy Williams' plugin.
  • Java 3D Volume Rendering (Macintosh, Unix, Windows) – Java-based volume renderer.
  • Julius (Unix, Windows) – Volume rendering software.
  • OGLE (Linux, Windows) – Volume rendering of grayscale (continuous intensity) and RGB (discrete colours) datasets. Ogle is a nice tool for rendering MRI/CT scans. For example, to view the clinical MRI scan from my Sample Datasets section, you can double-click on its .ogle text file (the first time you open a .ogle file, you will have to tell Windows that you want to open these files with Ogle). Here is a sample .ogle file for the clinical scan. You can also try a different version of this software named ogleS.
  • MEDAL (Windows) – Reza A. Zoroofi's freeware for Windows can create surface renderings of Analyze and DICOM images.
  • MindSeer (Macintosh, Unix, Windows) – Java-based volume renderer that can overlay statistical maps as well as venous/arterial maps. Can view Analyze, MINC and NIfTI formats.
  • mri3dX (Unix) – Krish Singh's freeware, which runs on Linux, Mac OSX, Sun and SGI computers. A basic mri3dX tutorial is available.
  • Simian (Linux, Windows) – Simian looks like the future of volume rendering. Joe Kniss has developed this software, which can take advantage of powerful but low-cost GeForce and Radeon graphics cards. The translucency approximation looks pretty stunning.
  • Space (Windows) – Lovely interface with nice-looking rendering.
  • Volsh (Unix) – NCAR volume rendering software.
  • VolRenApp (Windows) – Volume and surface rendering.
  • Volume-One (Windows) – This software can also use an extension for viewing diffusion tensor imaging data.
  • V^3 (Windows, Linux, MacOSX) – Accelerated volume rendering (requires a GeForce video card).
  • ImageJ with the Volume Rendering plugin by Kai Uwe Barthel (Macintosh, Unix, Windows) – Kai Uwe Barthel has also added volume rendering features to Wayne Rasband's popular Java-based ImageJ software.
  • Voxx (Windows) – Volume rendering tailored for confocal imaging (requires a GeForce video card, with fast hardware-accelerated rendering). Includes the ability to superimpose different image protocols of the same volume (e.g. with one protein type shown in red and the other shown in green).
  • MITK (Windows) – Medical Imaging ToolKit, developed by Dr. Tian and colleagues.
