4. Tutorial

4.1 Using the GUI: SurfRelax.tcl

The main graphical user interface (GUI) of the SurfRelax package is a Tk/Tcl application, started by typing SurfRelax.tcl in a terminal window. The main window of this application is used to launch various image processing and surface extraction subprocesses, and also contains shortcuts for launching the volume and surface visualization programs VolumeViewer and SurfaceViewer. Each subprocess is accessed via the tabs at the top, arranged in the order the processes are run. In the main window, the Tutorial button opens this document; the Quick reference button displays an abridged text-only version of the tutorial. All SurfRelax windows have a Help button which gives some information about the script it invokes and its usage; the name of the script is shown at the top of the help text. Subprocesses can be run directly from the GUI or, alternatively, the command can be saved to a file for later execution from the command line. The advantage of saving commands and running them later is that it makes troubleshooting easier and allows a series of commands to run in sequence without user intervention.

Command line operation
Since the GUI merely launches the scripts that do the actual work, the individual scripts can also be run as stand-alone applications (i.e. without the GUI). Help for each function is given by typing the script name with no options or with the -help option, i.e. <script name> -help. The names of the scripts performing each subprocess are documented below and are also shown in the help text of the GUI (see above).

The GUI also provides the option of saving commands to a file; this can be a handy way of finding out how to call the scripts from the command line.

4.2 Before you start

Before running SurfRelax, you must make sure that your MR input image is in the correct format. For successful operation, the image must:
1) be T1-weighted with good gray-white image contrast
2) include the entire brain and most of the skull (there should be at least 10 mm of padding between the brain and the edge of the image for optimal results)
3) be in Analyze format
4) be correctly oriented

Image contrast

SurfRelax can handle different types of T1 images, the only requirement being that the gray/white and gray/CSF contrast is good and that white > gray > CSF. The default image contrast is specified in the $TFIDIR/tfiDefaults.tcl file; the predefined choices are MPRAGE and SPGR, or you can use the CUSTOM contrast, which can be fine-tuned to your data. The default setting is MPRAGE. The default contrast determines the default thresholds used by all scripts, reducing the number of parameters users have to provide. The contrast can also be specified separately for each script. An incorrect contrast setting will in all likelihood lead to poor segmentation and bad results. It is a good idea to set the default contrast to match the typical intensity range in your data.

To change default contrast, edit the appropriate sections of the tfiDefaults.tcl file (identified by the text ***CUSTOMISABLE***). Setting the custom contrast requires you to specify the ratio between gray/white, CSF/gray, and CSF/white (note that the three ratios must be consistent with one another!).
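
Since the three ratios are multiplicatively related ((CSF/gray) x (gray/white) = CSF/white), consistency is easy to check. A minimal Python sketch with made-up example ratios (illustration only, not values from tfiDefaults.tcl):

```python
def ratios_consistent(gray_white, csf_gray, csf_white, tol=1e-6):
    # The three contrast ratios must satisfy (CSF/gray) * (gray/white) == CSF/white
    return abs(csf_gray * gray_white - csf_white) < tol

# Hypothetical example ratios (illustration only):
gray_white = 0.66                   # gray matter at ~66% of white matter intensity
csf_gray = 0.37                     # CSF at ~37% of gray matter intensity
csf_white = csf_gray * gray_white   # 0.2442, the only consistent choice

assert ratios_consistent(gray_white, csf_gray, csf_white)
assert not ratios_consistent(gray_white, csf_gray, 0.30)   # inconsistent triple
```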

Converting files to Analyze
SurfRelax does not come with file converters from other formats, e.g. DICOM or proprietary scanner file formats. It is assumed that your data is already in Analyze format. There are, however, a number of ways of converting arbitrary data into Analyze format that SurfRelax can read. If you can convert your image data into ASCII text (one line per voxel), the program asc2image can be used to create an Analyze image. It can also convert a stack of binary PGM files (one file per slice) into an Analyze image. Here’s the help text for asc2image:

TFI: Toolkit for Imaging.
(c) Jonas Larsson & Biomedicon Consulting 2001.
This program converts ASCII text format files into Analyze/VFF file format.
It can also convert raw byte portable graymap files (PGM) (size will be set automatically).
-help          Print this help
-xsize         Array data x size (fastest changing dimension). If not set, will assume input is in PGM format. (0)
-ysize         Array data y size (second fastest changing dimension) (1)
-zsize         Array data z size (second slowest changing dimension) (1)
-tsize         Array data t size (slowest changing dimension) (1)
-xaspect       Array data x aspect (voxel size) (1)
-yaspect       Array data y aspect (voxel size) (1)
-zaspect       Array data z aspect (voxel size) (1)
-taspect       Array data t aspect (voxel size) (1)
-xorigin       Array data x origin (in mm) (0)
-yorigin       Array data y origin (in mm) (0)
-zorigin       Array data z origin (in mm) (0)
-torigin       Array data t origin (in mm) (0)
posarg         Infile (use - for stdin) ()
posarg         Outfile ()

As an example, if you have a file that is 100 x 150 x 200 in the x/y/z dimensions (defined as above), with a voxel size of 1.4 x 3.2 x 2.5 mm, stored as a text file called myfile.asc with 3000000 lines (one line per voxel = 100 x 150 x 200), where each line is a single voxel value in ASCII format, it can be converted into an Analyze image as follows:

$> asc2image -xs 100 -ys 150 -zs 200 -xa 1.4 -ya 3.2 -za 2.5 myfile.asc myfile.img
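
The one-line-per-voxel ordering matters: x is the fastest changing dimension, so (using zero-based indices, an illustration rather than anything stated in the asc2image help) the line number of voxel (x, y, z) is x + xsize*(y + ysize*z). A sketch:

```python
def line_of_voxel(x, y, z, xsize, ysize):
    # x changes fastest, then y, then z (slowest): the order asc2image expects
    return x + xsize * (y + ysize * z)

xsize, ysize, zsize = 100, 150, 200
assert line_of_voxel(0, 0, 0, xsize, ysize) == 0             # first line
assert line_of_voxel(99, 0, 0, xsize, ysize) == 99           # end of first x-row
assert line_of_voxel(99, 149, 199, xsize, ysize) == 2999999  # last line
```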

If your data is in binary format, the standard Unix program od can be used to dump the data in ASCII format, which can then be converted with asc2image. You need to know how the data is stored (float/int, number of bytes per voxel, signed or unsigned), the offset into the file where the voxel data starts, and the size of the image. E.g. assuming that your data is offset by 122 bytes from the beginning of the file, the dimensions are as in the example above, and the values are stored as shorts (note that -N counts bytes, so 3000000 two-byte voxels means -N 6000000, and -s selects 2-byte decimal output), you would write (with GNU od, e.g. on RedHat Linux):

$> od -v -s -j 122 -N 6000000 myfile.someformat | awk '{for (n=2; n<=NF; n++) {print $n}}' > myfile.asc
$> asc2image -xs 100 -ys 150 -zs 200 -xa 1.4 -ya 3.2 -za 2.5 myfile.asc myfile.img
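
If od is unavailable, or you prefer not to juggle byte counts, the same dump can be done with a few lines of Python. This is a sketch under the same assumptions (122-byte offset, signed 2-byte shorts); the little-endian byte order is an assumption, so check how your scanner wrote the data:

```python
import struct

def binary_to_ascii(infile, outfile, offset, nvoxels, code="h", endian="<"):
    # code "h" = signed 2-byte short (see the struct module docs for others);
    # endian "<" = little-endian, an assumption you must verify for your data
    size = struct.calcsize(code)
    with open(infile, "rb") as f:
        f.seek(offset)
        data = f.read(nvoxels * size)
    values = struct.unpack("%s%d%s" % (endian, nvoxels, code), data)
    with open(outfile, "w") as f:
        for v in values:
            f.write("%d\n" % v)

# binary_to_ascii("myfile.someformat", "myfile.asc", 122, 3000000)
```

The resulting myfile.asc can then be fed to asc2image exactly as in the od example above.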

Alternatively, if your image is stored as binary data with no header in signed short, 32-bit integer, or 32-bit floating point format, then it may be sufficient to create an Analyze header to read the file. FSL comes with a utility called avwcreatehd that can be used for this purpose. In order to read the file with SurfRelax, your image data must be renamed with the extension .img and the header must have the same basename as the image data but with extension .hdr. Finally, if you have access to Matlab and can read the file into a Matlab 3D array, you can use the tfiWriteAnalyze.m script (in the $TFIDIR/matlab directory) to convert the file into Analyze format. In Matlab, type help tfiWriteAnalyze for details.
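
If neither FSL nor Matlab is at hand, a minimal Analyze 7.5 header can also be written directly. The sketch below follows the published Mayo/Analyze 7.5 header layout, filling in only the fields most readers require; treat it as a starting point and verify the result in VolumeViewer:

```python
import struct

def write_analyze_header(hdrfile, dims, voxsize, datatype=4, bitpix=16):
    # Minimal little-endian Analyze 7.5 header (348 bytes).
    # datatype: 4 = signed short, 8 = 32-bit int, 16 = 32-bit float.
    hdr = bytearray(348)
    struct.pack_into("<i", hdr, 0, 348)          # sizeof_hdr
    struct.pack_into("<c", hdr, 38, b"r")        # regular = 'r'
    dim = (3, dims[0], dims[1], dims[2], 1, 0, 0, 0)
    struct.pack_into("<8h", hdr, 40, *dim)       # dim[0..7]: rank + axis sizes
    struct.pack_into("<h", hdr, 70, datatype)    # voxel data type
    struct.pack_into("<h", hdr, 72, bitpix)      # bits per voxel
    pixdim = (0.0, voxsize[0], voxsize[1], voxsize[2], 0.0, 0.0, 0.0, 0.0)
    struct.pack_into("<8f", hdr, 76, *pixdim)    # voxel sizes in mm
    with open(hdrfile, "wb") as f:
        f.write(bytes(hdr))

# write_analyze_header("myfile.hdr", (100, 150, 200), (1.4, 3.2, 2.5))
```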

If your data is stored as single slices in Analyze format, you can also use imageconv with the -stack option to stack them into a single volume. Type imageconv -help for more information.

Changing image orientation
It is very important that the input image is in the correct orientation. To check this, once the file is in Analyze format, inspect the image in VolumeViewer. In horizontal view, the front of the brain should be upward, the back downward, true left should be rightward and true right leftward (radiological convention). Sagittal view should display the front of the head leftward, the back rightward, the top upward, and the bottom downward. Coronal view is similar for up and down, and again in radiological convention (left shown on right, right shown on left). If your data does not look right, you need to reorder/reorient the image. To do this, use the program imageconv with the -swapdim option. Suppose your image is in the correct orientation, but flipped left-right. The syntax for flipping would be:

$> imageconv -swapdim "-1 2 3" input.img output.img
The argument to -swapdim is a vector that defines how the dimensions (axes) of the image should be rearranged relative to the original: 1 means the fastest changing dimension, 2 the next fastest, and so on. A negative number means that dimension should be reversed. E.g. if your image is oriented such that x and z are exchanged (i.e. in VolumeViewer you see axial slices in Sagittal view, and sagittal slices in Horizontal view), then you would write

$> imageconv -swapdim "3 2 1" input.img output.img

meaning swap dimensions 1 (=x) and 3 (=z).
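
To see what a given -swapdim vector does, it helps to trace a single voxel. The sketch below mimics the semantics described above (it is an illustration, not the imageconv implementation): output axis i takes its coordinate from input axis |spec[i]|-1, reversed when spec[i] is negative.

```python
def source_coord(out_coord, in_dims, spec):
    # Map an output voxel coordinate back to the input voxel it came from,
    # given a swapdim-style spec such as (-1, 2, 3) or (3, 2, 1).
    in_coord = [None] * len(spec)
    for i, s in enumerate(spec):
        axis = abs(s) - 1
        c = out_coord[i]
        if s < 0:                        # negative entry: axis is reversed
            c = in_dims[axis] - 1 - c
        in_coord[axis] = c
    return tuple(in_coord)

dims = (4, 5, 6)
# "-1 2 3": left-right flip of the fastest (x) dimension
assert source_coord((0, 1, 2), dims, (-1, 2, 3)) == (3, 1, 2)
# "3 2 1": x and z exchanged
assert source_coord((0, 1, 2), dims, (3, 2, 1)) == (2, 1, 0)
```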

Once your data is in the correct format and orientation, you’re ready to start. The following tutorial uses the sample data set provided in the $TFIDIR/examples directory. The sample data set consists of an input MR image (sample.img) and the results of running a full analysis of this brain (surface extraction, inflation, cutting and flattening). By running the tutorial on this data set you can compare your intermediate and final results against the sample data to make sure that you are doing things correctly – if you are, the results should be identical.

To run the tutorial, move into an empty directory and copy the sample input MR image (sample.img and sample.hdr) into this directory. Don’t run the tutorial inside the examples directory, as doing so will overwrite the sample data set.

NB. Always run SurfRelax from within the directory where you want the output results. It is also advisable to store the input files in the same directory.

Start by opening the sample image (a T1 weighted MPRAGE, acquired on a Siemens Allegra 3T scanner) in VolumeViewer:

$> VolumeViewer sample.img &
Set the Max value to 100, change to sagittal view (Image menu or press key ‘S’ with cursor in window) and move to slice 117, and you should see this:

Now launch the main SurfRelax GUI:

$> SurfRelax.tcl &
You’re now ready to begin the tutorial.

4.3 Image preprocessing

Preprocessing is not always necessary, and can be skipped altogether. However, often MR images are quite noisy, and it is relatively easy to improve image quality substantially by denoising the image. Denoising can substantially improve the final surface fit. Preprocessing also allows you to crop the image volume to save disk space and processing time, but beware that too tight cropping can cause problems – make sure there is at least 10 mm of space around the brain in all directions.

To perform denoising and/or cropping, click the Preprocess tab in the main window:

Select the input image: next to the input tab, click Browse and then select "sample.img". Enter a name for the output image (sample_pp.img).

If you want to crop the image, select the Extract image block button and set the min and max values for extraction and specify a pad size (the image will be cropped so that all voxels with values in the min-max range are retained, and then the image will be padded with the specified pad width of zero-valued voxels). WARNING: make sure the extracted image is no wider than 256 voxels in any direction.
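
The cropping described above amounts to finding, along each axis, the smallest range containing all voxels with values in [min, max], and then adding pad zero-valued voxels on each side. A 1D sketch of the range-finding step (an illustration, not the actual script logic):

```python
def crop_range(values, lo, hi):
    # Indices of the first and last samples within [lo, hi] along one axis
    idx = [i for i, v in enumerate(values) if lo <= v <= hi]
    return (min(idx), max(idx)) if idx else None

# Intensity profile along one axis: background near 0, "brain" in the middle
profile = [0, 0, 5, 40, 80, 90, 30, 0, 0]
assert crop_range(profile, 20, 100) == (3, 6)
# After cropping to [3, 6], the axis is padded with `pad` zero voxels on each
# side, so the output length becomes (6 - 3 + 1) + 2 * pad.
```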


If the noise can be assumed to be Gaussian and additive (a reasonable assumption for MR data), then a local Wiener filter can remove much of the noise without blurring the image. This type of filter requires an estimate of the variance of the noise in the image. A reasonable estimate can be obtained by measuring the variance of image intensities outside the skull. This can be done directly in VolumeViewer (see section 5). Alternatively, you can specify a bounding box encompassing a region of "empty" space for measuring the noise variance. You can find the coordinates of any point in the volume by clicking the middle mouse button with the cursor on the image; the coordinates are shown below the image as a coordinate triplet [x y z], prefixed by "WC:" and "AC:". WC means world coordinates, a coordinate system in mm that runs from left to right (x), back to front (y), and down to up (z), with an arbitrary origin. AC refers to a voxel-based image array coordinate system that runs in the same directions, but is in voxel units and has its origin at the center of the lower left bottom corner voxel. The bounding box should be specified in AC coordinates. In the example above, the noise variance has been estimated to be 5. The exact value is not crucial, but too high a value will blur fine structures and should be avoided (and too small a value won't improve image quality).
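
The noise estimate itself is just the variance of the intensities inside the background box; a sketch with synthetic values (the sampled numbers are made up):

```python
import statistics

def noise_variance(background_voxels):
    # Population variance of intensities in an "empty" region outside the head
    return statistics.pvariance(background_voxels)

# Hypothetical intensities sampled from an air region of the volume:
bg = [2, 4, 1, 3, 5, 2, 4, 3]
assert noise_variance(bg) == 1.5
```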

Now run the process by clicking the "Run script" button. You will be prompted to confirm the command to be run. You can choose to run the process in the foreground (which will freeze up the GUI while the script runs) or in the background (which won't). Alternatively, you can save the command to a text file which you can run later. Either way, the GUI will display the command to be run as it would be written on the command line. In the above case, the command would be:

$> ImagePreprocess.tcl -noise 5 -wiener -x0 -1 -y0 -1 -z0 -1 -x1 -1 -y1 -1 -z1 -1  -lo 0 -hi 0 -pad 10  -verbose  sample.img sample_pp.img
which is of course how you would run this step from the command line without the GUI.

Run the denoising step. Once it is done (after about a minute), open the output image in VolumeViewer. A good way of comparing the result with the unfiltered image is to open it as an overlay (Overlay->Open). Change the overlay colour map to gray and set the OMax threshold to 100 (same as the Max value). Now toggle back and forth between the overlay and image ("Hide overlay" button or ALT+T). Notice how the denoising reduced the noise in the white matter, but that fine details (e.g. thin strands of white matter) are preserved.

Median filtering/Anisotropic diffusion filtering

ImagePreprocess.tcl also supports these types of filters, but generally they will degrade the image quality more than a Wiener filter and are difficult to adjust properly. Click Help for more information. You can also, if you’re adventurous, filter the image directly with the program imagefilter, which does the actual work behind the scenes.

4.4 Automated surface generation

The entire surface generation process – intensity correction, skull stripping, filling white matter cavities, segmenting the cerebral hemispheres and the cerebellum – can be run in direct succession without user intervention. SurfRelax provides an easy way to generate a script file that contains the sequence of commands that perform all these steps. However, it is instructive to initially run each subprocess separately, inspecting the intermediate results along the way; this helps you understand what each subprocess does and makes it easier to trace sources of error.

To generate a script file for automated surface generation, select the Extract tab in the main SurfRelax window:

Enter the input file name. The output file name need not be specified; if it is, it should NOT include a file extension. Intermediate files will be named automatically as follows (this is an excerpt from the output of clicking the Help button):

Naming conventions
In the following, INPUT means the input file base name (input MR image file
name without the .img/.img.gz/.img.Z/.vff/.vff.gz/.vff.Z suffix).

Using FSL for skull stripping/intensity normalization (default)
(faster, automatic, but sometimes less accurate):
Intensity normalized &
skull stripped image:         INPUT_strip.img
Since FSL bet does a less accurate job of stripping the skull than the TFI
skull stripping script (SkullStrip), the default setting is to run SkullStrip on
the output of the FSLStripNormalize script. This rarely takes longer than a few
minutes, but can be turned off with the -nostrip option.

Using TFI for skull stripping/intensity normalization (DEFUNCT)
(slower, but may be more accurate; usually requires manual setting of thresholds):
Intensity normalized image:    INPUT_intnorm.img
Skull stripped & intensity
normalized image:              INPUT_strip.img

Filled white matter image:     INPUT_fillmask.img
Hemisphere surfaces: ,,
Resampled input image:         INPUT_<subsample resolution>mm.img
e.g. INPUT_1.2mm.img
Output volumes*:               INPUT_left.img, INPUT_right.img
Output surfaces*:    ,

Click the "Save command..." button. Enter a file name for the command script to be saved, e.g. "sample_gensurf.csh". Now look at the generated script:

$> cat sample_gensurf.csh
FSLStripNormalize.tcl -save -verbose sample_pp.img sample_pp_intcorr.img
SkullStrip.tcl -wlow 82.5 -whigh 120.0 -blow 6.5 -wmask 150.0 -wval 40.43 -save -verbose sample_pp_intcorr.img sample_pp_strip.img
FillWhiteMatter.tcl -csf 26.0 -lo 82.5 -hi 150.0   -save -verbose  sample_pp_strip.img sample_pp_fillmask.img
SegmentHemispheres.tcl -clo 20.8 -wlo 79.0 -glo 40.43 -gint 45.5  -save -verbose  -mask sample_pp_fillmask.img sample_pp_strip.img sample_pp
GenerateSurfaces.tcl -lo 86.0 -hi 150.0 -verbose  -mask sample_pp_fillmask.img -hemi sample_pp_strip.img sample_topo_left.img
GenerateSurfaces.tcl -lo 86.0 -hi 150.0 -verbose  -mask sample_pp_fillmask.img -hemi sample_pp_strip.img sample_topo_right.img

Note the naming of the intermediate files. You can run the script by typing

$> csh -x sample_gensurf.csh
which will echo each command as it is run. Should any step fail, it is easy to debug and re-run it by modifying the script in a text editor of your choice. If you do not feel comfortable running the program from the command line, you are advised to run each step separately using the GUI. This is described in the following sections.

You can also create an automated surface generation script from the command line; type

$> SurfAndRelax.tcl
for help.

4.5 Intensity normalization (inhomogeneity correction) and skull stripping

Inhomogeneity correction can be done in one of two ways: either with the help of FSL fast, which uses FSL bet to first strip the skull coarsely, or it can be done using a more heuristic TFI/SurfRelax function, which can sometimes give better results but has the disadvantage that it requires the user to manually set thresholds.

In the main SurfRelax window, select the Extract tab, then select "Interactive surface generation". The first tab, labeled "Normalize intensity & strip skull (FSL)", runs the FSL intensity correction process. (The intensity normalization is actually done by a combination of FSL and 'native' SurfRelax functions; the script is not just an interface to the FSL programs.) The corresponding command line script is called FSLStripNormalize.tcl.

Note that despite the name, the main purpose of FSLStripNormalize.tcl is not to remove non-brain tissues, but to correct intensity variations in the image, even though a by-product of the intensity correction is an image with the brain coarsely segmented from the skull. However, FSL bet often fails to remove all non-brain tissue or removes too much, and generally a better brain extraction is achieved by SurfRelax’s built-in skull stripping tool, SkullStrip.tcl, which will be described in more detail below.

Enter the input file and the output file name. Normally you don't need to change any of the default inputs, but you can provide additional options for bet and fast if the default values don't give a reasonable result. Now run the script by clicking the "Run script" button. If you run the script from the command line, you would type:

$> FSLStripNormalize.tcl  -wlow 0 -whigh 0 -glow 0 -ghigh 0 -clow 0 -chigh 0   -verbose -save  sample_pp.img sample_pp_intcorr.img
This will take between 10 and 15 minutes (most of which is spent running FSL fast). Once it's done, open the output file sample_pp_intcorr.img in VolumeViewer. It should look like this when displayed with the colour map; for comparison, the uncorrected input image is shown to the right:

Note that the image intensity is uniform (e.g. the white matter has the same colour in the centre and along the edges) in the intensity corrected image, whereas in the original the intensity drops off toward the image edge. The intensity corrected image has also been rescaled so that white matter has an average value around 100. If you open the image as an overlay on top of the original image you can verify that the intensity variations across the image have been reduced (this is most easily visualized with a suitable colour map, or by setting the image contrast so that both images look similar and toggling back and forth between the images).

If you list the contents of the working directory, you will notice it contains two additional new image files, a text file, and a new directory:

sample_pp_intcorr_fslstrip.img: This is the intensity corrected input, coarsely stripped by FSL bet. It will often have substantial chunks of non-brain tissue remaining, or have parts of the gray matter missing (check with VolumeViewer). It is used only for estimating the mean intensities of different tissue classes (for rescaling).

sample_pp_intcorr_fslstrip_seg.img: This is the segmentation image produced by FSL fast, in which a value of 0 corresponds to non-brain, 1 to CSF, 2 to gray matter, and 3 to white matter. It is wise to inspect this image in VolumeViewer, as most problems in subsequent steps stem from a poor initial segmentation. While it is not necessary for the segmentation to be very accurate, large systematic misclassification of gray or white matter will cause problems. If that is the case, the best option is to run SurfRelax’s built-in intensity correction algorithm, which requires you to set thresholds manually, see below.

sample_pp_intcorr_fslstrip_intensities.txt lists the computed average intensities of each tissue class after intensity correction. It can be used to set image contrast in subsequent steps, although it is usually sufficient to use predefined settings for the image type used (e.g. MPRAGE or SPGR).

$> cat sample_pp_intcorr_fslstrip_intensities.txt
WM 100
GM 65.735278736
CSF 24.0272868342
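
Since the intensities file is plain "class value" pairs, it is easy to parse if you want to derive thresholds for later steps programmatically. A sketch (the midpoint threshold rule is purely illustrative, not SurfRelax's):

```python
def read_intensities(text):
    # Parse lines like "WM 100" into a dict of tissue class -> mean intensity
    means = {}
    for line in text.strip().splitlines():
        name, value = line.split()
        means[name] = float(value)
    return means

sample = """WM 100
GM 65.735278736
CSF 24.0272868342"""
means = read_intensities(sample)
assert means["WM"] == 100.0
gw_threshold = (means["WM"] + means["GM"]) / 2   # illustrative midpoint cut
```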
The new directory is a temporary directory where intermediate files are stored while the script runs. This directory is removed once the script finishes, unless the -save flag is used when running (in the GUI, by selecting the "Save intermediate files" button). If the process ran successfully, you can remove this directory to save disk space:

$> rm -r sample_pp_FSLStripNormalize_TemporaryFiles

Intensity correction using manual thresholds (if FSLStripNormalize fails)
Select the "Normalize intensity (TFI)" tab. Enter the input image as above (sample_pp.img). Deselect "Automatic determination of thresholds" (which does not work well for large intensity inhomogeneities). Now find the lower and upper thresholds for white matter using VolumeViewer. Open the input image sample_pp.img in VolumeViewer. Select Overlay->"New overlay from image". Use a solid colour for the overlay. Select Overlay->"Threshold upwards". Now adjust the OMin and OMax values so that all white matter is within this range and as little as possible of other tissues. This is a matter of trial and error, but does not take more than a few minutes after some practice. Ideally, the thresholds should isolate a single connected white matter component that does not contain scalp tissues. You can check this as follows. First, select Goodies->Connected component connectivity->6-connectivity. Click the middle button somewhere on the thresholded white matter. Select Overlay->"Label connected component". The white matter should now be labeled red; scroll back and forth to see that it doesn't include too much of the scalp. Note that the connected component doesn't change immediately if you change the thresholds; you must first select Overlay->Clear label and then redo the labeling to see the effect of a threshold change. It is difficult to isolate the white matter without including some other tissues, especially the eye musculature, but this does not usually cause problems. Once you have found a good threshold, enter the OMin and OMax thresholds in the SurfRelax window. Choose an output name (here sample_pp_intcorr2.img to avoid overwriting the output of FSLStripNormalize).

Run the script. This will take 5-10 minutes. Inspect the result (sample_pp_intcorr2.img). Note that the image is intensity corrected, but not skull stripped; nor does the script output a segmentation image or intensity text file. Nonetheless, if the input thresholds are properly chosen (the exact values are not critical), the script is usually successful at removing intensity variations in the image and rescaling global intensity to the correct range for subsequent processing.

The command line version of this step is

$> IntensityNormalize.tcl  -lo 45 -hi 90   -verbose -save  sample_pp.img sample_pp_intcorr2.img

Final skull stripping

Now you are ready to remove remaining non-brain tissues. Select the "Strip skull" tab and enter sample_pp_intcorr.img as the input file name. Enter sample_pp_strip.img as the output name. Don't change any default values and make sure the "Automatic threshold" button is not selected.

Run the script; it should take less than 5 minutes. Open the result in VolumeViewer; it should look like this (again using a pink colour scale and overlaid on the unstripped image):

Note that all non-brain tissues have been removed, leaving only the brain with the gray matter intact. If too much gray matter has been removed, you can try adjusting (lowering) the wrap value. Serious failures (large pieces of brain missing) normally stem from poor initial segmentation of white matter (which may result if the intensity correction step failed); try adjusting the white matter thresholds (usually by lowering the low threshold).

The command line version of this step is

$> SkullStrip.tcl   -wlo 82.5 -whi 120.0 -blow 6.5 -wmask 150.0 -wval 40.43 -min 0   -verbose -save  sample_pp_intcorr.img sample_pp_strip.img

4.6 White matter filling

This step fills cavities in the white matter, including subcortical nuclei such as the basal ganglia and the thalamus, and generates a mask image where nonzero voxels represent filled regions. This mask image is used by the hemisphere segmentation and surface extraction scripts. By treating these non-white matter compartments as white matter, the final volume/surface becomes closed and there is no need to manually remove or edit handle-like structures such as the fornix. This script works by aligning and then morphing templates of the basal ganglia and the ventricles to the brain, based on a template brain image. The alignment is done with a slightly modified version of AIR 3.0, using non-linear warping. The warping parameters are saved as a *.warp file (by default named <input basename>_TFItemplate2mm.warp, e.g. sample_pp_strip_TFItemplate2mm.warp). The same warp file is also used by the hemisphere segmentation script.

Don’t change default thresholds. Run the script (it should take 5-10 min to run). Open the image as an overlay over the input image; use a solid overlay colour, and set OMin to 0.5 and OMax to 1.5. Note that the filled mask image extends into the lateral ventricles; depending on the image contrast it may or may not include the basal ganglia and/or thalamus (since these structures have intensities intermediate between cortical gray matter and white matter, and thus may be within the threshold limits of white matter). Also note that the mask includes single voxels scattered within the white matter, corresponding to noisy voxels (or microlesions!).

If the mask image is empty or the process otherwise fails, try changing the thresholds for white matter (usually by making them more conservative). You can also look at the files in the temporary directory, whose names give some indication of their purpose.

The command line version of the script is

$> FillWhiteMatter.tcl  -csf 26.0 -lo 82.5 -hi 150.0  -verbose -save  sample_pp_strip.img sample_pp_fillmask.img

4.7 Hemisphere segmentation

The hemisphere segmentation script, SegmentHemispheres.tcl, automatically partitions the skull-stripped brain into the left and right hemispheres and the cerebellum+brainstem. The output of this script is three surface meshes in OFF format that can be viewed in SurfaceViewer or overlaid on a volume in VolumeViewer. Like the FillWhiteMatter script, SegmentHemispheres uses a template of the hemispheres and the cerebellum which is aligned and morphed to the input image. The alignment is identical to the one used by FillWhiteMatter, and uses the warp file output by that script.

Select the Segment hemispheres tab. As input, enter the name of the skull stripped image. As output, enter the BASENAME of the output surfaces – i.e. do not use an extension (such as .off or .img). The output files will be named <basename>, <basename>, and <basename>.

For the entry labeled "Filled WM mask image", enter the name of the filled mask image output by FillWhiteMatter.tcl. For the warp file, enter the name of the warp file produced by FillWhiteMatter. (You can use the Browse button to avoid having to type it in manually.)

You should not need to change any other defaults. Run the script. Here's the result of the script, superimposed on the skull stripped image and colour coded so that the left hemisphere is red, the right hemisphere green, and the cerebellum+brainstem yellow; check that your result looks the same. To the right is a SurfaceViewer 3D rendered version of the three surfaces.

Occasionally one hemisphere encroaches upon the other slightly. Normally, a minor displacement (less than 1mm) is not problematic, but larger displacements can impair the surface extraction and optimization. (The main exception is the callosal white matter, where the precise location of the hemisphere boundaries is of minor importance.) Fortunately, VolumeViewer provides a simple way of editing surfaces by manual elastic deformation. Consult the SurfaceViewer chapter for details. Just remember to save changes frequently. It is recommended that you do this editing before proceeding to the next step.

The command line version of the script is

$> SegmentHemispheres.tcl -warp sample_pp_strip_TFItemplate2mm.warp -mask sample_pp_fillmask.img -clo 20.8 -cll 6.5 -wlo 79.0 -glo 40.43 -gint 45.5  -verbose -save sample_pp_strip.img sample_pp

4.8 Surface extraction

You are now finally ready to extract an initial, topologically correct, estimate of the cortical surface (to be precise, the gray/white boundary). Note that although GenerateSurfaces.tcl does produce a surface, the primary output of the script is a volume corresponding to the (filled) white matter of one hemisphere – from which a surface is subsequently extracted. The volume is closed and contains no handles, guaranteeing that the surface generated from it has spherical topology. The output volume can be edited by hand (if necessary or desired), and a new surface extracted from the edited volume. The advantage of editing a volume rather than a surface is that volumes are much easier to edit, and arbitrary sections can be removed without changing the density of vertices in the final surface (an advantage for most subsequent processing). (There are also other reasons, having to do with the way GenerateSurfaces generates a topologically correct volume rather than a surface; see References for details.)

IMPORTANT: GenerateSurfaces.tcl automatically resamples the input image to isotropic voxels (default is 1mm, suitable for adult brains). This removes any directional bias in the surface generation process and ensures that vertices are isotropically distributed in the output surface. The script outputs a resampled input image, in addition to the extracted white matter volume. This resampled volume (named <input basename>_1mm.img), not the original input, should be used as reference image when editing – see below.

Select the Generate surface tab. Enter the name of the skull stripped image as input. For output, enter the name of the volume corresponding to the cortical surface – i.e. the filename should have an .img extension. Enter the name of the hemisphere boundary (the output of SegmentHemispheres). Note that you must run GenerateSurfaces twice, once for each hemisphere (if run interactively; the automatic surface generation script extracts surfaces for both hemispheres). Enter the name of the filled white matter mask image. (Beware that "optional" should not be taken too literally here; it just means the script can be run without these options, e.g. if the input image only consists of a single hemisphere.)

Run the script. It will take about 10 minutes. Now inspect the result.
Open the skull-stripped input file with VolumeViewer and superimpose the surface (Surface->Open). Click the « Connect points » button. As you scroll through the image, you should see the surface roughly follow the gray/white boundary. Now open the corresponding volume as an overlay (Overlay->Open, select sample_topo_left.img). Set OMin to 0.5 and OMax to 1.5 (the image is binary, with 0 for empty space and 1 for white matter). Select a solid overlay colour different from the surface colour (e.g. blue overlay with yellow surface). Note the close correspondence between the surface and volume: the surface largely aligns with individual voxel edges, giving it a jagged appearance. This jaggedness is particularly noticeable in 3D view (below).

If the process was successful, the surface should be a relatively close approximation (apart from the jagged appearance) to the true gray/white boundary. It does not need to be exact, as the surface optimization step will improve the fit (and remove the jaggedness). Minor errors can be edited manually (see below); a typical, easily fixed error is the surface/volume not penetrating all the way into a sulcus. This sometimes takes the appearance of a « bridge » across the sulcus, but is in fact always a « ridge » of white matter extending from the depths of the sulcus, as scrolling back and forth will verify. (Remember that the output volume and surface are guaranteed to be topologically equivalent to a sphere – a single connected volume with Euler characteristic 2 – as a result of the algorithm used.)

Major classification errors, such as systematic under- or overestimation of the white matter, or otherwise very poor quality results, are practically always caused by incorrect white matter thresholds (apart from trivial but common causes, such as a full disk or incorrectly set write permissions). Also verify that the errors are not due to earlier processes failing (which may not be apparent if using the automatic surface script). If previous processes – in particular the intensity correction step – were successful, the predefined thresholds almost always work. However, it is easy to adjust the thresholds manually if the output surface is of poor quality. To get a feel for how conservative the thresholds should be, it is instructive to open the sample skull-stripped image in VolumeViewer, select Overlay->New overlay from image, select a solid overlay colour, set the lower and upper thresholds to 86 and 150 (the predefined thresholds for this data set), and scroll through the image to see how much of the white matter is within the range. Compare this with your own image, and find the thresholds that select the same amount of white matter; re-running the script with these thresholds should work.
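The « same amount of white matter » comparison can be made quantitative by computing the fraction of brain voxels that fall inside the threshold range. A hypothetical helper (not part of SurfRelax; an illustration of the idea):

```python
import numpy as np

def fraction_in_range(vol, lo, hi):
    """Fraction of non-zero (brain) voxels of a skull-stripped image that
    fall inside [lo, hi].  Comparing this number between the sample data
    (thresholds 86-150) and your own image is a quick way to find
    equivalent thresholds for a differently scaled image."""
    brain = vol[vol > 0]
    return float(np.mean((brain >= lo) & (brain <= hi)))

# Toy example: four brain voxels, two of which fall in the range 86-150.
vol = np.array([0, 50, 90, 100, 200])
print(fraction_in_range(vol, 86, 150))  # prints 0.5
```

In practice you would sweep `lo` and `hi` on your own image until the fraction matches that of the sample data set.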

The left and right hemisphere surfaces may overlap within the callosal white matter; this is normal and is not a matter of great concern (since the medial surface isn’t a real structure, but only serves to close the surface). Excessive medial white matter can be quickly and easily edited away by hand, if desired, but this is really only necessary for cosmetic reasons. More importantly, notice the thin strand of white matter protruding from the orbitofrontal cortex in the surface view to the right – this is an image artefact that will be manually removed in the next step to make the inflated surface look nicer.

And finally, this is how the script would be run from the command line:

$> GenerateSurfaces.tcl -hemi -mask sample_pp_fillmask.img -lo 86.0 -hi 150.0  -verbose  sample_pp_strip.img sample_topo_left.img
$> GenerateSurfaces.tcl -hemi -mask sample_pp_fillmask.img -lo 86.0 -hi 150.0  -verbose  sample_pp_strip.img sample_topo_right.img

4.9 Surface editing

Although the output of the surface generation step is guaranteed to be topologically correct (i.e. topologically equivalent to a sphere), manual editing of the result can sometimes be necessary to capture structures obscured by partial voluming or image noise. Manual editing can also improve the quality of the final surfaces, even when it is not strictly necessary. Finally, it can be helpful to remove any remaining non-cortical structures such as the optic chiasm or image artefacts (e.g. due to flow). Overall, practical experience suggests that for images that have good contrast but are slightly noisy – such as the sample image – manual editing is strongly advisable only for a small fraction of brains, helpful in about 50-75% of all brains, and unnecessary in the remaining 25-50%. The exact proportions will depend on the quality of the input and the individual anatomy of the brains examined. Brains with highly convoluted cortex and thin white matter are likely to need more editing. Even so, editing is not very time-consuming, rarely exceeding 10 min per brain after some practice.

It should be emphasized that manual editing is never required to remove handles, only to improve the anatomical accuracy of the result. This is a crucial difference compared to some other surface extraction software packages. Since it is much harder to recognize and remove a handle (even with the aid of handle-finding algorithms) than it is for a user familiar with neuroanatomy to recognize and remove an anatomical error, manual editing with SurfRelax is much less time-consuming than finding and removing handles. Furthermore, to avoid introducing topological errors when editing, VolumeViewer has a topology-preserving editing mode that should always be used when editing extracted volumes. This mode prevents the user from changing the topology through editing, greatly simplifying the editing process.

First, you will learn how to remove the strand-like flow artefact in the orbitofrontal region visible in the 3D view above. The principle is the same for other structures.

Open the resampled version of the input to GenerateSurfaces.tcl (sample_pp_strip_1mm.img). Open the extracted white matter surface and the corresponding volume (sample_topo_left.img) as overlay, as described above, setting the OMin and OMax thresholds to 0.5 and 1.5 respectively. In sagittal view, scroll to slice 90. Notice the isolated voxels at WC [90,154,119]. Scroll back slowly to slice 70, paying attention to how these voxels extend into a thin protruding structure underneath the orbitofrontal white matter. This is an image artefact with the same intensity as white matter, and should be removed to make the output smooth (leaving it in place won’t interfere with the surface optimization process, but may be a problem for surface inflation).

Click the « Euler chk » button. This button prevents any editing from changing the topology of the edited image, and must always be checked when editing. Beware that the topology check is relative to the edit value; set this value to 1 so that added voxels have the same intensity as the original. Now click the « Edit overlay » button. First, check that the topology check works. Deselect the Euler chk button. With the left mouse button, draw a figure in the periphery of the image (far outside the brain) – make sure it does not touch the brain or white matter. You will see the outline of the shape in green. Close it by clicking the right mouse button. The enclosed region will be filled with the overlay colour. Clearly, the topology of the overlay has changed; there are now two connected components in the image. Delete the shape by drawing around it with the right mouse button, closing the drawn outline with the left button (the outline will be drawn in red). Now click the Euler chk button again and try to draw the same shape with the left button. When you click the right button to close it, nothing will be drawn: adding the region would change the topology of the image and is therefore not permitted. Beware that if any part of the drawn region touches the white matter, the added shape will not change topology and so will be retained (and there is no undo option). Also, the topology check is done voxel by voxel, from left to right and top to bottom. This can cause regions drawn by outline to look different from the outline after the topology check, even when the outline itself would not have changed topology, which can be confusing. For this reason it is better to use the « Rect edit » option, where voxels are added in blocks, using a small block size (2 or 3).
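The behaviour of the « Euler chk » test can be illustrated with the classical simple-point criterion from digital topology, here in a simplified 2-D form: a pixel may be flipped without changing image topology only if its 3x3 neighbourhood contains exactly one 8-connected foreground component and exactly one 4-connected background component. This is a sketch of the idea, not the actual VolumeViewer code (which works in 3-D):

```python
def _components(cells, conn):
    """Count connected components among `cells` (a set of (row, col)
    positions) under the given connectivity offsets, via flood fill."""
    seen, count = set(), 0
    for start in cells:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            r, c = stack.pop()
            if (r, c) in seen:
                continue
            seen.add((r, c))
            for dr, dc in conn:
                nb = (r + dr, c + dc)
                if nb in cells and nb not in seen:
                    stack.append(nb)
    return count

N8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def is_simple_point_2d(nbhd):
    """`nbhd` is the 3x3 neighbourhood of a pixel (3 rows of 0/1, centre
    ignored).  The pixel may be flipped without changing topology iff the
    neighbours form one 8-connected foreground component and one
    4-connected background component."""
    fg = {(r, c) for r in range(3) for c in range(3)
          if (r, c) != (1, 1) and nbhd[r][c]}
    bg = {(r, c) for r in range(3) for c in range(3)
          if (r, c) != (1, 1) and not nbhd[r][c]}
    return _components(fg, N8) == 1 and _components(bg, N4) == 1

# The end of a strand may be removed; a bridge between two strands may not.
print(is_simple_point_2d(((0, 0, 0), (0, 0, 1), (0, 0, 0))))  # prints True
print(is_simple_point_2d(((0, 0, 0), (1, 0, 1), (0, 0, 0))))  # prints False
```

The "voxel that can't be removed" case described below corresponds to a pixel whose neighbourhood contains no background at all: removing it would open a hole, so the check rejects the edit.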

Scroll back to slice 90. Continue scrolling to the right (increasing slice values) and notice that the isolated voxels disappear on slice 96. This means the structure boundary is at slice 95. Because of the topology restriction, editing must always proceed from the edges inward (when removing voxels) or outward (when adding voxels). Go to slice 95. Click « Rect edit », set the size to 4, and click with the right mouse button over the voxels to be removed. You may have to hold the mouse button down and move it over the voxels a bit. They should disappear. Continue to slice 94, and repeat the process. It may be helpful to hide the surface overlay when editing; you can toggle back and forth with Shift+H or by clicking the « Hide surface » button. Repeat the process all the way to slice 79. You’ll probably notice that one voxel at WC [79,173,118] can’t be removed. If you scroll to the next slice (78), you will notice that the voxel in the corresponding location is empty, but surrounded by filled voxels; hence removing the voxel on slice 79 would have introduced a hole in the volume, thereby changing the topology, and this is not permitted. This illustrates the advantage of using the Euler check option to prevent topology changes, and also shows the difficulty of visually detecting handles. Fortunately, it is easy to get around this restriction. Simply scroll to the next slice and remove the voxels surrounding the empty voxel location. Then move back to slice 79, and you will be able to remove the single remaining voxel. A similar problem will arise around slice 75; just move one slice down and edit as much as you can there, then go back and remove the remaining voxels. This is the general principle for editing: always proceed from the edge of a structure inward (when removing voxels) or outward (when adding voxels), and if a structure resists editing, change the direction of editing.
It is highly recommended to change slice view frequently to get a better feel for the edited structures, and avoid editing too much. After some practice it is easy to find the edge of a boundary by scrolling back and forth and changing views. The surface overlay also helps.

Save the edited overlay image (Overlay->Save), but rename it sample_topo_left_edit.img so you can go back to the original if you edit too much or make a mistake. Keep saving the image after every major edit.

Now move into coronal view. Around slice 130 and forward, you will notice that the segmented white matter volume extends from the medial and basal surface down into the optic chiasm. This should be removed to ensure good inflation. Edit this in exactly the same way as before; proceed from the edge inward. Moving back, you will also notice that some voxels extend into the brainstem; these should be removed for the same reasons. Starting from the anterior edge of this voxel cluster, around slice 122, edit away these voxels from the front back. You will again encounter voxels that can’t be removed without going back and forth to remove voxels in adjacent slices.

Moving back to coronal slice 101, you will notice that the medial segmented white matter connects with the entorhinal white matter in what appears to be a bridge. This is actually a ridge of white matter left over by the surface generation process. To verify this, move forward one slice and notice that the connection disappears; move back slice by slice and you will see the connection merging with the surrounding white matter as the medial and entorhinal white matter merge in the anatomy. Removing this ridge is necessary if you’re interested in having an accurate surface in this region, and will likely improve inflation. It’s easily removed: starting at slice 101, remove the excess voxels, then move back one slice at a time, removing the bridging voxels. Be careful not to remove too much. You can also, if you like, fill in the small cavity extending into the posterior lateral ventricle, but this is not really necessary. (In other brains, the cavity might extend further backward, in which case filling it is recommended.) Filling is easy; start from the back and progress forward, using the same strategy as when removing artefacts, except that you’re now adding, not removing, voxels.

Finally, remove some of the excess callosal white matter. Change to sagittal view, slice 96, and change edit mode to normal (unclick Rect edit). Draw an outline with the right mouse button around the callosal voxels, and proceed inward in the same fashion. Note that while removing voxels with the outline is quick, it sometimes fails to remove all voxels that can be removed without changing topology (this is a consequence of the order in which voxels are removed). It is usually faster to use the Rect edit mode to remove single remaining voxels.

Note that the main reason for the above editing is to ensure proper inflation (the inflation process will sometimes fail when surfaces contain long, narrow structures such as the orbitofrontal flow artefact). If you don’t intend to do inflation or don’t care about the medial surface, most of the editing is not required. Beware though, that occasionally editing might be called for on the lateral surface of the brain, although this is rare.

Next, you can experiment by shifting (displacing) a segment of the volume. Move to slice 56 in sagittal view. Look at the thin strand of white matter below the lateral ventricle in the temporal lobe. Note that the extracted surface and associated volume are displaced downward; this is because the intensity of most of the white matter voxels in this region is lower than the global threshold due to partial voluming. When the volume is extracted, topology constraints ensure that the thin white matter strand (which is actually a thin sheet connecting the centre of the temporal white matter with the entorhinal cortical white matter, as is apparent in coronal view) is retained, but since there is no suprathreshold white matter to locate the extracted surface, it may be displaced slightly from the true location. Normally, such a small displacement is not a problem and will be corrected by the optimization process. Sometimes, however, the displacement is sufficiently large that it may be a good idea to first manually adjust the surface/volume. There are two ways of doing the adjustment. A simple displacement can be done directly on the surface (see the SurfaceViewer chapter for instructions). More complex displacements require voxelwise editing. Since you cannot directly move voxels, the displacement is done by adding and removing voxels. The principle is the same as described above: add/remove voxels at the boundary. Since the structure is thin and narrow, removing the voxels first won’t work, as this will break the sheet and introduce topology changes. Instead, begin by adding voxels at the top along the upper boundary of the sheet (guided by the underlying anatomy). Be sure to use a small rect edit size (2) to avoid inadvertently adding too much. Then remove the voxels along the lower boundary (toggling back and forth with the Hide overlay or Alt+H key is helpful). Continue in this fashion until the sheet is in the right place.
This way you can incrementally correct relatively large displacements. For instance, a common error on brains with very thin temporal white matter is that the sheet appears to be interrupted in the middle or cuts across the lateral ventricle. (This is not a true interruption; it just means that the sheet of temporal white matter rises to meet the central temporal white matter too anteriorly.) To correct this error, you would need to shift the sheet backward. Again, the way to do this is to begin by adding voxels where they are missing and subsequently removing excess voxels. Very occasionally, this requires first filling much of the lateral ventricle.

Finally, move to slice 59 in sagittal view. Scrolling medially, note that gaps appear in the thin white matter below the anterior end of the temporal horn of the lateral ventricle. This is a result of partial voluming of thin white matter. Filling the gaps will greatly improve anatomical accuracy, but can be ignored if you don’t care about this part of the cortex. It is not critical that the added voxels align perfectly with the white matter boundary, as long as they roughly cover the white matter.

It is a good idea at this point to always scroll through the image carefully in all orientations, paying attention to regions containing fine structures that are likely to suffer from partial voluming artefacts – especially, as mentioned, in the temporal lobe, but also sometimes in the occipital regions. Occasionally you will encounter errors that are hard to spot simply because the image contrast is too poor. A good understanding of cortical anatomy is crucial. It can be helpful to view the surface in 3D (with SurfaceViewer) to check that the surface looks ‘sane’, but beware that many artefacts are not apparent in a 3D view.

Before proceeding, you need to re-create a surface from the edited volume (this is the input to the surface optimization process). This is easy. First save the edited overlay image. Then make sure the OMin and OMax thresholds are set to 0.5 and 1.5 respectively. Select Overlay->Extract surface from overlay. You will be prompted for a smoothing parameter; don’t change the default value (0). After a short while, a new surface will appear (in yellow). It will have the same name as the overlay image, but with the .off extension, and it corresponds to the last (lowest) entry in the surface menu. Make sure to save this surface (Surface->[surface name]->Save) with the same name as the edited image. You can now compare your edited surface and volume with the ones provided with the sample data set. They are unlikely to be identical, but should be similar to your results. (A quick way of comparing the two is to use one as display image and the other as overlay image.) Below is the surface extracted from the edited volume (in yellow) with the original surface in red (only the edited portion is visible).

Repeat the editing process for the right hemisphere. Once you have finished editing, you’re ready to proceed to the optimization step.

4.10 Surface optimization

The surface optimization process creates estimates of the inner and outer cortical surfaces, as well as a measure of local cortical thickness (the distance between the surfaces along the normal). Surface optimization is a mostly automated process; you only need to specify the input and output files. Open the optimization window in SurfRelax by clicking the Optimize tab. Enter the name of the (edited, if necessary) surface created in the previous step as input. As output, enter the basename of the output files; the script appends a suffix to this basename for each output surface. Enter the name of the anatomy image; normally this would be the skull-stripped, intensity-corrected image, but you can also use the intensity-corrected, non-skull-stripped image. Using the intact image means there is a greater risk that the gray matter surface will encroach upon the scalp (although this is rare), but it can be an option if one is concerned that the skull stripping process might have cut away too much of the gray matter.

Run the script. If you use the default settings and method, it should take about 10 min per hemisphere (be warned that the older method option can take several hours to run). The script will have created three new surfaces, corresponding to the white/gray, gray/pial, and midcortical surfaces. To see how good the fit is, open the skull-stripped anatomy image in VolumeViewer and overlay the gray and white matter surfaces. Click the « Connect points » button, and choose a different colour for the gray matter surface to make the two surfaces easily distinguishable. The result should look like this:

Notice how the optimization has removed the jagged appearance of the white matter surface and how the gray matter surface follows the pial surface closely. Inspect the result and look for any locations where the surface fit is poor. Small errors are most easily fixed by manual editing (deformation) of the surface; systematic errors may require adjusting the optimization parameters (see below). It is also instructive to open the surfaces in SurfaceViewer; in particular, inspecting the gray matter surface can give a feel for how anatomically correct the result is, if you have experience looking at real brains. It should be stressed, however, that « looking good » (often equivalent to looking smooth) is not a reliable criterion for assessing the quality of an extracted surface; only overlaying the surface on the anatomy can reveal how well the surface captures the detailed structures of the cortex. Below are screenshots of the white and gray matter surfaces of the left hemisphere, as seen in SurfaceViewer (curvature has been overlaid in pink, smoothed 5 times, and clipped to the range [-1, +1]).

This is the command line version of the script:

$> OptimizeSurface.tcl  -anat sample_pp_strip.img -white 82.5 -gray 33.8   -verbose -save sample_topo_left

Hints & tips

The surface optimization step is in a sense the most critical step of the surface extraction process, as it determines the quality of the final output surfaces. Generally, the default settings should work well if the input is high quality and previous processing steps ran successfully. It may still on occasion be necessary to adjust the settings manually to attain a better quality result. Options to consider are, primarily:

– the filter width for smoothing the image. Default is an isotropic Gaussian filter kernel with 1.5 mm full width at half-maximum (FWHM); larger values mean a smoother image. Can be useful to make the outputs smoother and less sensitive to noise, at the expense of detail. Smaller or zero FWHM will enhance detail at the expense of smoothness.
– gray, white, and CSF thresholds. Adjusting these can vary the thickness and goodness of fit of the extracted surfaces. Beware that there isn’t a simple one-to-one correspondence between thresholds and the location of the extracted surface, as smoothness constraints influence the surface fit.
– surface smoothness. Primarily determined by the number of normal vector averaging iterations, as well as the cortical thickness smoothness (which determines how rapidly the thickness can change from one vertex to another).
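Regarding the first of these options: the relation between a Gaussian kernel's FWHM and its standard deviation is fixed, FWHM = 2·sqrt(2·ln 2)·sigma ≈ 2.355·sigma, so the default 1.5 mm FWHM corresponds to a sigma of about 0.64 mm. A quick check:

```python
import math

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian kernel's full width at half-maximum (mm) to its
    standard deviation: FWHM = 2*sqrt(2*ln 2) * sigma."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

print(round(fwhm_to_sigma(1.5), 3))  # prints 0.637
```

This conversion is handy when comparing the SurfRelax FWHM setting with tools that specify Gaussian smoothing by sigma instead.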

Some of these parameters must be given as « Advanced options » through the GUI, or specified on the command line; see the help for OptimizeSurface.tcl for details.

Manual surface editing

Minor improvements to the fitted surface can be made manually, using VolumeViewer and the surface editing function. You can drag a vertex and its neighbours to the desired location, and the amount and range that neighbouring vertices move can be set to adjust the smoothness (essentially the tension) of the surface. Beware that the undo function is buggy; save your edited surface frequently (preferably with a different name from the output of OptimizeSurface.tcl).

Finally, note that the script also outputs a vector of cortical thickness estimates in VFF format, named <output basename>_cortexThickness.vff. This can be used for visualization or other purposes.

4.11 Surface inflation

Surface inflation is useful for visualization and for guiding cuts across the surface, although you can also do cutting on the uninflated surface. You can inflate either the gray or white matter surface; in the following, the white matter surface is used. Select the Inflate tab in the main GUI. Enter the input surface (note that you need to run the inflation separately for each hemisphere) and the output surface. No other options are necessary.

Run the script; it takes 5-10 min. Open the inflated surface in SurfaceViewer. To visualize the curvature of the original surface, open the uninflated surface as the overlay data surface. Select curvature as Overlay 1, and select a colour map. Click draw. It will take a short while to calculate the curvature. Initially, it will look uniform; this is because extreme values cause the scale to be off. From the Filter tab in the Overlay 1 box, select Smooth. The default number of iterations (2) works well. Set Min to -1, Max to 1, select the < and > buttons, and click draw. The curvature of the original surface is now drawn on the inflated surface; light indicates high curvature (gyri) and dark low curvature (sulci).

4.12 Surface flattening

Cutting surface patches
To flatten a surface, you must first convert the closed extracted surface with spherical topology into an open surface with planar topology – either by cutting out a surface patch, or by introducing cuts into the medial aspect of the surface. This is done manually in SurfaceViewer; there is also an option to automatically cut a surface at a given distance from a vertex (along the isodistance contour). [This option is as yet only available from the command line, using the program surfcut.]

To place a cut, display the inflated surface with the curvature of the uninflated surface as overlay. Select Edit->Cut. Follow the instructions for cutting – essentially just clicking the left mouse button on successive points on the surface. To cut out a patch over the occipital pole, for instance, begin by placing a cut encircling the occipital pole, starting in the intraparietal region and moving down across the lateral aspect of the cortex:

Note that the surface can be rotated and viewed normally while cutting. Rotate to the other side of the brain and extend the cut just anterior to the anterior end of the calcarine sulcus. Continue up toward the top and the starting point, and when you’re close enough, close the cut line by clicking while holding the SHIFT key down. Then you may wish to add a relieving cut along the calcarine (this will reduce distortions in the flattened surface): start at the occipital pole, but this time make sure to hold the CONTROL key down when you click the mouse button – this will start a new cut line. Then add line segments to the cut with the mouse button but without pressing any key until you reach the anterior end of the calcarine:

In a zoomed-in view with edges drawn, it looks like this:

After finishing placing the cut line, click Cut. (You’ll have to reset the overlay display range again; a small bug.) WARNING: Due to a bug in the flattening algorithm, flattening can fail if the cut surface edge has single, isolated triangle segments. Try always to place your cuts so that the edge is smooth; this is difficult to verify if cutting the uninflated brain, but easy on the inflated surface (especially if you zoom in and select to draw triangle edges). For instance, notice that the above cut was badly placed: it left a protruding triangle segment. To remove it, select Edit->Cut again and draw a cut line separating the protrusion:

Click cut again. After cutting it will look like this:

Next, select Edit->Select patch. Click on the cut surface segment you want to save. It will be labeled in red and you can save it, e.g. as <basename>


In the SurfRelax.tcl GUI, select Flatten. As input, enter the name of the cut out surface patch. Enter the name of the original folded surface (this is for correcting distortions due to flattening), and the name of the output. You shouldn’t need to change any other settings.

Run the script. Now view the patch in SurfaceViewer, again using the original folded surface as the overlay surface and curvature as overlay. Notice that the default perspective for flat surfaces is from above. You can enhance contrast by clicking the Flat button (meaning no shading will be used). Below is the cutout occipital patch, flattened, overlaid with the curvature of the folded white matter surface smoothed 5 times, and with the 3D isocoordinate contours along x=60 (blue), y=50 (green), z=130 (red).

Running from the command line, you would write:

$> FlattenSurface.tcl  -verbose -save

4.13 Using surfaces for visualization

The inflated or flattened surfaces can be used for visualization of surface- or volume-based data (the optimized surfaces can of course also be used, but the convolutions of the cortical surface will hide most regions inside sulci). The data can be derived from intrinsic properties of the surface (such as curvature or distance from a surface vertex), describe properties of the cortical sheet (such as thickness), be derived from the 3D space the surface is embedded in (such as the 3D coordinates), or be sampled from a 3D image aligned with the folded brain surface (such as a 3D FMRI activation image). Typically, the data is derived from the original, folded surface, but displayed on the inflated or flattened surface. This requires a way of keeping track of corresponding vertices in the different surfaces that derive from a common source. In SurfRelax, this is done as follows:

1) If the displayed surface has the exact same number of vertices as the original, folded surface, it is assumed the vertices in the two surfaces correspond exactly. This applies to inflated surfaces (inflation only changes the coordinates, not the indexing, of the vertices in a surface). There exists a remote possibility that two unrelated surfaces by pure chance have the exact same number of vertices, but this is very unlikely (and would only be a problem if the user deliberately tried to mix up unrelated files).
2) If the displayed surface contains a subset of the original surface – e.g. it has been cut out of the original, or subsampled from this – an index in the header of the surface file preserves the correspondence between vertices in the original and the cutout surface. However, since the indexing is done transparently, users do not need to worry about the exact structure of the header. Note that a surface patch that is cut out of the inflated surface (as above) can be correctly associated with the folded surface, since the number of vertices and the vertex order in the inflated and folded surfaces are the same.
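The correspondence scheme in case 2 amounts to a per-vertex lookup table stored with the patch. A sketch of how such a table is used to carry data from the parent surface onto a patch (the function and variable names are illustrative, not the actual SurfRelax file format):

```python
def transfer_vertex_data(parent_data, parent_index):
    """Map a per-vertex data vector from the original folded surface onto
    a cutout patch.  `parent_index[i]` gives, for patch vertex i, the
    index of the corresponding vertex in the parent surface - the kind of
    lookup table SurfRelax keeps in the patch file header."""
    return [parent_data[j] for j in parent_index]

curvature = [0.1, -0.3, 0.8, 0.0, -0.5]   # one value per parent vertex
patch_index = [4, 2, 0]                    # patch keeps parent vertices 4, 2, 0
print(transfer_vertex_data(curvature, patch_index))  # prints [-0.5, 0.8, 0.1]
```

Case 1 (inflated surfaces) is simply the identity mapping: vertex i on the inflated surface is vertex i on the folded surface.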

While you do not need to understand the implementational details of how vertices are indexed etc, it is important to understand which surfaces can be associated with one another, and which ones cannot. As a rule, always use the original, uninflated, closed surface for retrieving data, and always extract surface patches from the intact (inflated or folded) surface. A patch cut out of another patch cannot easily be associated with the original surface (while technically possible, it is somewhat involved, likely to introduce confusion, and not recommended).

WARNING: Much confusion and many errors can arise from using the wrong surfaces for data analysis and extraction. SurfRelax is deliberately designed to be flexible in that it does not enforce naming conventions (apart from file extensions) or a particular directory structure. The flip side of this flexibility is that users must themselves keep track of files and file names. A consistent naming scheme is very helpful, and if you do not fear long file names (thankfully Unix is not DOS), much information can be contained in the names. (Beware however that the MacOS X file system is letter-case challenged: it preserves case, but can’t distinguish it, so MyFile and myfile, which are clearly distinguishable by most well-behaved Unix systems and even illiterate humans, are considered the same by Apple.) Perhaps the most common error is to use the same surface for display and data extraction – not an issue if the displayed surface is the folded white matter surface, but definitely a problem if it is the inflated brain.

Types of visualized data

Surface-derived data will typically be in the form of a vector with a single value per surface vertex. By default, SurfRelax stores such data in VFF format (see chapter 5). A number of surface-derived data options are built into SurfaceViewer (see chapter 6); others can be calculated from the command line (see section 4.13) and imported as data vectors.

Volume-derived data can be sampled at the 3D location of each surface vertex in a number of different ways. The default and simplest method is to sample (by interpolation or nearest neighbour) the image value at the location of each vertex. It is also possible to sample data along the surface normal (inward or outward). An extended version of this method samples along the normal at a number of different locations and averages the sampled values. Normally it is the white matter surface that is used for sampling.

A different approach, also implemented in SurfRelax, samples data across the cortex, taking into account the varying thickness of the cortical sheet. This method creates a discrete grid of voxels (corresponding to the anatomical MR image from which the surface is derived) and assigns the voxels between the white and gray matter surfaces to the nearest surface vertex. The grid is simply an Analyze image in which each gray matter voxel holds the vertex index of its nearest surface vertex (voxels not in the gray matter are assigned a value of -1). Since the cortex varies in thickness, some vertices may sample from more voxels than others. This method was originally developed for compatibility with Stanford's mrGray/mrLoadRet software (see chapter 7).

The different sampling strategies give more or less the same results; sampling near the white matter may reduce flow artefacts when sampling from functional MRI images. The crucial aspect of all three methods is that the surface must be correctly aligned with the sampled image; even small misalignments can cause large errors in assigning vertex data values, often in non-obvious ways. See chapter 5 for more information on aligning surfaces with volumes.
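The normal-sampling variant described above can be sketched as follows. This is not SurfRelax code, just a conceptual illustration with made-up names: a value is looked up at several offsets along each vertex normal and the results are averaged, one value per vertex.

```python
import numpy as np

# Sketch of sampling a volume along surface normals (assumptions: unit
# normals, isotropic voxels, coordinates already in voxel space).
def sample_along_normal(volume, vertices, normals, offsets_mm, voxel_size=1.0):
    samples = []
    for d in offsets_mm:
        # displace each vertex d mm along its (unit) normal
        pos = vertices + d * normals
        # nearest-neighbour lookup: round to the closest voxel index
        idx = np.round(pos / voxel_size).astype(int)
        samples.append(volume[idx[:, 0], idx[:, 1], idx[:, 2]])
    # average over the sampled offsets, giving one value per vertex
    return np.mean(samples, axis=0)
```

Even in this toy form, the alignment caveat applies: `vertices` must be expressed in the coordinate system of `volume`, or the wrong voxels are sampled.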

Chapter 6 describes in greater detail how to use SurfaceViewer for data visualization.

4.13 Using surfaces for data analysis

SurfRelax is primarily intended as a visualization tool, not for analysis, and so does not come with e.g. software for statistical analysis. It does, however, provide a number of means to extract and process surface-based data, which can then be exported to the statistics package of your choice (such as Matlab). It is also very well suited to visualizing 3D statistical data from other packages, such as SPM, FSL, or mrLoadRet. Below are a couple of possible analysis options that may be useful.

WARNING: Always use VFF as the output format when extracting surface vector data. Analyze images are limited to about 32,000 voxels in any dimension, and since surfaces typically have many more vertices than that, the data cannot be saved in this format. VFF has no such limitation, and can easily be converted to ASCII for exporting.

Extracting volume data sampled along the surface
The program surf2image can be used to sample data from a 3D image. Data are sampled at the vertex coordinates, optionally displaced along the normal, either by trilinear interpolation or by nearest neighbour.

Example: Extract image data from myfunc.img, sampling the volume by nearest neighbour 1 mm outside the surface

$> surf2image -type data -image myfunc.img -expand 1 -noint mydatavector.vff

A variation of this is to restrict sampling to vertices within a surface ROI (see chapter 6), using the -sroi option. Vertices not within the ROI will have a value of zero.

You can also use surffilt to extract volume data, with the -type none option. This is particularly useful for extracting time series data (surf2image doesn’t yet have support for 4D input or output).

The data is by default saved as a 1D VFF vector that can be exported to text format using image2asc:

$> image2asc mydatavector.vff mydatavector.txt

Extracting surface properties
surf2image also has the option to save intrinsic surface properties (such as curvature) or embedded surface properties (such as 3D vertex coordinates) to data vectors.

Example: exporting mean surface curvature as a data vector:

$> surf2image -type meancurv mydatavector.vff

Filtering data along the surface
The program surffilt provides a variety of options for filtering data along a 2D surface. It has similar input data options to surf2image (but a quite different syntax, for historical reasons), and data can be filtered by local averaging, min, median, and max filtering, among other options. surffilt can take 4D input (time series data), in which case the output is 2D – the first dimension is vertex index, the second is time point. This can be useful for e.g. filtering FMRI data along the surface prior to performing pointwise statistical analysis.

Example: filtering 3D image data by 5 iterations of local averaging (sampled 1 mm outside the surface) along the surface and exporting as a data vector.

$> surffilt -type mean -data myfunc.img -expand 1 -noint -iter 5 mydatavector.vff
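Conceptually, iterated local averaging along the surface works like the sketch below (this is an illustration of the idea, not the surffilt implementation). Each vertex value is repeatedly replaced by the mean of itself and its edge-connected neighbours, which smooths data along the surface rather than through the volume.

```python
import numpy as np

# Illustrative sketch of filtering data along a surface by iterated
# local averaging. 'neighbors' lists, for each vertex, the indices of
# the vertices it shares an edge with (the surface mesh connectivity).
def mean_filter(data, neighbors, iterations):
    data = np.asarray(data, dtype=float)
    for _ in range(iterations):
        # each vertex becomes the mean of itself and its neighbours
        data = np.array([
            (data[i] + data[nbrs].sum()) / (1 + len(nbrs))
            for i, nbrs in enumerate(neighbors)
        ])
    return data
```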

Extracting summary surface region statistics

Use the program surfclust to extract summary statistics about regions of clustered surface vertices (such as mean intensity above a threshold).

Example: extracting the mean size and thickness of regions of cortex between 2 and 4 mm thick that contain at least 10 vertices:

$> surfclust -minval 2 -maxval 4 -minsize 10 -values sample_topo_left_cortexThickness.vff
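The clustering idea behind this can be sketched as follows (again a conceptual illustration, not the surfclust implementation): vertices whose value falls within [minval, maxval] are grouped into edge-connected regions, and regions with fewer than minsize vertices are discarded.

```python
# Sketch of surface clustering: flood-fill connected components of
# vertices passing a value threshold, then apply a minimum-size cutoff.
def cluster_vertices(values, neighbors, minval, maxval, minsize):
    passing = {i for i, v in enumerate(values) if minval <= v <= maxval}
    clusters, seen = [], set()
    for seed in passing:
        if seed in seen:
            continue
        # flood-fill the connected component containing 'seed'
        stack, comp = [seed], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(n for n in neighbors[v] if n in passing)
        seen |= comp
        if len(comp) >= minsize:
            clusters.append(sorted(comp))
    return clusters
```

Summary statistics (mean size, mean value, etc.) are then computed per surviving cluster.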

4.14 Troubleshooting

If a process fails, it is helpful to check intermediate files for errors. A quick way of identifying errors is to check the file sizes; if any file is zero-sized, it is guaranteed to be corrupt.

Common sources of error and fixes
The five most common sources of error (probably accounting for more than 80% of all problems) are:
– Mistyped commands or wrong input parameters (using a surface when a volume is required, or vice versa, etc). These can cause quite unexpected results that are not always obvious. Please read the manual and help functions carefully and follow the instructions exactly. If you’re new to Unix, take the time to learn it; it’s easy to make errors unwittingly.
– Not enough disk space. Clear disk space before running (and you need lots of it).
– Missing input files due to earlier processes failing, or from overzealous cleaning of disk space.
– No write permissions in output directory. Change permissions.
– Insufficient memory, causing programs to terminate and subsequent processes to fail. This is not always obvious. It is unwise to run the software with less than 500 MB of RAM. This is more or less standard on desktops these days; if you don’t have that much, upgrade. It’s cheap.

Less common sources of error are:
– Too little space between image edge and edge of skull/brain (can cause errors at later stages without affecting early processing steps). Run the preprocessing step and make sure to pad the image appropriately.
– Intensity inhomogeneity correction failed. Rerun with manual threshold settings.
– Too little gray/white/CSF contrast, too noisy data. Rerun preprocessing step with Wiener denoising option, increase the noise variance value if necessary. Get better MR data.
– Program naming conflicts – in the unlikely event that any of the programs in the $TFIDIR/bin directory has the same name as another program on your system. This should not happen, since the scripts use full path names, but it has not been verified extensively. If you run into this problem, contact the author.

If all else fails and you are unable to determine the source of the error, you can contact the author, but this should be considered a last resort, and even so I cannot guarantee that I will have the time to answer your question. However, if the error is due to a bug in the software, it will be fixed as soon as possible. Bug reports are welcome. Be sure to state exactly what you were doing and what the problem was, and be prepared to send me the input data so I can try to reproduce the bug.


MeshLab documentation

MeshLab is an advanced mesh processing system for the automatic and user-assisted editing, cleaning, filtering and rendering of large unstructured 3D triangular meshes.

MeshLab is actively developed by a small group of people at the Visual Computing Lab at the ISTI – CNR institute, a large group of university students, and some great developers from the rest of the world.

For the basic mesh processing tasks and for the internal data structures, the system relies on the GPL VCG library.


FREE VRML Viewer and matlab

FREE VRML Viewer and matlab, virtual reality toolbox:



The examples can also be viewed directly using standard VRML 97 browser plug-ins by clicking the corresponding « View virtual scene » links.

See some examples that were created using the Virtual Reality Toolbox, a tool that lets you view and interact with dynamic system simulations in a 3-D virtual reality environment. All of the behavioral dynamics of the examples are driven by MATLAB® and Simulink®, developed by The MathWorks, Inc., which allow the effective creation of dynamic models of real physical systems.

Solar System
This model represents the dynamics of the inner Solar System (Sun, Mercury, Venus, Earth + Moon, Mars). For visualization purposes, the planet sizes relative to their distances, and the distance between the Moon and the Earth, were adjusted. Otherwise, the model is a good approximation of Solar System dynamics.

View virtual scene
Download zipped VRML world

GNU Triangulated Surface Library – 3D surfaces meshed with interconnected triangles

GTS stands for the GNU Triangulated Surface Library. It is an Open Source Free Software Library intended to provide a set of useful functions to deal with 3D surfaces meshed with interconnected triangles. The source code is available free of charge under the Free Software LGPL license.

The code is written entirely in C with an object-oriented approach based mostly on the design of GTK+. Careful attention is paid to performance-related issues, as the initial goal of GTS is to provide a simple and efficient library for scientists dealing with 3D computational surface meshes.

A brief summary of its main features:

  • Simple object-oriented structure giving easy access to topological properties.
  • 2D dynamic Delaunay and constrained Delaunay triangulations.
  • Robust geometric predicates (orientation, in circle) using fast adaptive floating point arithmetic (adapted from the fine work of Jonathan R. Shewchuk).
  • Robust set operations on surfaces (union, intersection, difference).
  • Surface refinement and coarsening (multiresolution models).
  • Dynamic view-independent continuous level-of-detail.
  • Preliminary support for view-dependent level-of-detail.
  • Bounding-box trees and Kd-trees for efficient point location and collision/intersection detection.
  • Graph operations: traversal, graph partitioning.
  • Metric operations (area, volume, curvature …).
  • Triangle strips generation for fast rendering.

The screenshots will give you an idea of what GTS can do. You can read the reference manual for more information.

Help would be very appreciated and you are very welcome to contribute.

A multi-resolution model with respectively 2918, 4849 and 15076 triangles. The original model is on the left and was refined using a midpoint insertion technique.


topeScope freeware shareware 3D mac osx, UO format

Read stl, gts, nff, q3o, obj, off, ply, tri, uo

UO (point X Y Z)

The UO file format is used to store 3-D point sets (collections of X Y Z coordinates).
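Given the description above, a minimal reader might look like the sketch below. This assumes the format is one point per line in the form "point X Y Z", as the "UO (point X Y Z)" note suggests; check topeScope's own documentation for the exact grammar.

```python
# Minimal sketch of a reader for the UO point-set format as described
# above (assumed grammar: one "point X Y Z" record per line).
def read_uo(text):
    points = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == 4 and fields[0] == "point":
            points.append(tuple(float(f) for f in fields[1:]))
    return points
```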


[from the MINGLE site]

sourceforge and 3D; opensource; my list

name (alphab. order…) :

  • Fusion Viewer
  • GNU Triangulated Surface Library
  • Stani’s Python Editor
