## Archive for 3 June 2008

### Polygonal modeling and file format

In 3D computer graphics, polygonal modeling is an approach for modeling objects by representing or approximating their surfaces using polygons.

Polygonal modeling is well suited to scanline rendering.

Alternate methods of representing 3D objects include NURBS surfaces, subdivision surfaces, and equation-based representations used in ray tracers.

The basic object used in mesh modeling is a vertex, a point in three-dimensional space.

One vertex, two vertices.

Two vertices connected by a straight line become an edge.

Three vertices, connected to each other by three edges, define a triangle, which is the simplest polygon in Euclidean space. More complex polygons can be created out of multiple triangles, or as a single object with more than three vertices. Four-sided polygons (generally referred to as quads) and triangles are the most common shapes used in polygonal modeling. A group of polygons, connected to each other by shared vertices, is generally referred to as an element. Each of the polygons making up an element is called a face.
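The hierarchy above (vertices, edges, faces) can be sketched as a minimal indexed mesh. This is a hypothetical illustration of the idea, not the internal structure of any particular package:

```python
# Minimal indexed triangle mesh: vertices are 3D points,
# faces index into the vertex list, edges are derived from faces.

def mesh_edges(faces):
    """Collect the unique undirected edges of a face list."""
    edges = set()
    for face in faces:
        n = len(face)
        for i in range(n):
            a, b = face[i], face[(i + 1) % n]
            edges.add((min(a, b), max(a, b)))
    return edges

# A single quad split into two triangles sharing a diagonal edge.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
            (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2), (0, 2, 3)]

print(len(mesh_edges(faces)))  # 5 edges: 4 boundary + 1 shared diagonal
```

Sharing vertex indices between the two faces is what makes the pair an "element" rather than two disconnected triangles.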

A group of polygons which are connected together by shared vertices is referred to as a mesh. In order for a mesh to appear attractive when rendered, it is desirable that it be non-self-intersecting, meaning that no edge passes through a polygon. Another way of looking at this is that the mesh cannot pierce itself. It is also desirable that the mesh not contain any errors such as doubled vertices, edges, or faces. For some purposes it is important that the mesh be a manifold – that is, that it does not contain holes or singularities (locations where two distinct sections of the mesh are connected by a single vertex).

There are many disadvantages to representing an object using polygons. Polygons are incapable of accurately representing curved surfaces, so a large number of them must be used to approximate curves in a visually appealing manner. The use of complex models has a cost in lowered speed. In scanline conversion, each polygon must be converted and displayed, regardless of size, and there are frequently a large number of models on the screen at any given time. Often, programmers must use multiple models at varying levels of detail to represent the same object in order to cut down on the number of polygons being rendered.

## File formats

A variety of formats are available for storing 3D polygon data. The most popular are:

OBJ (or .OBJ) is a geometry definition file format first developed by Wavefront Technologies for its Advanced Visualizer animation package. The file format is open and has been adopted by other 3D graphics application vendors; it can be imported and exported by e-Frontier's Poser, Autodesk's Maya, Avid's Softimage|XSI, Blender, MeshLab, Misfit Model 3D, 3D Studio Max, Rhinoceros 3D, Hexagon, NewTek LightWave, Art of Illusion, GLC_Player and others. For the most part it is a universally accepted format.
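As a concrete illustration (a hand-written minimal example, not taken from any particular exporter), an OBJ file describing a single triangle looks like this – `v` lines give vertex coordinates and `f` lines reference them by 1-based index:

```
# triangle.obj - minimal illustrative example
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
```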

PLY is a computer file format known as the Polygon File Format or the Stanford Triangle Format. The Digital Michelangelo Project at Stanford University used the PLY format for an extremely high resolution 3D scan of Michelangelo's "David" sculpture. The format was principally designed to store three-dimensional data from 3D scanners. It supports a relatively simple description of a single object as a list of nominally flat polygons. A variety of properties can be stored, including colour and transparency, surface normals, texture coordinates and data-confidence values. The format permits one to have different properties for the front and back of a polygon. There are two versions of the file format, one in ASCII, the other in binary.
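For comparison, the ASCII variant of a minimal PLY file for one triangle might look like this (a hand-written sketch; real scanner output would declare the extra properties mentioned above):

```
ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
element face 1
property list uchar int vertex_indices
end_header
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
3 0 1 2
```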

STL is a file format native to the stereolithography CAD software created by 3D Systems. This file format is supported by many other software packages; it is widely used for rapid prototyping and computer-aided manufacturing. STL files describe only the surface geometry of a three-dimensional object without any representation of colour, texture or other common CAD model attributes. The STL format specifies both ASCII and binary representations. Binary files are more common, since they are more compact. An STL file describes a raw unstructured triangulated surface by the unit normal and vertices (ordered by the right-hand rule) of the triangles, using a three-dimensional Cartesian coordinate system.
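The ASCII STL encoding of the same single triangle looks like this (each facet carries its unit normal and exactly three vertices):

```
solid example
  facet normal 0.0 0.0 1.0
    outer loop
      vertex 0.0 0.0 0.0
      vertex 1.0 0.0 0.0
      vertex 0.0 1.0 0.0
    endloop
  endfacet
endsolid example
```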

---

• MeshLab is an open-source Windows and Linux application for visualizing, processing and converting three-dimensional meshes to or from the OBJ file format.
• GLC_Player is open-source software used to view 3D models in OBJ format and to navigate easily within these models.
• 3DMLW is a markup language that can display OBJ files in common web browsers.

RepRap is an open-source project that takes STL files as input and generates solid objects as output.

• The STL Format – Standard Data Format for Fabbers.
• How to Create an STL File – a guide to exporting STL files from various CAD packages (courtesy of ProtoCAM).
• SolidView – a commercial STL manipulation package with a Lite version available (on provision of a business email address) for STL viewing.
• Freesteel – a web interface where you can upload an STL file and render it into an image in your browser.
• ADMesh – a GPL-licensed, text-based program for processing triangulated solid meshes; it reads and writes the STL file format.

---

There are many other file formats capable of encoding triangles (such as VRML, …), but they have the disadvantage that it is possible to put things other than triangles into them, and thus produce something ambiguous or unusable.

*.STL, *.OBJ, *.PLY are OK.

### MRIcro – MRIreg registers an MRI scan with scalp locations

http://www.sph.sc.edu/comd/rorden/mrireg.html

It is often useful to coregister an MRI scan with the subject's real head geometry – for example, with Transcranial Magnetic Stimulation, it is useful to have a precise idea of which region of the brain is being stimulated. MRIreg is a free tool that allows you to identify which region of the scalp is near a particular brain region. To use MRIreg you will need:

• An MRI scan of the person you want to register.
• An Ascension Flock-Of-Birds/MiniBird or Polhemus Fastrak magnetic position tracker.

It is easiest to get good results from coregistration by using MRI scans where you have marked easy-to-find landmarks on the individual's face or scalp (e.g. place liquid-vitamin gel capsules on their facial beauty marks during scanning). You will need to find at least 5 anatomical landmarks.

### MRIcro and other freeware for surface or volume rendering

Introduction

This page describes how to create volume renderings using my free MRIcro software. You can learn a lot about the brain by viewing axial, coronal and sagittal slices. However, it is often useful to display lesion location on the rendered surface of the brain. Just as each individual has unique fingerprints, each brain has a unique sulcal pattern. SPM's spatial normalization adjusts the size and alignment of the MRI scan, but it does not deliver a precise sulcal match – such an algorithm would create many local distortions (just as an algorithm that attempted to normalize fingerprints from individuals with very different patterns would require many distortions). Volume rendering allows the viewer to grasp the sulcal pattern of the brain and see lesions in relation to common landmarks. A second advantage of displaying a lesion on a rendered image is that you can specify how 'deep' to search beneath the surface to display a lesion; in this way you can show the cortical damage without the underlying damage to the white matter. Note that showing only lesions near the brain's surface can also be misleading, as it can hide deeper damage. Therefore, it is often best to present surface renderings in conjunction with stereotactically aligned slices.

There are two popular ways to render objects in 3D. Surface rendering treats the object as having a surface of a uniform colour. In surface rendering, shading is used to show the location of a light source – with some regions illuminated while other regions are darker due to shadows. VolumeJ is an excellent example of a surface renderer. The benefit of surface rendering is that it is generally very fast – you only need to manipulate the points on the surface rather than every single voxel.

On the other hand, volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set.

A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner. Usually these are acquired in a regular pattern (e.g., one slice every millimeter) with a regular number of image pixels per slice. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value that is obtained by sampling the immediate area surrounding the voxel.

To render a 2D projection of the 3D data set, one first needs to define a camera in space relative to the volume. Also, one needs to define the opacity and color of every voxel. This is usually defined using an RGBA (for red, green, blue, alpha) transfer function that defines the RGBA value for every possible voxel value.
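The idea of an RGBA transfer function can be sketched in a few lines. The breakpoints below are invented for illustration; real renderers let the user edit this mapping interactively:

```python
# Sketch of an RGBA transfer function: map each scalar voxel value
# (0-255) to a grey colour and an opacity. Values below 40 are treated
# as air and made fully transparent (the threshold is illustrative).

def transfer(v):
    """Return (r, g, b, a) for a voxel intensity v in 0..255."""
    a = 0.0 if v < 40 else (v - 40) / 215.0  # air contributes nothing
    grey = v / 255.0
    return (grey, grey, grey, a)

r, g, b, a = transfer(20)
print(a)  # 0.0 - a voxel this dark is invisible in the projection
```

During rendering, each sample along a ray is converted through this function and composited front to back according to its alpha.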

A volume may be viewed by extracting surfaces of equal values from the volume and rendering them as polygonal meshes or by rendering the volume directly as a block of data. The Marching Cubes algorithm is a common technique for extracting a surface from volume data.

On the other hand, volume rendering examines the intensity of the objects, so darker tissues (e.g. sulci) appear darker than brighter tissue. Finally, hybrid rendering allows the user to combine these two techniques – first computing a volume rendering and then highlighting regions based on illumination. For example, my software can create volume, surface or combined volume-and-surface renderings. The images created by MRIcro below illustrate these techniques. The left-most image is a volume rendering, the middle image is a surface rendering, and the image on the right is a hybrid.

Surface rendering is by far the most popular approach to rendering objects. One reason for this is that surface rendering can be much quicker than volume rendering (as only the vertices need to be recomputed following a rotation, while in volume rendering every voxel must be recomputed). However, surface rendering typically has several disadvantages compared to volume rendering: it requires a high-quality scan and excellent skull extraction to show clean edges; surface colour does not reflect underlying tissue (unless texture maps are used); and to get sharp edges, the gray matter is typically eroded, creating an inaccurate image of brain size. The third point is illustrated in the image on the right. Note that a high air/surface threshold is required to show nice sulcal definition in the surface renderings; however, at these high values the gray matter has been stripped from the image. In contrast, low signal information enhances the definition of sulci for volume renderings.

Most rendering tools add perspective: closer items appear larger than more distant items. Perspective is a strong monocular cue for depth, so this technique creates a powerful illusion of depth. However, in some situations it is helpful to have perspective-free images ('orthographic rendering'). Orthographic rendering has a couple of advantages: it can be quicker, and it allows a region to remain a constant size in different views. MRIcro is an orthographic renderer.
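The difference comes down to whether depth divides the lateral coordinates when a 3D point lands on the image plane. A minimal sketch (the camera convention and focal length here are assumptions for illustration, not MRIcro's actual code):

```python
# Project a 3D point (x, y, z) onto a 2D image plane, with the camera
# looking along +z. Orthographic projection drops depth entirely;
# perspective projection divides by depth, so distant points shrink.

def orthographic(x, y, z):
    return (x, y)                  # depth z is simply ignored

def perspective(x, y, z, f=1.0):
    return (f * x / z, f * y / z)  # closer points (small z) look larger

# Same lateral offset at two depths: the orthographic image is
# identical, while the perspective image halves in size.
print(orthographic(1.0, 0.0, 2.0))  # (1.0, 0.0)
print(orthographic(1.0, 0.0, 4.0))  # (1.0, 0.0)
print(perspective(1.0, 0.0, 2.0))   # (0.5, 0.0)
print(perspective(1.0, 0.0, 4.0))   # (0.25, 0.0)
```

This is why an orthographic renderer keeps a region a constant size from any viewpoint: the projected extent never depends on distance.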

One word of caution: be careful about judging left and right with rendered views. My software retains the left and right of the image. This is particularly confusing with the coronal view when the head is facing toward you. When we see people facing us, we expect their left to be on our right. With my viewer, their left is on your left.

The main MRIcro manual describes how to download and install MRIcro. The software comes with a sample image of the brain.

Rendering an image with MRIcro is very straightforward. Load the image you wish to view (use the 'Open' command in the File menu), then press the button labelled '3D'. You will see a new window that allows you to adjust the air/surface threshold (which selects the minimum brightness that will be counted as part of your volume) as well as the surface depth (how many voxels beneath the surface are averaged to determine the surface intensity).

Most rendering tools require very high quality scans (such as the MRI scan above, which came from a single individual who was scanned 27 times). As noted earlier, surface-rendering tools in particular have problems with the low-resolution or average-contrast images that are typically found in the clinic. Fortunately, MRIcro works very well with clinical-quality scans. The figures on the right come from single fast scans on a clinical scanner (the whole MPRAGE sequence required 6 minutes on a Siemens 1.5T). The image on the left shows a stroke patient.

The images above on the right show a 'free rotation' with a cutaway through the skull. When you select the free rotation view, a new set of controls appears that allows you to select the azimuth and elevation of your viewpoint. A 3D cube illustrates the selected viewpoint – with a crosshair depicting the position of the nose. You can also use the mouse to drag the cube to your desired viewpoint. Finally, the 'free rotation' selection allows you to select a 'cutout' region so you can view inside the surface of the image.

MRIcro is specifically designed to help the user locate and identify the ridges and folds (gyri and sulci) of the human brain. These are often difficult to identify given 2D slices of the brain. By running two copies of my software, you can select a landmark on a rendered view and see the corresponding location on a 3D image (as shown below, note you need to have ‘Yoke’ checked [highlighted in green]).

MRIcro can also display Overlay images – the figures below illustrate overlays. Typically, these are statistical maps generated by SPM, VoxBo or Activ2000 which show functional regions (computed from PET, SPECT or fMRI scans). I have a web page dedicated to loading overlays with MRIcro.

For volume rendering, you must then make sure the ‘Overlay ROI/Depth’ checkbox is checked. The number next to this check box allows you to specify how deep beneath the surface the software will look for a ROI or Overlay (in voxels). A small value means you will only see surface cortical activations/lesions, while a large value will allow you to see deep activations. For example, in the image on the left below, a skull-stripped brain image was loaded and then a functional map was overlaid.

Left: functional results can be overlaid. Adjusting the depth value allows you to visualise surface or deep activity/lesions.

It is important to mention that MRIcro’s renderings of objects below the brain’s surface are viewpoint dependent. This is illustrated in the figures below. Both regions of interest (lesions) and overlays are mapped based on the viewer’s line of sight. This is very different from mri3dX, which computes the location of objects based on the surface normal (essentially, mri3dX computes a line of sight perpendicular to the plane of the surface). Each of these approaches has its benefits and costs – both are correct, but lead to different results. Note that the location of subcortical objects appears to move when viewpoint changes in MRIcro. On the other hand, deep objects will appear greatly magnified with mri3dX. The rendered image of a brain (above, right) illustrates this difference. Here MRIcro is showing a very deep lesion near the center of the brain. The lesion appears at a different location in each image (SPMers call this a ‘glass brain’ view). A very deep object like this would appear much larger in mri3dX.

In order to create high-quality images of the brain's surface, you need to strip away the scalp. This is a challenging problem. My software comes with Steve Smith's automated Brain Extraction Tool [BET] (for citations: Smith, SM (2002) Fast robust automated brain extraction, Human Brain Mapping, 17, 143-155). BET is usually able to extract brain images accurately and automatically. To use BET, simply click on 'Skull strip image [for rendering]' from the 'Etc' menu. You then select the image you want to convert and give a name for the new stripped image. If you have trouble brain-stripping an image, try these techniques:

• You can adjust BET’s fractional intensity threshold. The default is 0.50. Lower numbers lead to smoother brains and a larger estimate of brain size.
• BET starts stripping an image from the center of "gravity" of the volume (think of intensity as mass). If the COG of your head image is not near the centre of the brain, you may have a problem. This is particularly a problem with clinical images that show large portions of the neck. To clip excess slices from an image, choose MRIcro's 'Save as [clipped/format]' function and set the number of low and high slices you wish to clip (clipping is described in Stage 1, Step 1 of my normalization tutorial).

The Brain Extraction Tool described in the previous section is useful for removing a brain from the surrounding scalp. But what about removing other objects? For example, BET will not work if you wish to extract the image of a torso from surrounding speckle noise. Most scans show a bit of 'speckle' in the air surrounding an object (also known as 'salt and pepper noise'). Most of the time, you can simply adjust MRIcro's 'air/surface' threshold to eliminate noise. However, sometimes spikes of noise are impossible to eliminate without eroding the surface of the object you want to image. To help, MRIcro (1.36 or later) includes a tool to eliminate air speckles. Load the image and adjust the contrast (often choosing 'Contrast autobalance' [Fn5] does a good job, but this often does not work for CT scans). Choose 'Remove air speckles' from the 'Etc' menu. Adjust the 'Air/Surface threshold' so that most noise appears as isolated pixels, while the object you want to extract is virtually all green. Set the erode and dilate cycles – usually values of 2-3 for each is about right. Press 'Go' and name your new file.

To understand the settings of this command, consider the stages MRIcro uses to despeckle an image. First, consider an image with a few air speckles (figure A below; note a few red speckles in the air). MRIcro first smooths the image by finding the mean of the voxel and the 6 voxels that share a surface with it (B). This usually attenuates any speckles (a median filter would be better, but is much slower). Second, only voxels brighter than the user-specified threshold are included in a mask (C). Third, a number of passes of erosion are conducted (D). During each pass, a voxel is eroded if 3 or more of its immediate neighbors are not part of the mask. Fourth, the mask is grown for the number of dilate cycles (E). During each dilation pass, a voxel is grown if any of its immediate neighbors is part of the mask. Note that any cluster of voxels completely eliminated during erosion does not regrow – thus eliminating most noise speckles. Finally, the voxels from the original image are inserted into the masked region (F).
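The erode/dilate stages can be sketched in a few lines. MRIcro works on 3D voxels with 6 face-neighbours; this simplified sketch uses 2D pixels with 4 neighbours, but the logic is the same: a speckle wiped out by erosion has nothing left to regrow during dilation.

```python
# 2D sketch of the despeckling morphology: erode a binary mask,
# then dilate it. Isolated speckles vanish; solid regions survive.

NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def erode(mask):
    out = set()
    for (x, y) in mask:
        missing = sum((x + dx, y + dy) not in mask
                      for dx, dy in NEIGHBOURS)
        if missing < 3:            # erode when 3+ neighbours are absent
            out.add((x, y))
    return out

def dilate(mask):
    out = set(mask)
    for (x, y) in mask:
        for dx, dy in NEIGHBOURS:
            out.add((x + dx, y + dy))
    return out

# A 3x3 solid block plus one isolated speckle pixel far away.
block = {(x, y) for x in range(3) for y in range(3)}
mask = block | {(10, 10)}

after = dilate(erode(mask))
print((10, 10) in after)  # False - the speckle is gone
print((1, 1) in after)    # True  - the block's core survives
```

Running more erode cycles removes larger speckles at the cost of eating further into thin parts of the object, which is why 2-3 cycles of each is usually the sweet spot.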

The images to the right show how this technique can be used to effectively extract bone from a CT of an ankle. The left panel shows a standard rendering of the image using an air/surface threshold that shows the bone and hides the surrounding tissue. Note that the ends of the bones are not visible. On the right we see the same image after object extraction (threshold 110, 2 erode cycles, 2 dilate cycles). The extracted rendering appears much clearer than the original.

So far this web page has described MRIcro's two primary rendering techniques: volume and surface rendering. However, MRIcro 1.37 and later add a third technique: the maximum intensity projection (MIP). This technique simply plots the brightest voxel in the path of a ray traversing the image. The result is a flat-looking image that looks a bit like a 2D plain-film X-ray. This technique is typically a poor choice for most images. The exception is some CT scans and angiograms, where the MIP is able to identify bright objects embedded inside another object. The images at the right show an MR angiogram of my brain (this image shows an axial view of my Circle of Willis) and a CT scan of a wrist (note the bright metal pin). To create a MIP, simply press the 'MIP' button in MRIcro's render window.
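The maximum intensity projection is just a per-ray maximum. A pure-Python sketch on a tiny volume (the axis conventions here are illustrative, not MRIcro's):

```python
# Maximum intensity projection: for each (x, y) ray through the volume,
# keep only the brightest voxel along z. The volume is a nested list
# indexed as volume[x][y][z].

def mip(volume):
    return [[max(ray) for ray in plane] for plane in volume]

volume = [[[10, 200, 30], [5, 5, 5]],
          [[0, 0, 90],    [70, 20, 60]]]

print(mip(volume))  # [[200, 5], [90, 70]]
```

Because only the single brightest value survives, all depth ordering is discarded – which is exactly why MIP excels at pulling bright vessels or metal out of surrounding tissue, yet looks flat for ordinary anatomy.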

Here are some large sample datasets. The CT scan is perfect for surface renderings, and allows you to change the air/surface threshold to see either the skin or the bone as the image surface. This image demonstrates that MRIcro's rendering can be equally effective with CT scans. Additional images can be found on the web. Some nice datasets are a knee, a skull (with EEG leads attached), a bonsai tree, an aneurysm, a foot, a lobster, a high-resolution skull, a fish, and a teddy bear. All the images I provide for download here have had the voxels outside the object (typically air) set to zero (this improves file compression).

Tom Womack devised the rapid surface-shading algorithm and gave me a lot of tips (he also compiled the version of BET that I distribute). Krish Singh's brilliant mri3dX inspired me; its ability to show viewpoint-independent functional data makes it a complementary tool for visualising the brain. Steve Smith's BET usually turns skull stripping from a tedious effort into an automated process. Earl F. Glynn developed the code for computing a matrix based on azimuth and elevation; this elegant tool allows the user to change their viewpoint without having to worry about gimbal lock problems or confusing controls. MRIcro does not use or require DirectX or OpenGL. For great articles on 3D graphics, visit www.delphi3d.net.

In addition to MRIcro, a number of freeware programs are available that can display rendered images of the brain (either surface rendering, which treats the surface as a uniform color, or volume rendering, which takes into account the brightness of the material beneath the surface). Both MRIcro and mri3dX include Steve Smith's BET software for extracting the brain from the rest of the image. For the other programs, you will first need to skull-strip your brain images (e.g. with BET or BSE).

| Name | Platform | Description |
| --- | --- | --- |
| Activ2000 | Windows | Activ2000 for Windows can show functional activity on a surface rendering. |
| AMIDE | Linux | This Linux software can read Analyze, DICOM and ECAT images. |
| ImageJ with the Volume Rendering plugin | Macintosh, Unix, Windows | Michael Abramoff has added volume rendering features to Wayne Rasband's popular Java-based ImageJ software. ImageJ can read/write Analyze format using Guy Williams' plugin. |
| Java 3D Volume Rendering | Macintosh, Unix, Windows | Java-based volume renderer. |
| Julius | Unix, Windows | Volume rendering software. |
| OGLE | Linux, Windows | Volume rendering of grayscale (continuous intensity) and RGB (discrete colours) datasets. Ogle is a nice tool for rendering MRI/CT scans. For example, to view the clinical MRI scan from my Sample Datasets section, you can double-click on its .ogle text file (the first time you open a .ogle file, you will have to tell Windows that you want to open these files with Ogle). Here is a sample .ogle file for the clinical scan. You can also try a different version of this software named ogleS. |
| MEDAL | Windows | Reza A. Zoroofi's freeware for Windows can create surface renderings of Analyze and DICOM images. |
| MindSeer | Macintosh, Unix, Windows | Java-based volume renderer that can overlay statistical maps as well as venous/arterial maps. Can view Analyze, MINC and NIfTI formats. |
| mri3dX | Unix | Krish Singh's freeware, which runs on Linux, Mac OSX, Sun and SGI computers. A basic mri3dX tutorial is available. |
| Simian | Linux, Windows | Looks like the future of volume rendering. Joe Kniss has developed this software, which can take advantage of powerful but low-cost GeForce and Radeon graphics cards. The translucency approximation looks pretty stunning. |
| Space | Windows | Lovely interface with nice-looking rendering. |
| Volsh | Unix | NCAR volume rendering software. |
| VolRenApp | Windows | Volume and surface rendering. |
| Volume-One | Windows | This software can also use an extension for viewing diffusion tensor imaging data. |
| V^3 | Windows, Linux, MacOSX | Accelerated volume rendering (requires a GeForce video card). |
| ImageJ with the Volume Rendering plugin | Macintosh, Unix, Windows | Kai Uwe Barthel has added volume rendering features to Wayne Rasband's popular Java-based ImageJ software. ImageJ can read/write Analyze format using Guy Williams' plugin. |
| Voxx | Windows | Volume rendering tailored for confocal imaging (requires a GeForce video card, with fast hardware-accelerated rendering). Includes the ability to superimpose different image protocols of the same volume (e.g. with one protein type shown in red and the other shown in green). |
| MITK | Windows | Medical Imaging ToolKit, developed by Dr. Tian and colleagues. |

I MOVED THIS BLOG FROM WORDPRESS TO BLOGGER.