Archive for the 'traitement d’image' Category

Ready-to-download algorithms from the Biomedical Imaging Group, EPFL.

Available Algorithms

http://bigwww.epfl.ch/algorithms.html

http://www.google.com/codesearch/p?hl=fr&sa=N&cd=4&ct=rc#M0QGbzpICpo/kybic/thesis/&q=MRI%20mesh%20matlab

The algorithms below are ready to be downloaded. They are generally written in JAVA or in ANSI-C, either by students or by members of the Biomedical Imaging Group. Please contact the author of an algorithm if you have a specific question.
JAVA: Plug-ins for ImageJ
JAVA classes are usually meant to be integrated into the public-domain software ImageJ.
  • Drop Shape Analysis. New method based on B-spline snakes (active contours) for measuring high-accuracy contact angles of sessile drops.
  • Extended Depth of Focus. Extended depth of focus is an image-processing method to obtain in-focus microscopic images of 3D objects and organisms. We freely provide software, as an ImageJ plugin, to produce this in-focus image and the corresponding height map from a z-stack of images.
  • Fractional spline wavelet transform. This JAVA package computes the fractional spline wavelet transform of a signal or an image and its inverse.
  • Image Differentials. This JAVA class for ImageJ implements 6 operations based on the spatial differentiation of an image. It computes the pixel-wise gradient, Laplacian, and Hessian. The class exports public methods for horizontal and vertical gradient and Hessian operations (for those programmers who wish to use them in their own code).
  • MosaicJ. This JAVA class for ImageJ performs the assembly of a mosaic of overlapping individual images, or tiles. It provides a semi-automated solution where the initial rough positioning of the tiles must be performed by the user, and where the final delicate adjustments are performed by the plugin.
  • NeuronJ. This Java class for ImageJ was developed to facilitate the tracing and quantification of neurites in two-dimensional (2D) fluorescence microscopy images. The tracing is done interactively based on the specification of end points; the optimal path is determined on the fly by optimizing a cost function with Dijkstra's shortest-path algorithm. The procedure also takes advantage of an improved ridge detector implemented by means of a steerable filterbank.
  • PixFRET. The ImageJ plug-in PixFRET allows one to visualize the FRET between two partners in a cell or in a cell population by computing, pixel by pixel, FRET images from a sample acquired in three channels.
  • Point Picker. This JAVA class for ImageJ allows the user to pick some points in an image and to save the list of pixel coordinates as a text file. It is also possible to read the text file back so as to restore the display of the coordinates.
  • Resize. This ImageJ plugin changes the size of an image to any dimension using either interpolation or least-squares approximation.
  • SheppLogan. The purpose of this ImageJ plugin is to generate sampled versions of the Shepp-Logan phantom. Their size can be tuned.
  • Snakuscule. The purpose of this ImageJ plugin is to detect circular bright blobs in images and to quantify them. It allows one to keep a record of their location and size.
  • SpotTracker. Single-particle tracking over noisy image sequences. SpotTracker is a robust and fast computational procedure for tracking fluorescent markers in time-lapse microscopy. The algorithm is optimized for finding the time-trajectory of single particles in very noisy image sequences. The optimal trajectory of the particle is extracted by applying a dynamic programming optimization procedure.
  • StackReg. This JAVA class for ImageJ performs the recursive registration (alignment) of a stack of images, so that each slice acts as template for the next one. This plugin requires that TurboReg is installed.
  • Steerable feature detectors. This ImageJ plugin implements a series of optimized contour and ridge detectors. The filters are steerable and are based on the optimization of a Canny-like criterion. They have a better orientation selectivity than the classical gradient- or Hessian-based detectors.
  • TurboReg. This JAVA class for ImageJ performs the registration (alignment) of two images. The registration criterion is least-squares. The geometric deformation model can be translational, conformal, affine, or bilinear.
  • UnwarpJ. This JAVA class for ImageJ performs the elastic registration (alignment) of two images. The registration criterion includes a vector-spline regularization term to constrain the deformation to be physically realistic. The deformation model is made of cubic splines, which ensures smoothness and versatility.
ANSI C
Most often, the ANSI-C pieces of code are not complete programs, but rather elements of a library of routines.
  • Affine transformation. This ANSI-C routine performs an affine transformation on an image or a volume. It proceeds by resampling a continuous spline model.
  • Registration. This ANSI-C routine performs the registration (alignment) of two images or two volumes. The criterion is least-squares. The geometric deformation model can be translational, rotational, or affine.
  • Shifted linear interpolation. This ANSI-C program illustrates how to perform shifted linear interpolation.
  • Spline interpolation. This ANSI-C program illustrates how to perform spline interpolation, including the computation of the so-called spline coefficients.
  • Spline pyramids. This software package implements the basic REDUCE and EXPAND operators for the reduction and enlargement of signals and images by factors of two, based on a polynomial spline representation of the signal.
Others
  • E-splines. A Mathematica package is made available for the symbolic computation of exponential-spline related quantities: B-splines, Gram sequence, Green function, and localization filter.
  • Fractional spline wavelet transform. A MATLAB package is available for computing the fractional spline wavelet transform of a signal or an image and its inverse.
  • Fractional splines and fractals. A MATLAB package is available for computing the fractional smoothing spline estimator of a signal and for generating fBms (fractional Brownian motions). This spline estimator provides the minimum mean-square error reconstruction of an fBm (or 1/f-type signal) corrupted by additive noise.
  • Hex-splines: a novel spline family for hexagonal lattices. A Maple 7.0 worksheet is available for obtaining the analytical formula of any hex-spline (any order, regular, non-regular, derivatives, and so on).
  • MLTL deconvolution: This Matlab package implements the MultiLevel Thresholded Landweber (MLTL) algorithm, an accelerated version of the TL algorithm that was specifically developed for deconvolution problems with a wavelet-domain regularization.
  • OWT SURE-LET Denoising: This Matlab package implements the interscale orthonormal wavelet thresholding algorithm based on the SURE-LET (Stein's Unbiased Risk Estimate / Linear Expansion of Thresholds) principle.
  • WSPM: Wavelet-based statistical parametric mapping, a toolbox for SPM that incorporates powerful wavelet processing and spatial domain statistical testing for the analysis of fMRI data.

MATLAB Central – How to create 3D mesh model?


MATLAB Central – Newsreader – How to create 3D mesh model?: « Thread Subject: How to create 3D mesh model?

Subject: How to create 3D mesh model?

From: Tong

Date: 14 Jul, 2009 19:55:03

Message: 1 of 6

I have segmented meniscus images from MRI that is created in about 3mm slices. How would I combine these slices together to create a 3D model of the meniscus?

Subject: How to create 3D mesh model?

From: Luigi Giaccari

Date: 14 Jul, 2009 20:49:03

Message: 2 of 6

Please send me those models of yours, I am planning to build a surface reconstructor for sliced point clouds. Send to: giaccariluigi@msn.com

In the mean time look for:

http://www.mathworks.com/matlabcentral/fileexchange/22185
http://giaccariluigi.altervista.org/blog/

and related

Subject: How to create 3D mesh model?

From: Brad Henrie

Date: 17 Jul, 2009 21:45:18

Message: 3 of 6

‘Tong ‘ <celticbaseball06@gmail.com> wrote in message <h3inqn$ni5$1@fred.mathworks.com>…
> I have segmented meniscus images from MRI that is created in about 3mm slices. How would I combine these slices together to create a 3D model of the meniscus?

First place all of your slices into a 3-d matrix. This will give you a cube of data. You can then view it from multiple planes by using this format variable(:,:,a) where a is the slice position in a direction directly into your displayed image. Using the same format you can display other planes variable(:,a,:). Converting your image to greyscale will allow you to display it using implay.

I’m sure that since you are working with MRI you have access to the image processing toolbox.

While viewing images in a plane where the pixels are not square, you need to scale your image (for example, if you have 3x3x5 voxels and display the 3×5 pixel representation). Also remember your slice separation if you don't have 3-d k-space.

Subject: How to create 3D mesh model?

From: Image Analyst

Date: 18 Jul, 2009 04:02:35

Message: 4 of 6

‘Tong ‘ <celticbaseball06@gmail.com> wrote in message <h3inqn$ni5$1@fred.mathworks.com>…
> I have segmented meniscus images from MRI that is created in about 3mm slices. How would I combine these slices together to create a 3D model of the meniscus?
—————————————-
I’m not sure what you mean by ‘model,’ but you can combine 2D images together to form a 3D image by using the cat(3, slice1, slice2, slice3, slice4, slice5,……) function.

Subject: How to create 3D mesh model?

From: Tong

Date: 20 Jul, 2009 18:36:02

Message: 5 of 6

‘Image Analyst’ <imageanalyst@mailinator.com> wrote in message <h3rhgr$of5$1@fred.mathworks.com>…
> ‘Tong ‘ <celticbaseball06@gmail.com> wrote in message <h3inqn$ni5$1@fred.mathworks.com>…
> > I have segmented meniscus images from MRI that is created in about 3mm slices. How would I combine these slices together to create a 3D model of the meniscus?
> —————————————-
> I’m not sure what you mean by ‘model,’ but you can combine 2D images together to form a 3D image by using the cat(3, slice1, slice2, slice3, slice4, slice5,……) function.

What about when I am using regions of interest, not images?

Subject: How to create 3D mesh model?

From: fabio freschi

Date: 20 Jul, 2009 21:06:01

Message: 6 of 6

you can try iso2mesh in FE
fabio »
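Following up on the thread above, here is a minimal Matlab sketch of one way to go from segmented slices to a 3D surface mesh, using only isosurface and smooth3 from base Matlab. The file names and the 0.5 mm in-plane pixel size are hypothetical placeholders; the 3 mm slice spacing comes from the original question.

    % Sketch: stack binary ROI masks into a 3-D volume and mesh its boundary.
    % File names and the 0.5 mm in-plane pixel size are placeholders;
    % the 3 mm slice spacing is taken from the question above.
    files = dir('meniscus_slice_*.png');       % one binary mask per slice
    nz = numel(files);
    first = imread(files(1).name) > 0;
    [ny, nx] = size(first);
    V = false([ny nx nz]);
    V(:,:,1) = first;
    for k = 2:nz
        V(:,:,k) = imread(files(k).name) > 0;  % stack the masks along z
    end
    % build physical coordinates to account for anisotropic voxels
    [X, Y, Z] = meshgrid((1:nx)*0.5, (1:ny)*0.5, (1:nz)*3);
    Vs = smooth3(double(V), 'gaussian', 3);    % light smoothing before meshing
    FV = isosurface(X, Y, Z, Vs, 0.5);         % faces/vertices of the boundary
    patch(FV, 'FaceColor', [0.8 0.3 0.3], 'EdgeColor', 'none');
    daspect([1 1 1]); camlight; lighting gouraud; view(3);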

To get started on an article: PLoS ONE guidelines for figures and tables

Guidelines for Figure and Table Preparation

http://www.plosone.org/static/figureGuidelines.action

Contents:

  1. Introduction
  2. Creative Commons Attribution License
  3. Titles and Legends
  4. General Considerations
  5. Figure Preparation
  6. Figure Dimensions
  7. Figure Types
  8. Uploading Figures to the PLoS Manuscript Submission System
  9. Multimedia Files
  10. Image Manipulation
  11. How To
  12. Format Tables
  13. Getting Help

1. Introduction

As part of the process of making scientific and medical literature openly accessible on the Web, PLoS uses a streamlined production process that takes authors’ submitted figures straight to the formatting stage. Most importantly, PLoS does not redraw figures submitted for publication in articles. Therefore, figure preparation is the author’s responsibility.

Please read the following guidelines carefully and thoroughly. Failure to comply with these guidelines may result in lower-quality figures and prolonged publishing time of your article.

2. Creative Commons Attribution License

All figures and photographic images will be published under a Creative Commons Attribution License (CCAL), which allows them to be freely used, distributed, and built upon as long as proper attribution is given. Please do not submit any figures or photos that have been previously copyrighted unless you have express written permission from the copyright holder to publish under the CCAL license.

For license inquiries, email license@plos.org.

3. Titles and Legends

Titles and legends (captions) for figures published with articles (i.e., not Supporting Figures) should be included in the main manuscript text file, not as part of the figure files themselves. For each figure, list the following information at the end of the manuscript text, after the references:

  • Figure number (in sequence, using Arabic numerals: Figure 1, Figure 2, Figure 3, etc.)
  • Short title using a maximum of 15 words. The figure title should be bold type, using sentence case ending with a period (.). For example: Figure 1. Adaptation and its potential costs.
  • A detailed legend of 300 words maximum can follow the figure title. Figure parts should be indicated (see Parts Labels, below).
  • For more detailed information on Legends, see Author Guidelines: Figure Legends.

Supporting Figures. If Supporting Information figures will publish with your paper, please include the captions in the article file for PLoS Biology or Medicine, and in the File Title field of the online submission system for PLoS ONE, Neglected Tropical Diseases, Genetics, Computational Biology, or Pathogens.

Note: If at any point you have to change the numbering order of your figures, you must make sure that all figure captions correctly correspond with the figures.

4. General Considerations

There are two broad categories of figures in PLoS articles: (1) those publishing directly with the article and (2) Supporting Information figures.

Supporting Figures are not published directly in the article; rather, a hyperlink to the figure is provided in the online version of the published article. Figures publishing as Supporting Information can be in any file format or dimension, as long as they are no larger than 10 MB.

Provide a separate file for every figure in your manuscript, including Supporting Information figures. Figures should not be embedded in the main manuscript file. For example, if your manuscript has 10 figures, you would upload 10 individual files.

Note: PLoS converts EPS figures to TIFF before publishing so that they can be viewed in our online and PDF formats.

Recommended Graphics Software

Several graphics software packages are available to help you create high-quality graphics:

  • Adobe Photoshop
  • Adobe Illustrator
  • PowerPoint
  • CorelDraw
  • GIMP (freely distributed at www.gimp.org)

Note: Microsoft Word
PLoS does not recommend using Microsoft Word to adjust image size. Microsoft Word automatically down-samples figures and embeds them in the document at 72 dpi, so the images may be at a lower resolution and quality than is acceptable. We require that figures be created at a minimum resolution of 300 dpi.

Note: Microsoft Excel
PLoS does not recommend Excel to make or adjust figures. It does not have the optimal formatting to display graphics and images properly. This program should be used for tables only. See Table Guidelines for more information on formatting tables.

5. Figure Preparation

File Size

Individual figure files should not exceed 10 MB. If you are having trouble reducing the size of your files, refer to the section below titled Reduce TIFF File Size with LZW Compression.

Figure Quality

A figure that looks good on screen may not be at optimal resolution. Test your figures by sizing them to their intended dimensions and then printing them on your personal printer. The online version should look relatively similar to the personal-printer copy: it should not look fuzzy, jagged, pixelated, or grainy at the intended print size.

Note: The quality of your figures will be only as good as the lowest-resolution element placed in them. In other words, if you create a 72 dpi line graph and save it as a 300 dpi TIFF, the image will still print out as a 72 dpi image.
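For what it is worth, if your line graphs come out of Matlab, a minimal way to avoid the 72 dpi trap is to export at 300 dpi directly (a sketch; the file name is a placeholder):

    % Export a Matlab plot as a 300 dpi TIFF rather than at screen resolution.
    plot(1:10, (1:10).^2, 'LineWidth', 1);         % any figure content
    print(gcf, '-dtiff', '-r300', 'Figure1.tif');  % 'Figure1.tif' is a placeholder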

Figure Format

Figures for publication must be submitted in high-resolution TIFF or EPS format only. Some figure types should be submitted in TIFF only (see Figure Types below). If you submit an EPS file it will be converted to TIFF prior to publishing. See How To: Convert Other File Types to TIFF below for more information on converting figure files to TIFF.

Color Mode

Figures containing color should be saved in RGB rather than CMYK or any other channels.

Layered TIFFs

TIFF files with multiple layers are not an accepted format for figures. Please make sure you provide us with a flattened version of your file. To flatten a layered TIFF file, open your figure in Photoshop. From the menu bar select Layer/Flatten Image and save the file. See also Combination Figures, below.

Figure examples: a file that has layers, and the same file with the layers flattened so that only the Background layer remains.

Background Color

Create your figures using a white background. If you create figures using a transparent background, the figures may not display well in the online format.

Figure example showing a figure created with a transparent background. Transparent backgrounds do not work well in the online format. Figure example showing a figure created with a white background. White backgrounds display well in any format.

Lines, Rules, and Strokes

Lines should be at least 0.5 point and no more than 1.5 points in order to reproduce well in a PDF file or web format.

Figure example showing lines that are too thick and lines that are too light in color. Light colors do not display well when published. Figure example showing correct line widths and darker-colored accent lines.

White Space

Each figure should be closely cropped to minimize the amount of white space surrounding it. PLoS recommends a 2 point white space border around each figure. Cropping figures improves accuracy when the figure is placed among other elements during production of the final published article.

Figure example that has too much white space. Figure example that has the correct amount of white space.

Text within Figures

Fonts

Figure text must be in Arial font, between 8 and 12 points. Make sure that the visual information is readable at the size you select.

Figure text that requires a font family other than Arial (math symbols, etc.) must have the font information embedded in the figure file. See Embed Fonts in EPS Files and Convert Text to Outlines below for more information.

Parts labels

Multi-panel figures (those with parts A, B, C, and D) should be submitted as a single file that contains all parts of the figure. Label the figure itself with capital letters, Arial bold font, 12 points. Do not use punctuation (no periods or brackets). Any TIFFs with layers must be flattened (see Combination Figures below.)

Figure example that has the incorrect label format. Figure example that has the correct label format.
Figure example showing the use of the incorrect font family. Figure example correctly using the Arial font family.

6. Figure Dimensions

Figures for publication will be sized to fit 1, 1.5, or 2 columns of the final printable PDF of the article. Dimensions will also depend on the article type. Please follow the sizing recommendations below for your original submission to create high-quality, appropriately sized figures. See Figure Types below for descriptions and recommendations for line drawings, grayscale drawings, halftones, and combination figures.

Note: Figures for article types other than Research Articles are not sized or scaled. You must create figures for these article types at their actual print or online display size. See below for sizing information.

Figure Alignment

Figures will be left-aligned on the page or column, so please design them accordingly.

Figure Width

Figures can have a width between 8.25 cm and 17.15 cm and a maximum height of 23.5 cm. If your figures have labels that are in 8 point type or if your figures are very detailed, it is recommended that your figure be created so that it will span two columns.

Article Type

  • 2-column: Research Article, Expert Commentary, Guidelines and Guidance, Learning Forum, Neglected Diseases, PLoS Medicine Debate, Primer, Review, Symposium.
  • 3-column: Editorial, Education, Essay, Health In Action, Historical and Philosophical Perspectives, Historical Profiles and Perspectives, Interview, Message from ISCB, Opinion, Perspective, Policy Forum, Policy Platform, Research In Translation, Special Report, Viewpoint.

Quick Reference – Figure Dimensions for 2-Column Article Types
  • Width for 1-column figures: 3.25 in / 312 px / 8.25 cm / 19.49 picas
  • Width for 1.5-column figures: 4.75 – 5.0 in / 456 – 480 px / 12.06 – 12.7 cm / 28.5 – 30 picas
  • Width for 2-column figures: 6.75 in / 648 px / 17.15 cm / 40.5 picas
  • Maximum height for all figures: 9.25 in / 888 px / 23.5 cm / 55.5 picas

Quick Reference – Figure Dimensions for 3-Column Article Types
  • Width for 1-column figures: 2.15 in / 207 px / 5.5 cm / 12.95 picas
  • Width for 2-column figures: 4.5 in / 434 px / 11.5 cm / 27.15 picas
  • Width for 3-column figures: 6.75 in / 648 px / 17.15 cm / 40.5 picas
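Again for Matlab-made figures, a small sketch of how to size a figure to the 1-column width of a 2-column article type (8.25 cm, from the table above) before exporting it, so that text set in 8–12 point Arial prints at the intended size; the 6 cm height and the file name are arbitrary choices of mine.

    % Size a Matlab figure to 8.25 cm wide (1-column width above) and export it.
    fig = figure('Units', 'centimeters', 'Position', [2 2 8.25 6]);
    set(fig, 'PaperUnits', 'centimeters', 'PaperPosition', [0 0 8.25 6]);
    plot(randn(100, 1));
    set(gca, 'FontName', 'Arial', 'FontSize', 8);   % within the 8-12 pt range
    print(fig, '-dtiff', '-r300', 'Figure1.tif');   % placeholder file name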

7. Figure Types

Line Art

Line art has sharp, clean lines and geometrical shapes against a white background. Line art is typically used for tables, charts, graphs, and gene sequences. You can use a program like Illustrator to create high-quality line art. A minimum resolution of 300 dpi will maintain the crisp edges of the lines and shapes.

  • Format: EPS or TIFF
  • Minimum Resolution: 300 dpi

Grayscale

Grayscale figures contain varying tones of black and white. They contain no color, so grayscale is synonymous with « black and white. » The gray scale is divided into 256 sections with black at 0 and white at 255. Software for preparation of grayscale art includes Photoshop.

  • Format: EPS or TIFF
  • Minimum Resolution: 300 dpi

Halftones

The best example of a halftone is a photograph, but halftones include any image that uses continuous shading or blending of colors or grays, such as gels, stains, microarrays, brain scans, and molecular structures. To prepare and manipulate halftone images, use Photoshop or a comparable photo-editing program.

  • Format: TIFF
  • Minimum Resolution: 300 dpi

Combination Figures

Combination figures contain two or more types of images, for example, a halftone figure containing text. You should embed the images, group the objects, or flatten the layers, and flatten transparencies before saving as TIFF at a minimum of 300 dpi.

  • Format: TIFF
  • Minimum Resolution: 300 dpi

Stereograms

Stereograms are figures with two almost identical pictures placed side by side which, when viewed through special glasses or a stereoscope, produce a three-dimensional image.

If you plan on submitting a stereogram as one of your figures, make sure this is clearly mentioned in the caption for the figure within the manuscript. Stereograms must be sized so that the centers of the two images are 63 mm apart. Make sure that the stereogram figure is at the size at which you would like it to display. Stereograms will be checked prior to publishing, but this step will ensure yours will be viewed properly.

Quick Reference Table for Common Figure Types
  • Line Art — required file type: EPS or TIFF; required resolution: 300 dpi; example software: Adobe Illustrator.
  • Grayscale — required file type: EPS or TIFF; required resolution: 300 dpi; example software: Adobe Photoshop; GIMP.
  • Halftones — required file type: TIFF; required resolution: 300 dpi; example software: Adobe Photoshop; GIMP.
  • Combination Figures — required file type: TIFF; required resolution: 300 dpi; example software: Adobe Photoshop; GIMP.

8. Uploading Figures to the PLoS Manuscript Submission System

Upload Order

  • Upload the cover letter and then the article file first, before any figures. Ensure that the article file contains the figure legends, but not the figures themselves.
  • Figures should be numbered in the order they are first mentioned in the text, and uploaded in the same order. For example, Figure 1 should be uploaded as the first figure file, Figure 2 the second, etc.
  • Figures should be uploaded in the desired orientation.
  • Multimedia files (.avi or .swf files) must be uploaded as a Supporting Information file type and not a figure. See Multimedia Files below for more information.

Note: When a figure is uploaded to the PLoS manuscript submission system, a PDF file is created that contains the image but does not represent the final appearance of your figures in your published article. In addition, a « merged PDF » containing the article file and all of the figures is created automatically, which should be used by authors as a quick way to review their figures for egregious errors.

9. Multimedia Files

PLoS encourages authors to submit multimedia files that are crucial to the conclusions of the paper. Multimedia files should be smaller than 10 MB because of the difficulties that some users will experience in loading or downloading files. These files are published as Supporting Information. Preferred formats are:

  • Audio: MP3
  • Video: MOV, progressive download, 320 x 240 px frame size
  • Flash: SWF

10. Image Manipulation

Image files should not be manipulated or adjusted in any way that could lead to misinterpretation of the information present in the original image. Inappropriate manipulation includes but is not limited to:

  • The introduction, enhancement, movement, or removal of specific feature(s) within an image;
  • Unmarked grouping of images that should otherwise have been presented separately (for example, from different parts of the same gel, or from different gels, fields, or exposures);
  • Adjustments of brightness, contrast, or color balance that obscure, eliminate, or misrepresent any information.

Digital images in manuscripts nearing acceptance for publication may be scrutinized for any indication of improper manipulation. If evidence is found of inappropriate manipulation we reserve the right to ask for original data and, if that is not satisfactory, we may decide not to accept the manuscript.

We are grateful to staff at the Journal of Cell Biology (Rockefeller University Press) for their help in establishing these guidelines and procedures (http://www.jcb.org/misc/ifora.shtml#image_aquisition)

11. How To

Embed Fonts in EPS Files

Always embed fonts or create outlines when creating EPS files. If your figures require special symbols and Greek characters the text may not reproduce properly unless you embed your fonts or create outlines of the text. See the Convert Text to Outlines below for more information.

To embed fonts using Adobe Illustrator, open the EPS file. From the File Menu, select Save As. In the Save As dialog box, make sure that the Embed Fonts option is selected and click OK.

Convert Text to Outlines

When you convert text to outlines, the text is converted to a series of lines and fills. The reference to the font that was used to create the text is no longer present. This process makes it unnecessary for the PLoS production department to have the original font used to create the figure text. This is to ensure that your figures publish as you intended them to.

Example of text that has not been converted to outlines. Example of text that has been converted to outlines. Notice that every character is outlined.

You can use Adobe Illustrator to convert text to outlines by selecting the text you want to convert. Then from the Type menu, select Create Outlines (Shift + Control + O on PC, and Shift + Apple + O on Mac).

If you do not convert text to outlines, then when your figure is opened during the production process any text in a non-standard font will automatically be substituted with a default font. This can cause the text in the figure to render incorrectly.

Caution: You will not be able to change your text after it has been converted to outlines so make sure it is correct before converting.

Convert Other File Types to TIFF

Convert PDF to TIFF Using Photoshop

  1. Open the PDF file in Photoshop and select the page of the PDF that contains the figures to save as TIFF.
  2. From the File menu, select Save As to open the Save As dialog box.
  3. In the Save As dialog box, select TIFF from the Format dropdown list.
  4. When the TIFF Options dialog box displays, make sure to check the LZW compression checkbox.
  5. Click OK.

Convert EPS, JPG, GIF, or Other File Types to TIFF Using Photoshop

  1. Open the figure file in Photoshop.
  2. From the File menu, select Save As to open the Save As dialog box.
  3. In the Save As dialog box, select TIFF from the Format drop down list.
  4. When the TIFF Options dialog box displays, make sure to check the LZW compression checkbox.
  5. Click OK.

Note: Do not use the « optimize for web » wizard for any figures. Some programs may down sample your images to low resolution.

Convert PDF to TIFF Using Adobe Illustrator

  1. Open the PDF file in Adobe Illustrator, select the PDF page to export and click OK.
  2. From the File menu, select Export to display the Export dialog box.
  3. From the Export dialog box, select TIFF from the Save as Type drop down list and click OK.
  4. When the TIFF Options dialog displays, select LZW compression.
  5. Click OK to complete the process.

Convert EPS to TIFF Using Illustrator

  1. Open the EPS file in Adobe Illustrator.
  2. From the File menu, select Export to display the Export dialog box.
  3. From the Export dialog box, select TIFF from the Save as Type drop down list and click OK.
  4. When the TIFF Options dialog displays, select LZW compression.
  5. Click OK to complete the process.

Convert PowerPoint Files to High-Resolution TIFFs Using Adobe Acrobat and Photoshop

Caution: Do not use File > Save as > TIFF. This will result in a low-resolution, poor-quality figure.

Step I: Convert PowerPoint File to PDF

There are two possible ways to create PDFs from PowerPoint files: use the Adobe PDF menu in some versions of PowerPoint, or create a PDF via the Print command.

  1. Open your file in PowerPoint. From the Adobe PDF menu, select Change Conversion Settings. The PDFMaker Settings dialog displays.
  2. From the Conversion settings dropdown menu, select High Quality and click OK.
  3. From the Adobe PDF menu, select Convert to Adobe PDF. You will be asked to save the PDF file to a location of your choosing.
  4. Click OK.

– OR –

  1. Open your file in PowerPoint.
  2. Select Print from the File dropdown menu.
  3. Select the PDFCreator or similar tool in the Printer Name window.
  4. Click OK.

Note: If your PowerPoint file contains figures on multiple slides, after you create the PDF file you will need to use Adobe Acrobat to separate the figures/slides into individual files. You can also use PowerPoint to create separate files of each figure/slide.

Step II: Convert Multi-Page PDF File to Individual Files

  1. Using Adobe Acrobat Standard, open the PDF file that you created in Step 1. From the Document menu, select Pages and then Extract. The Extract Page dialog box displays.
  2. Enter the page numbers in the To and From fields and then select the Delete Pages checkbox. Checking this box will delete the page that you entered in the To and From fields from the PDF file.
  3. Click OK. The page that you specify in the previous step is now shown in Acrobat.
  4. From the File menu, select save and enter the file name (e.g., Figure 1) for the extracted page and then click OK.
  5. Repeat this process until a separate file is created for each figure/slide.

Step III: Convert Individual PDF Files to TIFFs

  1. Using Photoshop, open the PDF file that you created in Step II.
  2. From the File menu, select Save As.
  3. From the Save As dialog box, select TIFF from the Format dropdown list and click Save.
  4. In the TIFF Options dialog box, make sure the following options are selected. Under Image Compression, select LZW and under Pixel Order, select Interleaved.
  5. Click OK.
  6. Repeat this process until a separate TIFF file is created for each figure/slide.

Reduce TIFF File Size with LZW Compression

PLoS has a strict 10 MB figure file limit. To reduce the size of your figure, open your TIFF files in Photoshop. From the File menu, select Save As to open the Save As dialog box. In the Save As dialog box, select TIFF from the Format dropdown list. When the TIFF Options dialog box displays, make sure to check the LZW compression checkbox. Click OK.
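If you would rather stay in Matlab than open Photoshop, the same LZW trick can be done with imwrite (a sketch; the file names are placeholders):

    % Re-save an existing TIFF with LZW compression from Matlab.
    I = imread('Figure1.tif');
    imwrite(I, 'Figure1_lzw.tif', 'tif', 'Compression', 'lzw', 'Resolution', 300);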

Locate the Resolution Information in a TIFF File

You can locate the resolution of a figure file using Adobe Photoshop or through Windows Explorer.

Photoshop

To find the resolution of a figure using Photoshop, first open the file. Then from the Image menu, select Image Size. The Image Size dialog box will open, displaying the figure dimensions, document size, and resolution. You can decrease the size of a file, but you should not increase the resolution and/or dimensions of a file to meet the journal's requirements. Increasing the file size manually may result in poor-quality figures.

Windows Explorer

To check the resolution of a figure file using Windows Explorer, locate and select the file. Right-click and select Properties. In the Properties dialog box, select the Summary Tab. If you do not see the properties of the figures, click Advanced. This will display all of the properties associated with the selected figure. Look at the Horizontal Resolution and Vertical Resolution to determine the figure resolution.
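The same information can be read from Matlab with imfinfo (a sketch; the file name is a placeholder, and the resolution fields are usually in pixels per inch, as indicated by the ResolutionUnit field):

    % Read the size and resolution tags of a TIFF file.
    info = imfinfo('Figure1.tif');
    info = info(1);                     % first image if the TIFF has several pages
    fprintf('%d x %d pixels at %g x %g dpi\n', ...
        info.Width, info.Height, info.XResolution, info.YResolution);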

12. Format Tables

Tables submitted for production should be included at the end of the article DOC or RTF file. For LaTeX submissions, table files should be uploaded individually into the online submission system. Tables that will be Supporting Information files can be submitted in any allowed format: Word, Excel, PDF, PPT, JPG, EPS, or TIFF.

Title and footnotes

Each table needs a concise title of no more than one sentence. The legend and footnotes should be placed below the table. Footnotes can be used to explain abbreviations.

Specifications

Tables that do not conform to the following requirements may give unintended results when published. Problems may include movements of data (rows or columns), loss of spacing, or disorganization of headings. Note: Multi-part tables with varying numbers of columns or multiple footnote sections should be divided and renumbered as separate tables.

Table requirements:

  • Cell-based (e.g., created in Word with Tables tool or in Excel).
  • Editable (i.e., not graphic object).
  • Heading/subheading levels in separate columns.
  • Size no larger than one printed page (7 in x 9.5 in). Larger tables can be published as online supporting information.
  • No returns, tabs, or merged cells or rows.
  • No color, shading, lines, or rules.
  • No inserted text boxes or pictures.
  • No tables within tables.

Examples

Figure examples: incorrect subheads created with text boxes, and acceptable subheads within a table.

13. Getting Help

Contact: If you have questions about your figures after reading the guidelines, you can email figures@plos.org.

« image registration » in French: « recalage d'image »


Registration is a technique that consists of bringing images, or n-dimensional digital objects, into correspondence so that their respective information can be compared or combined. This matching is done by searching for a geometric (or spatio-temporal) transformation that maps one n-dimensional image onto another image of n or p dimensions. The technique has many applications, for example fusing several imaging modalities, various video-processing tasks such as motion tracking and compression, or building image mosaics…
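As a toy illustration of the simplest case (a pure translation between two grayscale images), here is a sketch using normxcorr2 from the Image Processing Toolbox; the file names are placeholders, and real medical registration generally requires affine or elastic models such as those implemented by TurboReg and UnwarpJ above.

    % Estimate the translation between two grayscale images by normalized
    % cross-correlation. File names are placeholders.
    fixed  = im2double(imread('scan_a.png'));
    moving = im2double(imread('scan_b.png'));
    c = normxcorr2(moving, fixed);               % correlation surface
    [mx, idx] = max(abs(c(:)));
    [ypeak, xpeak] = ind2sub(size(c), idx);
    offset = [ypeak - size(moving,1), xpeak - size(moving,2)];  % best placement of 'moving' in 'fixed'
    fprintf('estimated shift (rows, cols): [%d, %d]\n', offset(1), offset(2));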

SPM version 8b statistical parametric mapping MATLAB

the most useful links:

http://www.fil.ion.ucl.ac.uk/spm/software/spm8b/#Introduction

http://www.fil.ion.ucl.ac.uk/spm/doc/intro/

http://www.fil.ion.ucl.ac.uk/spm/ext/

The SPM approach in brief

The Statistical Parametric Mapping approach is voxel based:

  • Images are realigned, spatially normalised into a standard space, and smoothed.
  • Parametric statistical models are assumed at each voxel, using the General Linear Model (GLM) to describe the data in terms of experimental and confounding effects, and residual variability (a toy numerical illustration follows this list).
  • For fMRI the GLM is used in combination with a temporal convolution model.
  • Classical statistical inference is used to test hypotheses that are expressed in terms of GLM parameters. This uses an image whose voxel values are statistics, a Statistic Image, or Statistical Parametric Map (SPM{t}, SPM{Z}, SPM{F}).
  • For such classical inferences, the multiple comparisons problem is addressed using continuous random field theory (RFT), assuming the statistic image to be a good lattice representation of an underlying continuous stationary random field. This results in inference based on corrected p-values.
  • Bayesian inference can be used in place of classical inference, resulting in Posterior Probability Maps (PPMs).
  • For fMRI, analyses of effective connectivity can be implemented using Dynamic Causal Modelling (DCM).
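To make the GLM bullet a bit more concrete, here is a toy numerical illustration of the voxel-wise model (my own sketch, not SPM code): simulate one voxel's time course, fit y = X*beta + error by least squares, and form a t-statistic for a contrast.

    % Toy voxel-wise GLM (not SPM code): simulated data, least-squares fit, t-test.
    T = 120;                                          % number of scans
    task = mod(floor((0:T-1)'/10), 2);                % crude on/off block regressor
    X = [task, ones(T, 1)];                           % design matrix: task + constant
    y = X * [2; 100] + 5 * randn(T, 1);               % simulated voxel time course
    beta_hat = X \ y;                                 % least-squares estimate
    res = y - X * beta_hat;
    sigma2 = sum(res.^2) / (T - size(X, 2));          % residual variance
    c = [1; 0];                                       % contrast: task effect
    t = (c' * beta_hat) / sqrt(sigma2 * (c' * ((X' * X) \ c)));   % t-statistic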

NA-MIC (National Alliance for Medical Image Computing) Kit = free open source software platform

Overview

The NA-MIC Kit is a free open source software platform. The NA-MIC Kit is distributed under a BSD-style license without restrictions or « give-back » requirements; it is intended for research, but there are no restrictions on other uses. It consists of the 3D Slicer application software, a number of tools and toolkits such as VTK and ITK, and a software engineering methodology that enables multiplatform implementations. It also draws on other « best practices » from the community to support automatic testing for quality assurance. The NA-MIC Kit uses a modular approach, where the individual components can be used by themselves or together. The NA-MIC Kit is fully compatible with local installation (behind institutional firewalls) and installation as an internet service. Significant effort has been invested to ensure compatibility with standard file formats and interoperability with a large number of external applications.

See this presentation on the NA-MIC Kit for more information.

Download Central

Please go here to download Slicer software, documentation and data.

Software Packages

3D Slicer

3D Slicer is a software package for visualization and medical image computing. A tutorial for prospective users of the program can be found on the web. See our tutorials page for an introduction to the use of 3D Slicer. More…


The Visualization Toolkit VTK

The Visualization Toolkit is an object-oriented toolkit for processing, viewing and interacting with a variety of data forms including images, volumes, polygonal data, and simulation datasets such as meshes, structured grids, and hierarchical multi-resolution forms. It also supports large-scale data processing and rendering. More…


The Insight Toolkit ITK

The Insight Segmentation and Registration Toolkit (ITK) is an open-source software toolkit for performing registration and segmentation. Segmentation is the process of identifying and classifying data found in digitally sampled representations. Typically the sampled representation is an image acquired from such medical instrumentation as CT or MRI scanners. Registration is the task of aligning or developing correspondences between data. For example, in the medical environment, a CT scan may be registered with a MRI scan in order to combine the information contained in both. More…


KWWidgets GUI Toolkit

KWWidgets is an Open Source library of GUI classes based on Tcl/Tk with a C++ API. This library was originally developed by Kitware for ParaView, and now has been extended in functionality and architecture thanks to NAMIC support. More…


Teem Libraries and Command Line Tools

Teem is a coordinated group of libraries for representing, processing, and visualizing scientific raster data. Teem includes command-line tools that permit the library functions to be quickly applied to files and streams, without having to write any code. More…


XNAT Web-based Image Informatics Server

The Extensible Neuroimaging Archive Toolkit (XNAT) is an open source software platform designed to facilitate management and exploration of neuroimaging and related data. XNAT includes a secure database backend and a rich web-based user interface.

NA-MIC is working to provide a portable, easy-to-install and easy-to-administer version of XNAT that can be deployed as part of the Kit. These efforts will build on ongoing work in the BIRN community to integrate Slicer with XNAT.


Batchmake

BatchMake is a cross-platform tool for batch processing of large amounts of data. BatchMake can process datasets locally or on distributed systems using Condor (a grid computing tool that enables distributed computing across the network). Some of the key features of BatchMake include: 1) a BSD license, 2) a CMake-like scripting language, 3) distributed scripting via Condor, 4) a centralized remote website for online statistical analysis, 5) a user interface using FLTK, and 6) cross-platform support. More…


CMake The Cross-platform Make Tool

CMake is used to control the software build process using simple platform, compiler and operating system independent configuration files. CMake generates native makefiles and workspaces that can be used in the development environment of your choice. That is, CMake does not attempt to replace standard development tools such as compilers and debuggers, rather it produces build files and other development resources that can benefit from automated generation. Further, once CMake configuration files are created, they can be used to produce developer resources across the many platforms that CMake supports. CMake is quite sophisticated: it is possible to support complex environments requiring system configuration, pre-processor generation, code generation, and template instantiation. More…

New: CMake has been adopted by KDE, one of the world’s largest open source software systems.


CDash, CTest, CPack Software Process Tools

As an adjunct to CMake the tools CDash, CTest, CPack are used to test and package all components of the NAMIC kit. CTest is a testing client that locally performs testing on a software repository, and then communicates the results of the testing to CDash (and other testing, dashboard servers such as DART2). CPack is a cross-platform tool for packaging, distributing and installing the NAMIC kit on various systems including Linux, Windows, and Mac OSX. More…


DART Testing Server

DART is a testing server, meaning that it gathers the results of testing from clients (such as CTest) and aggregates them on the testing « dashboard ». This dashboard is central to the NAMIC software process; it provides a centralized web site where NAMIC Kit developers and users can ascertain the day-to-day health of the software, and repair the software immediately if faults are discovered. It facilitates distributed development, and provides the stability that complex software such as the NAMIC Kit requires to support a large community of users. More…

—————

Ref.

http://www.na-mic.org/Wiki/index.php/NA-MIC-Kit

—————

NA-MIC Kit in Numbers

The numbers in this table are statistics characterizing the NA-MIC kit. They provide an estimate of the scale of the Kit, including approximate costs to create and total effort expended. Note that estimates such as these are required because large open-source software systems cannot be tracked via direct investment since much of the effort is voluntary in nature, and distributed across the world through a variety of organizations.
Source: http://www.ohloh.org. Captured on May 30 2008. See the Ohloh website for an explanation of how the numbers were computed.

Package Lines of code Person years Price tag at 100k per person year
Slicer 587,919 161 $16,068,440
KWW 189,627 49 $ 4,925,590
VTK 1,344,989 385 $38,521,873
ITK 711,474 197 $19,712,495
CMAKE 213,671 56 $ 5,586,895
Total 3,047,680 848 $84,815,293

matlab .m .ppt course: introduction to the theory of manifolds; mesh generation; mesh processing

take care: too many bugs…

This course is an introduction to the theory of manifolds. Manifold models arise in various areas of mathematics, image processing, data mining and computer science. Surfaces of arbitrary dimension can be used to model the non-linear datasets that one encounters in modern data processing. Numerical methods make it possible to exploit this geometric non-linear prior in order to extract relevant information from the data. These methods include in particular local differential computations (related to the Laplacian operator and its variants) and global distance methods (related to geodesic computations). In this course, you will learn how to perform differential and geodesic computations on images, volumes, surfaces and high-dimensional graphs.

The course includes a set of Matlab experiments. These experiments give an overview of various tasks in computer vision, image processing, learning theory and mesh processing. This includes computation of shortest paths, Voronoi segmentations, geodesic Delaunay triangulations, surface flattening, dimensionality reduction and mesh processing.

Ref.

http://www.mathworks.de/matlabcentral/fileexchange/loadFile.do?objectId=13464&objectType=file

Lecture 0 – Basic Matlab Instructions

Abstract : Learn the basic features of Matlab, and learn how to load and visualize signals and images.

Basic Matlab commands.

  • Create a variable and an array
  • % this is a comment
    a = 1; a = 2+1i; % real and complex numbers
    b = [1 2 3 4]; % row vector
    c = [1; 2; 3; 4]; % column vector
    d = 1:2:7; % here one has d=[1 3 5 7]
    A = eye(4); B = ones(4); C = rand(4); % identity, 1 and random matrices
    c = b'; % transpose

  • Modification of vectors and matrices
  • A(2,2) = B(1,1) + b(1); % to access an entry in a vector or matrix
    b(1:3) = 0; % to access a set of indices in a matrix
    b(end-2:end) = 1; % to access the last entries
    b = b(end:-1:1); % reverse a vector
    b = sort(b); % sort values
    b = b .* (b>2); % set to zeros (threshold) the values below 2
    b(3) = []; % suppress the 3rd entry of a vector
    B = [b; b]; % create a matrix of size 2×4
    c = B(:,2); % to access 2nd column

  • Advanced instructions
  • a = cos(b); a = sqrt(b); % usual functions
    help perform_wavelet_transform; % print the help
    a = abs(b); a = real(b); a = imag(b); a = angle(b); % modulus, real part, imaginary part and phase of a complex number
    disp('Hello'); % display a text
    disp( sprintf('Value of x=%.2f', x) ); % print a value with 2 digits
    A(A==Inf) = 3; % replace Inf values by 3
    A(:); % flatten a matrix into a column vector
    max(A(:)); % max of a matrix
    M = M .* (abs(M)>T); % threshold to 0 the values below T.

  • Display
  • plot( 1:10, (1:10).^2 ); % display a 1D function
    title('My title'); % title
    xlabel('variable x'); ylabel('variable y'); % axis
    subplot(2, 2, 1); % divide the screen in 2×2 and select 1st quadrant

  • Programming
  • for i=1:4 % repeat the loop for i=1, i=2, i=3 and i=4
    disp(i); % do something here
    end
    i = 4;
    while i>0 % while syntax
    disp(i); % do something
    i = i-1;
    end

Load and visualize signals and images

  • Load and plot a signal (the function load_signal.m should be in the toolbox of each course):
  • f = load_signal('Piece-Regular', n); % signal of size n
    plot(f);

  • Load and display an image (the function load_image.m should be in the toolboxes):
  • I = load_image('lena');
    imagesc(I); axis image; colormap gray(256);

Copyright © 2006 Gabriel Peyré

Lecture 1 – Active Contours and Level Sets

Abstract: The goal of this lecture is to use the level set framework in order to perform curve evolution. The mean curvature motion is the basic tool, and it can be extended into edge-based (geodesic active contours) and region-based (Chan-Vese) snakes.

Setting up Matlab.

  • First download the Matlab toolbox toolbox_fast_marching.zip. Unzip it into your working directory. You should have a directory toolbox_fast_marching/ in your path.
  • The first thing to do is to install this toolbox in your path.
  • path(path, 'toolbox_fast_marching/');
    path(path, 'toolbox_fast_marching/data/');
    path(path, 'toolbox_fast_marching/toolbox/');

  • Recompile the mex files for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available in toolbox_fast_marching/ for MacOS and Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_fast_marching
    compile_mex;
    cd ..

Managing level set function.

  • In order to perform curve evolution, we will deal with a distance function stored in a 2D image D. The curve will be embedded in the level set D=0, and it will be evolved by modifying the image D. A curve evolution ODE can thus be replaced by a PDE on D. This makes it possible to deal with topological changes when a curve splits or two curves merge.
  • n = 200; % size of the image
    % load a distance function
    D0 = compute_levelset_shape('circlerect2', n);
    % type 'help compute_levelset_shape' to see other
    % basic curves you can load.

    % display the curve
    clf; hold on;
    imagesc(D0); axis image; axis off; axis([1 n 1 n]);
    [c,h] = contour(D0,[0 0], 'r');
    set(h, 'LineWidth', 2);
    hold off;
    colormap gray(256);

    % do the union of two curves
    options.center = [0.15 0.15]*n;
    options.radius = 0.1*n;
    D1 = compute_levelset_shape('circle', n,options);
    imagesc(min(D0,D1)<0);

  • During the curve evolution, the image D might become far from being a distance function. In order to stabilize the algorithm, one needs to re-compute this distance function.
  • % here we simulate a modification of the distance function
    [Y,X] = meshgrid(1:n,1:n);
    D = (D0.^3) .* (X+n/3);
    D1 = perform_redistancing(D);
    % display both the original and the new,
    % redistanced, curve (should be very close)

Mean Curvature Motion.

  • In order to compute differential quantities (tangent, normal, curvature, etc) on the curve, you can compute derivatives of the image D.
  • % the gradient
    g0 = divgrad(D);
    % display the gradient (as arrow field with ‘quiver’, …)

    % the normalized gradient
    d = max(eps, sqrt(sum(g0.^2,3)) );
    g = g0 ./ repmat( d, [1 1 2] );
    % display

    % the curvature
    K = d .* divgrad( g );
    % display

  • The mean curvature motion of the level sets of an image is driven by the equation ∂D/∂t = |∇D| div( ∇D / |∇D| ): each level set moves along its normal with a speed equal to its curvature.

    Implement this evolution explicitly in time using finite differences (a possible update step is sketched after the figure at the end of this section).

    Tmax = 1000; % maximum time of evolution
    dt = 0.4; % time step (should be small)
    niter = round(Tmax/dt); % number of iterations
    D = D0; % initialization

    for i=1:niter
    % compute the right hand size of the PDE

    % update the distance field
    D = …;
    % redistance the function from time to time
    if mod(i,30)==0
    D = perform_redistancing(D);
    end
    % display from time to time
    if mod(i,30)==1
    % display here

    end
    end

    Curve evolution under the mean curvature motion (the background is the distance function D).
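If you want to check your own answer, here is one possible way to fill in the right-hand side of the loop above, reusing the divgrad convention of this course (my sketch, not the official solution; D and dt are the variables defined in the block above):

    % One possible explicit update for the mean curvature motion
    % dD/dt = |grad D| div( grad D / |grad D| ).
    g0 = divgrad(D);                              % gradient of D
    d  = max(eps, sqrt(sum(g0.^2, 3)));           % |grad D|
    K  = d .* divgrad(g0 ./ repmat(d, [1 1 2]));  % |grad D| times the curvature
    D  = D + dt * K;                              % explicit Euler step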

Edge-based Segmentation with Geodesic Active Contour (snakes + level set).

  • Given a background image M to segment, one needs to compute an edge-stopping function E. It should be small in areas of high gradient and large in areas of low gradient.
  • % load an image
    name = 'brain';
    M = rescale( sum( load_image(name, n), 3) );
    % display it

    % compute a smoothed gradient
    sigma = 4; % blurring size
    G = divgrad( perform_blurring(M,sigma) );
    % compute the norm of the gradient
    d = …
    % compute the edge-stopping function
    E = …
    % rescale it so that it is in realistic ranges
    E = rescale(E,0.3,1);

  • The geodesic active contour evolution is a mean curvature motion modulated by the edge-stopping function: ∂D/∂t = E |∇D| div( ∇D / |∇D| ) + ⟨∇E, ∇D⟩.

    Implement this evolution explicitly in time using finite differences (a possible update step is sketched at the end of this section).
  • Segmentation with geodesic active contours.
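As above, a possible explicit step for the geodesic active contour (a sketch, not the official solution; E is the edge-stopping map computed in the previous block):

    % Possible explicit update for the geodesic active contour:
    % dD/dt = E |grad D| div( grad D / |grad D| ) + <grad E, grad D>.
    g0 = divgrad(D);                              % gradient of D
    d  = max(eps, sqrt(sum(g0.^2, 3)));           % |grad D|
    K  = d .* divgrad(g0 ./ repmat(d, [1 1 2]));  % |grad D| times the curvature
    gE = divgrad(E);                              % gradient of the edge-stopping map
    D  = D + dt * (E .* K + sum(gE .* g0, 3));    % curvature term + advection term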

Region-based Segmentation with Chan-Vese (Mumford-Shah + level sets).

  • The geodesic active contour uses an edge-based energy. It has many local minima and is very sensitive to initialization. In order to circumvent these drawbacks, one can use a region-based energy such as the Mumford-Shah functional. Recast into the level set framework, it penalizes the length of the curve (weighted by lambda) together with the deviation of the image M from a constant c1 inside the curve and from a constant c2 outside.

    The corresponding gradient descent is the Chan-Vese active contour method, which evolves D with a curvature (length) term balanced against the two data terms (M−c1)² and (M−c2)².

    Implement this evolution explicitly in time using finite differences, when c1 and c2 are known in advance.
  • % initialize with a complex distance function
    D0 = compute_levelset_shape('small-disks', n);
    % set parameters
    lambda = 0.8;
    c1 = …; % black
    c2 = …; % gray

    Segmentation with Chan-Vese active contour without edges.
  • In the case where one does not know the constants c1 and c2 in advance, how can they be updated automatically during the evolution? Implement this method (a possible answer is sketched below).
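For the last question, a possible answer (assuming the convention that D is negative inside the curve; swap the two lines if your level set uses the opposite sign):

    % Automatic update of the region constants during the Chan-Vese evolution.
    c1 = mean(M(D < 0));                          % mean intensity inside the curve
    c2 = mean(M(D >= 0));                         % mean intensity outside the curve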
Copyright © 2006 Gabriel Peyré

Lecture 2 – Front propagation in 2D and 3D

Abstract: The goal of this lecture is to manipulate the fast marching algorithm in 2D and 3D. Applications to shortest path extraction (e.g. road tracking and tubular structure extraction in medical images) and Voronoi cell segmentation are presented.

Setting up Matlab.

  • First download the Matlab toolbox toolbox_fast_marching.zip. Unzip it into your working directory. You should have a directory toolbox_fast_marching/ in your path.
  • The first thing to do is to install this toolbox in your path.
  • path(path, 'toolbox_fast_marching/');
    path(path, 'toolbox_fast_marching/data/');
    path(path, 'toolbox_fast_marching/toolbox/');

  • Recompile the mex files for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available in toolbox_fast_marching/ for MacOS and Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_fast_marching
    compile_mex;
    cd ..

Front propagation in 2D.

  • We can now load an image and build a speed propagation function W. Note that areas with W values near 0 (black pixels) are those in which the front propagates slowly, so that W is the inverse of the potential that weights the metric.
  • n = 128;
    name = 'cavern'; % other possibilities are 'mountain' and 'road2'
    W = load_image(name, n);
    W = rescale(W, 0.01, 1); % set up a reasonable range for the potential
    % display the weighting function

    clf; imagesc(W); colormap gray(256);

  • After asking the user for the starting and ending points, we proceed to the front propagation. Note that the propagation terminates when the front reaches the ending point.
  • [start_points,end_points] = pick_start_end_point(W);
    options.end_points = end_points;
    [D,S] = perform_fast_marching_2d(W, start_points, options);
    % display the distance function
    clf; imagesc(D);
    D(D==Inf) = 0; % remove Inf values that make contour crash
    figure; contour(D,50);

  • The shortest path is extracted by performing a gradient descent of D, starting from the ending point.
  • p = extract_path_2d(D,end_points, options);
    % display the path
    clf; plot_fast_marching_2d(W,S,p,start_points,end_points);

  • Now that you know the basic commands, you can try other speed functions loaded from other files, and you can try more complicated potentials than just the value of the image. For instance, you can try a potential based on the gradient of the image computed using grad(W).


    Left : distance function for 3 different W maps.
    Center : the corresponding level set maps.
    Right : shortest path together with the explored area (in red).

3D Volumetric Shortest Paths.

  • First you need to load a 3D array of data M. Such a volumetric dataset is more difficult to visualize than a standard 2D image. You can render slices like M(:,:,i), or use a volumetric renderer such as vol3d (see the next step). You can download the 3D data set here.
  • % load the whole volume
    load brain1-crop-256.mat
    % crop to retain only the central part
    n = 100;
    M = rescale( crop(M,n) );
    % display some horizontal slices
    imageplot(M(:,:,50));
    % same thing in the other directions
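
    A possible way to look at the two other directions (a sketch; squeeze removes the singleton dimension so that imageplot receives a 2D array):
    % slices orthogonal to the two other axes
    figure; imageplot( squeeze(M(:,50,:)) );
    figure; imageplot( squeeze(M(50,:,:)) );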


    Cross sections of the data-set.
  • Another, more efficient, way to render volumetric data is to display a semi-transparent volumetric image. This can be achieved using the function vol3d, which renders semi-transparent slices of the data orthogonal to the viewing direction. In order to do so, you need to set up a suitable alpha mapping to make some parts of the volume transparent. If you rotate the data, you then need to re-render the volume using vol3d(h).
  • clf;
    h = vol3d('cdata',M,'texture','2D');
    view(3); axis off;
    % set up a colormap
    colormap bone(256);
    % set up an alpha map
    options.center = …; % here a value in [0,1]
    options.sigma = .08; % control the width of the non-transparent region
    a = compute_alpha_map('gaussian', options); % you can plot(a) to see the alphamap
    colormap bone(256);
    % refresh the rendering
    vol3d(h);
    % try with other alphamapping and colormapping


    Volumetric rendering with different alpha-mappings; from one image to the next, the options.center value is increased.
  • Geodesic distances can be computed on a 3D volume using the Fast Marching algorithm. The important point here is to define a correct potential field W, which should be large in the regions where you want the front to move fast; geodesics will then follow these regions. In order to do so, we will ask the user to click on a starting point in a given horizontal slice M(:,:,delta). The potential is then computed so as to be large for values of M that are close to the value at this starting point.
  • % ask to the user for some input point
    delta = 5;
    clf; imageplot(M(:,:,delta));
    title('Pick starting point');
    start_point = round( ginput(1) );
    start_point = [start_point(2); start_point(1); delta];
    % compute a potential that is high only very close
    % to the value of M at the selected point
    W = …;
    W = rescale(W,.001,1);
    % perform the front propagation

    options.nb_iter_max = Inf;
    [D,S] = perform_fast_marching_3d(W, start_point, options);
    % display the results using vol3d
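
    One possible choice for the 'W = …' hole above (a sketch; sigma is a hypothetical tolerance parameter to be tuned):
    % potential that is large where M is close to the intensity at the start point
    val = M( start_point(1), start_point(2), start_point(3) );
    sigma = 0.05;                             % width of the selected intensity range (assumed value)
    W = exp( -(M-val).^2 / (2*sigma^2) );
    % the script then rescales W to [.001, 1] as above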

    Exploration of the distance function using vol3d.
  • In order to extract a geodesic, we need to select an ending point and perform a descent of the distance function D from this ending point. The selection is done by choosing a point of low distance value in the slice D(:,:,end-delta).
  • % extract a slice
    d = D(:,:,n-delta);
    % select the point (x,y) of minimum value of d
    % hint: use functions 'min' and 'ind2sub'

    [x,y] = …;
    end_point = [x;y;n-delta];


    % extract the geodesic by discrete descent
    path = compute_discrete_geodesic(D,end_point);
    % draw the path
    Dend = D(end_point(1),end_point(2),end_point(3));
    D1 = double( D<=Dend );
    clf;
    plot_fast_marching_3d(M,D1,path,start_point,end_point);
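
    A possible way to fill the '[x,y] = …' hole above, following the hint (min and ind2sub):
    % index of the minimum of d, converted back to 2D coordinates
    [tmp, idx] = min( d(:) );
    [x, y] = ind2sub( size(d), idx );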

    Exploration of the distance function using vol3d.
    The red surface indicates the region of the volume that has been explored
    before hitting the ending point.

Voronoi segmentation and geodesic Delaunay triangulation in 2D.

  • With the same code as above, one can use multiple starting points. The function perform_fast_marching_2d returns a segmentation map Q that contains the Voronoi segmentation.
  • n=300;
    name = 'constant'; % other possibility is 'mountain'
    W = load_potential_map(name, n);
    m = 20; % number of center points
    % compute the starting point at random.
    start_points = floor( rand(2,m)*(n-1) ) +1;
    [D,Z,Q] = perform_fast_marching_2d(W, start_points);
    % display the sampling with the distance function
    clf; hold on;
    imagesc(D'); axis image; axis off;
    plot(start_points(1,:), start_points(2,:), '.');
    hold off;
    colormap gray(256);
    % display the segmentation
    figure; clf; hold on;
    imagesc(Q'); axis image; axis off;
    plot(start_points(1,:), start_points(2,:), '.');
    hold off;
    colormap gray(256);

    Example of Voronoi cells (distance functions) obtained with
    a constant speed W (left) and the mountain map (right).
    Note how the cells on the left have polygonal boundaries whereas the cells on the right
    have curvy boundaries.
  • A geodesic Delaunay triangulation is obtained by linking starting points whose Voronoi cells touch. This is the dual of the original Voronoi segmentation.
  • faces = compute_voronoi_triangulation(Q);
    hold on;
    imagesc(Q'); axis image; axis off;
    plot(start_points(1,:), start_points(2,:), 'b.', 'MarkerSize', 20);
    plot_edges(compute_edges(faces), start_points, 'k');
    hold off;
    axis tight; axis image; axis off;
    colormap jet(256);

    Examples of triangulations. Notice how the canonical Euclidean Delaunay triangulation (left)
    differs from the geodesic one (right) when the metric is not constant.

Farthest point sampling.

  • We are now back to computations in 2D. The function farthest_point_sampling iteratively computes the distance to the already selected set of points and adds the farthest point to the list (initialized as empty). You should read the function so that you fully understand what it is doing. You can also try different speed functions to see the resulting sampling.
  • n=300;
    name = 'mountain';
    W = load_potential_map(name, n);
    nbr_landmarks = 300; % total number of sampled points (e.g. 50, 100, ..., 300)
    % perform sampling
    landmark = [];
    landmark = farthest_point_sampling( W, landmark, nbr_landmarks-size(landmark,2) );
    % display
    hold on;
    imagesc(W');
    plot(landmark(1,:), landmark(2,:), 'b.', 'MarkerSize', 20);
    hold off;
    axis tight; axis image; axis off;
    colormap gray(256);
    % try with other metrics W, like 'constant'


    Farthest point sampling with 50, 100, 150, 200, 250 and
    300 points respectively.
  • Now you can compute the corresponding triangulation using the already given code.
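    For instance, reusing the Voronoi/Delaunay code of the previous section (a hedged sketch):
    % Voronoi segmentation seeded at the sampled landmarks, then its dual triangulation
    [D,Z,Q] = perform_fast_marching_2d(W, landmark);
    faces = compute_voronoi_triangulation(Q);
    clf; hold on;
    imagesc(Q'); axis image; axis off;
    plot(landmark(1,:), landmark(2,:), 'b.', 'MarkerSize', 20);
    plot_edges(compute_edges(faces), landmark, 'k');
    hold off; colormap jet(256);
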
  • Farthest point triangulations.

Heuristically driven front propagation.

  • The function perform_fast_marching_2d can be used with a fourth argument that gives an estimate of the distance to the end point. It helps the algorithm reduce the number of explored pixels. This remaining distance H should be estimated quickly (hence the name "heuristic"). Here we propose to cheat and use directly the true remaining distance, computed with a classical propagation. You should test this code with various values of weight.
  • [start_points,end_points] = pick_start_end_point(W);
    % compute the heuristic
    [H,S] = perform_fast_marching_2d(W, end_points);
    % perform the propagation
    options.end_points = end_points;
    weight = 0.5; % should be between 0 and 1.
    [D,S] = perform_fast_marching_2d(W, start_points, options, H*weight);
    % compute the path
    p = extract_path_2d(D,end_points, options);
    % display
    clf;
    plot_fast_marching_2d(W,S,p,start_points,end_points);
    colormap jet(256);
    saveas(gcf, [rep name '-heuristic' num2str(weight) '.jpg'], 'jpg');

  • As a final question, try to devise a fast way to estimate the remaining distance function. If you have no idea, you can use the function perform_fmstar_2d, which implements two methods to achieve this.
  • Heuristic front propagation with a weight parameter of 0, 0.5 and 0.9 respectively.
    Notice how the explored area shrinks around the true path.

    Copyright © 2006 Gabriel Peyré

Lecture 3 – Geodesic Computations
on 3D Meshes

Abstract : The goal of this lecture is to manipulate the fast marching algorithm on triangulated meshes. Applications to geodesic extraction, Voronoi segmentation, remeshing and bending invariants are presented.

Setting up Matlab.

  • First download the Matlab toolboxes toolbox_fast_marching.zip and toolbox_graph.zip. Unzip them into your working directory. You should have directories toolbox_fast_marching/ and toolbox_graph/ in your path.
  • The first thing to do is to install these toolboxes in your path.
  • path(path, 'toolbox_fast_marching/');
    path(path, 'toolbox_fast_marching/toolbox/');
    path(path, 'toolbox_graph/');
    path(path, 'toolbox_graph/off/');

  • Recompile the mex files for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available in toolbox_fast_marching/ for MacOS and Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_fast_marching
    compile_mex;
    cd ..

Distance Computation on Triangulated Surface.

  • Using the fast marching on a triangulated surface, one can compute the distance from a set of input points. This function also returns the segmentation of the surface into geodesic Voronoi cells.
  • name = 'elephant'; % other choices include 'skull' or 'bunny'
    [vertex,faces] = read_mesh([name '.off']);
    nverts = max(size(vertex)); % number of vertices
    nstart = 8; % number of starting points
    start_points = floor(rand(nstart,1)*nverts)+1;
    options.end_points = []; % no end points: propagate over the whole surface
    % perform the front propagation
    [D,S,Q] = perform_fast_marching_mesh(vertex, faces, start_points, options);
    % display the distance function
    col = D; col(col==Inf) = 0; % the color of the vertices
    clf; hold on;
    options.face_vertex_color = col;
    plot_mesh(vertex, faces, options);
    h = plot3(vertex(1,start_points),vertex(2,start_points), vertex(3,start_points), 'r.');
    set(h, 'MarkerSize', 25);
    hold off;
    colormap jet(256);
    axis tight; shading interp;
    % you can now display the segmentation function Q
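
    A possible display of the segmentation (a sketch; it assumes Q contains one Voronoi cell label per vertex, as returned by perform_fast_marching_mesh):
    % color each vertex by the index of its Voronoi cell
    options.face_vertex_color = Q(:);
    clf; plot_mesh(vertex, faces, options);
    colormap jet(256); shading interp; axis tight;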


    Example of distance function from a set of random
    starting points (upper row), and the corresponding
    Voronoi segmentation (bottom row).
  • You can even stop the propagation after a fixed number of steps and see the resulting partially propagated front.
  • options.nb_iter_max = …; % try with a varying number of iterations
    [D,S,Q] = perform_fast_marching_mesh(vertex, faces, start_points, options);
    % display the distance


    Example of front propagation.

Geodesic Remeshing.

  • A regular sampling of the surface can be performed by seeding samples in a greedy, farthest-point fashion. These points can then be linked according to the adjacency of their Voronoi cells, which gives a powerful yet simple remeshing scheme.
  • nbr_landmarks = …; % number of points, e.g. 400
    W = ones(nverts,1); % the speed function, for now constant speed to perform uniform remeshing
    % perform the sampling of the surface
    landmark = farthest_point_sampling_mesh( vertex,faces, [], nbr_landmarks, options );
    % compute the associated triangulation
    [D,Z,Q] = perform_fast_marching_mesh(vertex, faces, landmark);
    [vertex_voronoi,faces_voronoi] = compute_voronoi_triangulation_mesh(Q,vertex,faces);
    % display the distance function (same as before)

    % display the remeshed triangulation (same but with vertex_voronoi and faces_voronoi)
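
    A possible display of the remeshed surface (a sketch, following the comment above):
    % display the coarse mesh obtained as the Voronoi dual
    clf; plot_mesh(vertex_voronoi, faces_voronoi);
    shading faceted; axis tight;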


    Farthest point sampling with an increasing number of points
    (upper row) and corresponding remeshing (bottom row).
  • One can use a non-constant speed function in order to obtain an adapted remeshing. The sampling will be denser in areas where the speed function is low.
  • % first kind of speed: low on the left, high on the right
    v = rescale(vertex(1,:));
    options.W = rescale(v>0.5, 1, 3); options.W = options.W(:);
    % do the remeshing

    % second kind of speed: continuously increasing
    v = rescale(-vertex(1,:),1,8);
    options.W = v(:);
    % do the remeshing






    Rows 1&2: uniform sampling and remeshing.
    Rows 3&4: adapted (split left/right) remeshing.
    Rows 5&6: adapted (continuously increasing) remeshing.

Bending Invariants of a Surface.

  • One can use the Isomap procedure in order to modify the surface and obtain a bending-invariant signature, useful for articulation-invariant recognition of 3D surfaces. One first needs to compute the pairwise geodesic distances between points on the surface. One then looks for new 3D positions of the vertices so that the new Euclidean distances closely match the geodesic distances. In order to speed up the computation, the geodesic distances are computed only on a reduced set of landmark points. The new 3D locations are then interpolated.
  • nlandmarks = 300; % number of landmarks (low to speed up)
    landmarks = floor(rand(nlandmarks,1)*nverts)+1; % samples landmarks at random
    % compute the distance between landmarks and the rest of the vertices
    Dland = zeros(nverts,nlandmarks);
    for i=1:nlandmarks
    fprintf('.');
    [d,S,Q] = perform_fast_marching_mesh(vertex, faces, landmarks(i));
    Dland(:,i) = d(:);
    end
    fprintf('\n');
    % perform isomap on the reduced set of points
    D1 = Dland(landmarks,:); % reduced pairwise distances
    D1 = (D1+D1')/2; % force symmetry
    J = eye(nlandmarks) - ones(nlandmarks)/nlandmarks; % centering matrix
    K = -1/2 * J*D1*J; % inner product matrix
    % compute the rank-3 approximation of the inner product to compute embedding
    opt.disp = 0;
    [xy, val] = eigs(K, 3, 'LR', opt);
    xy = xy .* repmat(1./sqrt(diag(val))', [nlandmarks 1]);
    % interpolation on the full set of points:
    % extend the embedding using geodesic interpolation
    vertex1 = zeros(nverts,3);
    deltan = mean(Dland,1);
    for x=1:nverts
    deltax = Dland(x,:);
    vertex1(x,:) = 1/2 * ( xy' * ( deltan-deltax )' )';
    end
    % display both the original mesh and the embedding
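
    A possible side-by-side display (a sketch; vertex1 is transposed because it was built as an nverts x 3 matrix, while plot_mesh is used elsewhere with 3 x n coordinates):
    % original mesh (left) and its bending-invariant embedding (right)
    clf;
    subplot(1,2,1); plot_mesh(vertex, faces);   shading interp; camlight;
    subplot(1,2,2); plot_mesh(vertex1', faces); shading interp; camlight;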

    Original mesh (left) and bending invariant (right).

    Copyright © 2006 Gabriel Peyré

Lecture 4 – Mesh Processing

Abstract : The goal of this lecture is to manipulate a 3D mesh. This includes the loading and display of a 3D mesh and then the processing of the mesh. This processing is based on computations involving various kinds of Laplacians. These Laplacians are extensions of the classical second order derivatives to 3D meshes. They can be used to perform heat diffusion (smoothing), compression and parameterization of the mesh.

Setting up Matlab.

  • First download the Matlab toolbox toolbox_graph.zip. Unzip it into your working directory. You should have a directory toolbox_graph/ in your path. Download also the set of additional meshes in .off format: meshes.zip and unzip this file into your directory.
  • The first thing to do is to install this toolbox in your path.
  • path(path, 'toolbox_graph/');
    path(path, 'toolbox_graph/off/');
    path(path, 'toolbox_graph/toolbox/');
    path(path, 'meshes/');

  • Recompile the mex file for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available in toolbox_graph/ for MacOS and Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_graph
    compile_mex;
    cd ..

Mesh Loading and Displaying.

  • You can load a mesh from a file and display it. Remember that a mesh is given by two matrices: vertex stores the 3D vertex positions and face stores 3-tuples giving the indices of the vertices forming each face.
  • name = 'mushroom'; % other possibilities include 'venus'
    %load from file
    [vertex,face] = read_mesh([name '.off']);
    % display the mesh
    clf;
    plot_mesh(vertex,face);
    % remove the display of the triangle
    shading interp; camlight;

    Two examples of rendered triangle meshes.
  • A mesh can also be built by starting from a coarse triangulation and then subdividing it.
  • name = 'sphere'; % another choice could be 'L1'
    j = 2; % number of subdivision levels
    [vertex,face] = gen_base_mesh(name, 0); % the initial mesh
    [vertex,face] = gen_base_mesh(name, j);
    clf;
    plot_mesh(vertex,face);


    Two examples of meshes computed by regular subdivision.

Heat Diffusion on 3D Meshes.

  • A Laplacian on a 3D mesh is an (n,n) matrix L, where n is the number of vertices, that generalizes the classical Laplacian of image processing to a 3D mesh. There are several forms of Laplacian, depending on whether the matrix is symmetric (L'=L) and normalized (1 on the diagonal):
    L0 = D - W (symmetric, non-normalized)
    L1 = D^{-1}*L0 = Id - D^{-1}*W (non-symmetric, normalized)
    L2 = D^{-1/2}*L0*D^{-1/2} = Id - D^{-1/2}*W*D^{-1/2} (symmetric, normalized)

    where W is a weight matrix with W(i,j)=0 if (i,j) is not an edge of the graph, and D is the diagonal degree matrix D(i,i)=sum_j W(i,j). There are several kinds of such weights:

    W(i,j)=1 (combinatorial)
    W(i,j)=1/|vi-vj|^2 (distance)
    W(i,j)=cot(alpha_ij)+cot(beta_ij) (conformal/harmonic)

    where the {vi}_i are the vertices of the mesh, i.e. vi=vertex(:,i), and (alpha_ij,beta_ij) are the two angles facing the edge (vi,vj) in its two adjacent triangles. A gradient matrix G associated with the Laplacian L is an (m,n) matrix, where m is the number of edges in the mesh, that satisfies L=G'*G. It can be computed as G((i,j),k)=+sqrt(W(i,j)) if k=j, G((i,j),k)=-sqrt(W(i,j)) if k=i, and G((i,j),k)=0 otherwise.
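
    As a minimal illustration of these formulas, the combinatorial Laplacian can also be assembled by hand from the adjacency matrix (a sketch; triangulation2adjacency is the helper used in the flattening section below, assumed to return a sparse 0/1 adjacency matrix):
    % combinatorial weights: W(i,j)=1 when (i,j) is an edge of the mesh
    W = triangulation2adjacency(face);
    n = size(W,1);
    d = full( sum(W,2) );                      % vertex degrees
    D = spdiags(d, 0, n, n);                   % diagonal degree matrix
    L0 = D - W;                                % symmetric, non-normalized Laplacian
    L1 = spdiags(1./d, 0, n, n) * L0;          % normalized (1 on the diagonal), non-symmetric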

  • In the following, you can compute gradients, weights and Laplacians using the compute_mesh_xxx functions.
  • % kind of laplacian, can be 'combinatorial', 'distance' or 'conformal' (slow)
    laplacian_type = …;
    % load two different kind of laplacian and check the gradient factorization
    options.symmetrize = 1;
    options.normalize = 0;
    L0 = compute_mesh_laplacian(vertex,face,laplacian_type,options);
    G0 = compute_mesh_gradient(vertex,face,laplacian_type,options);
    disp(['Error (should be 0): ' num2str(norm(L0-G0'*G0, 'fro')) '.']);
    options.normalize = 1;
    L1 = compute_mesh_laplacian(vertex,face,laplacian_type,options);
    G1 = compute_mesh_gradient(vertex,face,laplacian_type,options);
    disp(['Error (should be 0): ' num2str(norm(L1-G1'*G1, 'fro')) '.']);
    % these matrices are stored as sparse matrix
    spy(L0);

  • In order to smooth a vector f of size n (i.e. a function defined on each vertex of the mesh), one can perform a heat diffusion by solving the following PDE
    dF/dt = -L*F       with       F(.,t=0)=f
    until some stopping time t.
    When this diffusion is applied to each component of the positions of the vertices, f=vertex(i,:), this smoothes the 3D mesh. Implement this PDE using an explicit discretization in time (a possible update step is sketched after the code below).
  • % the time step should be small enough
    dt = 0.1;
    % stopping time
    Tmax = 10;
    % number of steps
    niter = round(Tmax/dt);
    % initialize the 3 vectors at time t=0
    vertex1 = vertex;
    % solve the diffusion
    for i=1:niter
    % update the position by solving the PDE
    vertex1 = vertex1 + …;
    end
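
    A possible explicit update for the hole above (a sketch; it assumes vertex1 is a 3 x n matrix, as returned by read_mesh, and uses the normalized Laplacian L1 computed previously so that dt = 0.1 remains stable):
    % one explicit Euler step of dF/dt = -L*F, applied to each coordinate
    vertex1 = vertex1 - dt * (L1*vertex1')';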

    Heat diffusion on a 3D mesh, at times t=0, t=10, t=40, t=200.
  • Another way to smooth a mesh is to perform the following quadratic regularization for each component f=vertex(i,:)
    F = argmin_g |f-g|^2 + t * |G*g|^2
    The solution of this optimization is given in closed form, using the Laplacian L=G'*G, as the solution of the following linear system:
    (Id+t*L)*F = f
    Solve this problem for various t on a 3D mesh. You can use the operator \ to solve the system. How does this method compare with the heat diffusion? (A possible sketch is given after the code below.)
  • % solve the equation
    vertex1 = …;
    % display
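
    A possible way to fill the hole above (a sketch; it uses the non-normalized Laplacian L0 = G0'*G0 computed previously and solves one sparse linear system per coordinate):
    % solve (Id + t*L)*F = f for each of the 3 coordinates
    t = 10;                                   % regularization strength (try several values)
    n = size(vertex,2);
    vertex1 = ( (speye(n) + t*L0) \ vertex' )';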

Combinatorial Laplacian: Spectral Decomposition and Compression.

  • The combinatorial Laplacian is a linear operator (thus an NxN matrix, where N is the number of vertices). It depends only on the connectivity of the mesh, thus on face only. The eigenvectors of this matrix (which is symmetric, and thus can be decomposed by SVD) form an orthogonal basis of the vector space of signals of N values (one real value per vertex). Those functions are the extension of the oscillating Fourier functions to surfaces.
  • % combinatorial laplacian computation
    options.symmetrize = 1;
    options.normalize = 0;
    L0 = compute_mesh_laplacian(vertex,face,'combinatorial',options);
    %% Performing eigendecomposition
    [U,S,V] = svd(L0);
    % extract one of the eigenvectors
    c = U(:,end-10); % you can try other eigenvectors with higher frequencies
    % assign a color to each vertex
    options.face_vertex_color = rescale(c, 0,255);
    % display
    clf;
    plot_mesh(vertex,face, options);
    shading interp; lighting none;



    Eigenvectors of the combinatorial laplacian with
    increasing frequencies from left to right.
  • Like the Fourier basis, the laplacian eigenvector basis can be used to perform an orthogonal decomposition of a function. In order to perform mesh compression, we decompose each coordinate X/Y/Z of the mesh into this basis. Once this decomposition has been performed, a compression is achieved by keeping only the biggest coefficients (in magnitude).
  • % Warning : vertex should be of size 3 x nvert
    keep = 5; % percentage of coefficients kept
    vertex2 = (U'*vertex')'; % projection of the vertices in the Laplacian basis
    % set threshold to remove coefficients
    vnorm = sum(vertex2.^2, 1);
    vnorms = sort(vnorm); vnorms = vnorms(end:-1:1);
    nvert = size(vertex,2);
    thresh = vnorms( round(keep/100*nvert) );
    % remove small coefs by thresholding
    vertex2 = vertex2 .* repmat( vnorm>=thresh, [3 1] );
    % reconstruction
    vertex2 = (U*vertex2')';
    % display
    clf;
    plot_mesh(vertex2,face);
    shading interp; camlight;
    axis tight;


    Spectral mesh compression performed
    by decomposition on the eigenvectors of the laplacian.

Mesh Flattening and Parameterization.

  • The eigenvectors of the combinatorial Laplacian can also be used to perform mesh flattening. Flattening means finding a 2D location for each vertex of the mesh. These two coordinates are given by the eigenvectors n°2 and n°3 of the Laplacian.
  • % load a mesh
    name = 'nefertiti';
    [vertex,face] = read_mesh([name '.off']);
    A = triangulation2adjacency(face);
    % perform the embedding using the combinatorial eigendecomposition
    xy = perform_spectral_embedding(2,A);
    % display the flattened mesh
    gplot(A,xy,'k.-');
    axis tight; axis square; axis off;

  • This combinatorial flattening does not use geometric information, since it only uses the connectivity of the mesh; any mesh with the same connectivity will thus have the same 2D embedding. In order to improve the quality of the embedding, one can use a conformal Laplacian, which approximates the Laplace-Beltrami operator of the continuous underlying surface.
  • % this time, we use the information from vertex to compute flattening
    xy = perform_spectral_embedding(2,A,vertex,face);
    % display
    gplot(A,xy,'k.-');
    axis tight; axis square; axis off;

  • Another way to compute the flattening is to use the Isomap algorithm. This algorithm is not based on a local differential operator such as the Laplacian. Instead, the geodesic distances between points on the mesh graph are first computed (see the lecture on graph-based data processing for examples of geodesic computations on graphs). Then the 2D layout of the points is computed so as to match the geodesic distances with the distances in the plane.
  • % the embedding is now computed with isomap
    xy = perform_spectral_embedding(2,A,vertex);
    % display
    gplot(A,xy,'k.-');
    axis tight; axis square; axis off;

    Comparison of the flattening obtained with the combinatorial laplacian,
    the conformal laplacian and isomap.
  • Mesh parameterization is similar to flattening, except that we fix the boundary of the mesh to be flattened onto some given convex 2D curve. The flattening can then be proved to be 1:1 (no triangle flips) as long as the curve is convex and Id+Laplacian is positive. While flattening involves spectral computations (eigenvector extraction), which are very slow, mesh parameterization involves the solution of a sparse linear system, which is quite fast (even if the Laplacian matrix is ill-conditioned).
  • % pre-compute 1-ring, i.e. the neighbors of each triangle.
    ring = compute_1_ring(face);
    lap_type = 'combinatorial'; % the other choice is 'conformal'
    boundary_type = 'circle'; % the other choices are 'square' and 'triangle'
    % compute the parameterization by solving a linear system
    xy = compute_parametrization(vertex,face,lap_type,boundary_type,ring);
    % display
    clf;
    gplot(A,xy,'k.-');
    axis tight; axis square; axis off;


    The first row shows parameterization using the combinatorial laplacian
    (with various boundary conditions). This assumes that all edge lengths are 1.
    To take into account the geometry of the mesh, the second
    row uses the conformal laplacian.

    Copyright © 2006 Gabriel Peyré

Lecture 5 – Graph-based
data processing

Abstract : The goal of this lecture is to manipulate data in arbitrary dimensions using graph-based methods. The points in the data set are linked together in a graph structure. Geodesic computations can then be performed to compute distances on the dataset.

Setting up Matlab.

  • First download the Matlab toolboxes toolbox_dimreduc.zip and toolbox_graph.zip. Unzip them into your working directory. You should have directories toolbox_dimreduc/ and toolbox_graph/ in your path.
  • The first thing to do is to install these toolboxes in your path.
  • path(path, 'toolbox_dimreduc');
    path(path, 'toolbox_dimreduc/toolbox/');
    path(path, 'toolbox_dimreduc/data/');
    path(path, 'toolbox_graph');

  • Recompile the mex file for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available for MacOS and/or Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_graph
    compile_mex;
    cd ..

Distance Computation on Graphs.

  • You can load synthetic and real datasets (however, only frey_rawface is included in the distribution).
  • name = 'swissroll'; % you can also try with 'scurve'
    n = 800; % number of points
    [X,col] = load_points_set( name, n );
    clf; plot_scattered(X,col);
    axis equal; axis off;

  • From points in arbitrary dimension, you can create a graph data structure using the nearest-neighbor rule (you can also try the alternative epsilon rule).
  • options.use_nntools = 0; % set to 1 if you have compiled TStool for your machine; this will speed up computations
    options.nn_nbr = 7; % number of nearest neighbors
    % compute NN graph and the distance between nodes
    [D,A] = compute_nn_graph(X,options);
    % display the graph
    clf; plot_graph(A,X, col);
    axis tight; axis off;

  • You can now compute the geodesic distance on this graph using the Dijkstra algorithm.
  • % set up the length of the edges (0 for no edges)
    D(D==Inf) = 0; % weight on the graph
    % find some cool location for starting point
    [tmp,start_point] = min( abs(col(:)-mean(col(:)))); % starting point
    % compute the geodesic distance
    [d,S] = perform_dijkstra(D, start_point);
    % display the graph with distance colors
    clf; hold on;
    plot_scattered(X, d);
    h = plot3(X(1,start_point), X(2,start_point), X(3,start_point), 'k.');
    set(h, 'MarkerSize', 25);
    axis tight; axis off; hold off;


    Two examples of 3D point clouds (left) ; corresponding NN-graph (center) ;
    geodesic distance to the point in black (right), blue means close.

PCA and Isomap Flattening.

  • Principal component analysis (PCA) realizes a linear dimensionality reduction by projecting the data set on the axes of largest variance.
  • % dimension reduction using PCA
    [Y,xy] = pca(X,2);
    % display
    clf; plot_scattered(xy,col);
    axis equal; axis off;

  • In order to better capture the manifold geometry of the data, Isomap computes the geodesic distances between pairs of points and finds a low-dimensional layout that best respects these geodesic distances.
  • % dimension reduction using Isomap
    options.nn_nbr = 7; % number of NN for the graph
    xy = isomap(X,2, options);
    % display
    clf; plot_scattered(xy,col);
    axis equal; axis off;


    Original 3D data set (left) ; 2D flattening using PCA (center) and using Isomap (right).
    Note how the PCA does not recover the simple 2D parameterization of the
    manifold since it is a linear process. In contrast, Isomap is able
    to "unfold" the curvy surface.

Dimension Reduction for Image Libraries.

  • We can use the same process of flattening for data of arbitrary dimension. We first use a simple library of translating disks.
  • name = 'disks';
    options.nbr = 1000;
    % Read database
    M = load_images_dataset(name, options);
    % turn it into a set of points
    a = size(M,1);b = size(M,2);n = size(M,3);
    X = reshape(M, a*b, n);
    % perform isomap
    options.nn_nbr = 7;
    options.use_nntools = 0;
    xy = isomap(X,2, options);
    k = 30;
    clf; plot_flattened_dataset(xy,M,k);
    % perform pca
    [tmp,xy] = pca(X,2);
    clf; plot_flattened_dataset(xy,M,k);

  • We can do the same computation on a more complex library of faces.
  • name = 'frey_rawface';
    options.nbr = 2000;
    % Read database
    M = load_images_dataset(name, options);
    % subsample at random
    n = 1000;
    sel = randperm(size(M,3));
    M = M(:,:,sel(1:n));
    M = permute(M,[2 1 3]); % fix x/y orientation of the faces
    % turn it into a set of points
    a = size(M,1);b = size(M,2);n = size(M,3);
    X = reshape(M, a*b, n);
    % perform isomap
    options.nn_nbr = 7;
    options.use_nntools = 0;
    xy = isomap(X,2, options);
    k = 30;
    clf; plot_flattened_dataset(xy,M,k);
    % perform pca
    [tmp,xy] = pca(X,2);
    clf; plot_flattened_dataset(xy,M,k);



    Dimension reduction for two different libraries of images.
    Left: translating disks, right: face images.
    Although the disks data set depends on 2D translation, this is not
    a flat (Euclidean) manifold (it is a bit curvy due to the disk shape).
    This is why the PCA mapping does not recover exactly a 2D square.
    The face database exhibits a more complex embedding.

    Copyright © 2006 Gabriel Peyré

Conclusion

0 – Basic Matlab instructions.
1 – Active contour and level sets.
2 – Front propagation in 2D and 3D.
3 – Geodesic computation on 3D meshes.
4 – Differential Calculus on 3D meshes.
5 – High Dimensional Data Processing.

Data Processing Using
Manifold Methods

This course is an introduction to the computational theory of manifolds. Manifold models arise in various areas of mathematics, image processing, data mining and computer science. Surfaces of arbitrary dimension can be used to model the non-linear datasets that one encounters in modern data processing. Numerical methods make it possible to exploit this geometric, non-linear prior in order to extract relevant information from the data. These methods include in particular local differential computations (related to the Laplacian operator and its variants) and global distance methods (related to geodesic computations). In this course, you will learn how to perform differential and geodesic computations on images, volumes, surfaces and high-dimensional graphs.

The course includes a set of Matlab experiments. These experiments give an overview of various tasks in computer vision, image processing, learning theory and mesh processing. This includes computation of shortest paths, Voronoi segmentations, geodesic Delaunay triangulations, surface flattening, dimensionality reduction and mesh processing.

One should copy/paste the provided code into a file named e.g. tp.m, and launch the script directly from the Matlab command line: > tp;. Some of the scripts contain "holes" (marked with …) that you should try to fill in on your own.


I MOVED THIS BLOG FROM WORDPRESS TO BLOGGER. This blog is at ex-ample.blogspot.com
