Archive for the 'traitement d’image' Category



NA-MIC: the National Alliance for Medical Image Computing Kit = free open source software platform

Overview

The NA-MIC Kit is a free open source software platform. It is distributed under a BSD-style license without restrictions or "give-back" requirements; it is intended for research, but there are no restrictions on other uses. It consists of the 3D Slicer application software, a number of tools and toolkits such as VTK and ITK, and a software engineering methodology that enables multi-platform implementations. It also draws on other "best practices" from the community to support automatic testing for quality assurance. The NA-MIC Kit uses a modular approach, where the individual components can be used by themselves or together. The NA-MIC Kit is fully compatible with local installation (behind institutional firewalls) and with installation as an internet service. Significant effort has been invested to ensure compatibility with standard file formats and interoperability with a large number of external applications.

See this presentation on the NA-MIC Kit for more information.

Download Central

Please go here to download Slicer software, documentation and data.

Software Packages

3D Slicer

3D Slicer is a software package for visualization and medical image computing. A tutorial for prospective users can be found on the web: see our tutorials page for an introduction to the use of 3D Slicer. More…


The Visualization Toolkit VTK

The Visualization Toolkit is an object-oriented toolkit for processing, viewing and interacting with a variety of data forms including images, volumes, polygonal data, and simulation datasets such as meshes, structured grids, and hierarchical multi-resolution forms. It also supports large-scale data processing and rendering. More…


The Insight Toolkit ITK

The Insight Segmentation and Registration Toolkit (ITK) is an open-source software toolkit for performing registration and segmentation. Segmentation is the process of identifying and classifying data found in digitally sampled representations. Typically the sampled representation is an image acquired from such medical instrumentation as CT or MRI scanners. Registration is the task of aligning or developing correspondences between data. For example, in the medical environment, a CT scan may be registered with a MRI scan in order to combine the information contained in both. More…


KWWidgets GUI Toolkit

KWWidgets is an Open Source library of GUI classes based on Tcl/Tk with a C++ API. This library was originally developed by Kitware for ParaView, and now has been extended in functionality and architecture thanks to NAMIC support. More…


Teem Libraries and Command Line Tools

Teem is a coordinated group of libraries for representing, processing, and visualizing scientific raster data. Teem includes command-line tools that permit the library functions to be quickly applied to files and streams, without having to write any code. More…


XNAT Web-based Image Informatics Server

The Extensible Neuroimaging Archive Toolkit (XNAT) is an open source software platform designed to facilitate management and exploration of neuroimaging and related data. XNAT includes a secure database backend and a rich web-based user interface.

NA-MIC is working to provide a portable, easy-to-install and easy-to-administer version of XNAT that can be deployed as part the Kit. These efforts will build on ongoing work in the BIRN community to integrate Slicer with XNAT.


Batchmake

BatchMake is a cross-platform tool for batch processing of large amounts of data. BatchMake can process datasets locally or on distributed systems using Condor (a grid computing tool that enables distributed computing across the network). Key features of BatchMake include: 1) a BSD license, 2) a CMake-like scripting language, 3) distributed scripting via Condor, 4) a centralized remote website for online statistical analysis, 5) a user interface using FLTK, and 6) cross-platform support. More…


CMake The Cross-platform Make Tool

CMake is used to control the software build process using simple platform, compiler and operating system independent configuration files. CMake generates native makefiles and workspaces that can be used in the development environment of your choice. That is, CMake does not attempt to replace standard development tools such as compilers and debuggers, rather it produces build files and other development resources that can benefit from automated generation. Further, once CMake configuration files are created, they can be used to produce developer resources across the many platforms that CMake supports. CMake is quite sophisticated: it is possible to support complex environments requiring system configuration, pre-processor generation, code generation, and template instantiation. More…

New: CMake has been adopted by KDE, one of the world’s largest open source software systems.


CDash, CTest, CPack Software Process Tools

As an adjunct to CMake, the tools CDash, CTest and CPack are used to test and package all components of the NAMIC Kit. CTest is a testing client that performs testing locally on a software repository, and then communicates the results to CDash (and to other testing dashboard servers such as DART2). CPack is a cross-platform tool for packaging, distributing and installing the NAMIC Kit on various systems including Linux, Windows, and Mac OS X. More…


DART Testing Server

DART is a testing server, meaning that it gathers the results of testing from clients (such as CTest) and aggregates them on the testing "dashboard". This dashboard is central to the NAMIC software process; it provides a centralized web site where NAMIC Kit developers and users can ascertain the day-to-day health of the software, and repair the software immediately if faults are discovered. It facilitates distributed development, and provides the stability that complex software such as the NAMIC Kit requires to support a large community of users. More…

—————

Ref.

http://www.na-mic.org/Wiki/index.php/NA-MIC-Kit

—————

NA-MIC Kit in Numbers

The numbers in this table are statistics characterizing the NA-MIC Kit. They provide an estimate of the scale of the Kit, including approximate creation costs and total effort expended. Estimates such as these are needed because the cost of large open-source software systems cannot be tracked via direct investment: much of the effort is voluntary and is distributed across the world through a variety of organizations.
Source: http://www.ohloh.org. Captured on May 30 2008. See the Ohloh website for an explanation of how the numbers were computed.

Package   Lines of code   Person-years   Price tag at $100k per person-year
Slicer    587,919         161            $16,068,440
KWW       189,627         49             $4,925,590
VTK       1,344,989       385            $38,521,873
ITK       711,474         197            $19,712,495
CMake     213,671         56             $5,586,895
Total     3,047,680       848            $84,815,293

matlab .m .ppt course : introduction to the theory of manifolds; mesh generation ; mesh processing

take care : too many bugs…

This course is an introduction to the theory of manifolds. Manifold models arise in various areas of mathematics, image processing, data mining and computer science. Surfaces of arbitrary dimension can be used to model the non-linear datasets that one encounters in modern data processing. Numerical methods make it possible to exploit this geometric, non-linear prior in order to extract relevant information from the data. These methods include in particular local differential computations (related to the Laplacian operator and its variants) and global distance methods (related to geodesic computations). In this course, you will learn how to perform differential and geodesic computations on images, volumes, surfaces and high-dimensional graphs.

The course includes a set of Matlab experiments. These experiments give an overview of various tasks in computer vision, image processing, learning theory and mesh processing. This includes the computation of shortest paths, Voronoi segmentations, geodesic Delaunay triangulations, surface flattening, dimensionality reduction and mesh processing.

Ref.

http://www.mathworks.de/matlabcentral/fileexchange/loadFile.do?objectId=13464&objectType=file

Lecture 0 – Basic Matlab Instructions

Abstract : Learn the basic features of Matlab, and learn how to load and visualize signals and images.

Basic Matlab commands.

  • Create a variable and an array
  • % this is a comment
    a = 1; a = 2+1i; % real and complex numbers
    b = [1 2 3 4]; % row vector
    c = [1; 2; 3; 4]; % column vector
    d = 1:2:7; % here one has d=[1 3 5 7]
    A = eye(4); B = ones(4); C = rand(4); % identity, 1 and random matrices
    c = b’; % transpose

  • Modification of vectors and matrices
  • A(2,2) = B(1,1) + b(1); % to access an entry in a vector or matrix
    b(1:3) = 0; % to access a set of indices in a vector
    b(end-2:end) = 1; % to access the last entries
    b = b(end:-1:1); % reverse a vector
    b = sort(b); % sort values
    b = b .* (b>2); % set to zeros (threshold) the values below 2
    b(3) = []; % suppress the 3rd entry of a vector
    B = [b; b]; % create a matrix of size 2×4
    c = B(:,2); % to access 2nd column

  • Advanced instructions
  • a = cos(b); a = sqrt(b); % usual functions
    help perform_wavelet_transform; % print the help
    a = abs(b); a = real(b); a = imag(b); a = angle(b); % modulus, real part, imaginary part and argument of a complex number
    disp('Hello'); % display a text
    disp( sprintf('Value of x=%.2f', x) ); % print a value with 2 digits
    A(A==Inf) = 3; % replace Inf values by 3
    A(:); % flatten a matrix into a column vector
    max(A(:)); % max of a matrix
    M = M .* (abs(M)>T); % threshold to 0 the values below T

  • Display
  • plot( 1:10, (1:10).^2 ); % display a 1D function
    title('My title'); % add a title
    xlabel('variable x'); ylabel('variable y'); % axis labels
    subplot(2, 2, 1); % divide the screen in 2x2 and select the 1st quadrant

  • Programming
  • for i=1:4 % repeat the loop for i=1, i=2, i=3 and i=4
    disp(i); % do something here
    end
    i = 4;
    while i>0 % while syntax
    disp(i); % do smth
    i = i-1;
    end

Load and visualize signals and images

  • Load and plot a signal (the function load_signal.m should be in the toolbox of each course):
  • f = load_signal('Piece-Regular', n); % signal of size n
    plot(f);

  • Load and display an image (the function load_image.m should be in the toolboxes):
  • I = load_image('lena');
    imagesc(I); axis image; colormap gray(256);

Copyright © 2006 Gabriel Peyré

Lecture 1 – Active Contours and Level Sets

Abstract: The goal of this lecture is to use the level set framework to perform curve evolution. The mean curvature motion is the basic tool; it can be extended into edge-based (geodesic active contours) and region-based (Chan-Vese) snakes.

Setting up Matlab.

  • First download the Matlab toolbox toolbox_fast_marching.zip. Unzip it into your working directory. You should have a directory toolbox_fast_marching/ in your path.
  • The first thing to do is to install this toolbox in your path.
  • path(path, 'toolbox_fast_marching/');
    path(path, 'toolbox_fast_marching/data/');
    path(path, 'toolbox_fast_marching/toolbox/');

  • Recompile the mex files for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available in toolbox_fast_marching/ for MacOS and Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_fast_marching
    compile_mex;
    cd ..

Managing level set function.

  • In order to perform curve evolution, we will deal with a distance function stored in a 2D image D. The curve is embedded as the level set D=0, and it is evolved by modifying the image D. A curve evolution ODE can thus be replaced by a PDE on D. This makes it possible to handle topological changes, when a curve splits or two curves merge.
  • n = 200; % size of the image
    % load a distance function
    D0 = compute_levelset_shape('circlerect2', n);
    % type 'help compute_levelset_shape' to see other
    % basic curves you can load.

    % display the curve
    clf; hold on;
    imagesc(D0); axis image; axis off; axis([1 n 1 n]);
    [c,h] = contour(D0,[0 0], 'r');
    set(h, 'LineWidth', 2);
    hold off;
    colormap gray(256);

    % do the union of two curves
    options.center = [0.15 0.15]*n;
    options.radius = 0.1*n;
    D1 = compute_levelset_shape('circle', n,options);
    imagesc(min(D0,D1)<0);

  • During the curve evolution, the image D might drift far from being a distance function. In order to stabilize the algorithm, one needs to re-compute this distance function from time to time.
  • % here we simulate a modification of the distance function
    [Y,X] = meshgrid(1:n,1:n);
    D = (D0.^3) .* (X+n/3);
    D1 = perform_redistancing(D);
    % display both the original and the new,
    % redistanced, curve (should be very close)

Mean Curvature Motion.

  • In order to compute differential quantities (tangent, normal, curvature, etc) on the curve, you can compute derivatives of the image D.
  • % the gradient
    g0 = divgrad(D);
    % display the gradient (as arrow field with ‘quiver’, …)

    % the normalized gradient
    d = max(eps, sqrt(sum(g0.^2,3)) );
    g = g0 ./ repmat( d, [1 1 2] );
    % display

    % the curvature
    K = d .* divgrad( g );
    % display

  • The mean curvature motion of the level sets of the image D is driven by the following equation:

    dD/dt = |grad D| * div( grad D / |grad D| )

    Implement this evolution explicitly in time using finite differences (a possible sketch is given after the figure caption below).

    Tmax = 1000; % maximum time of evolution
    dt = 0.4; % time step (should be small)
    niter = round(Tmax/dt); % number of iterations
    D = D0; % initialization

    for i=1:niter
    % compute the right-hand side of the PDE

    % update the distance field
    D = …;
    % redistance the function from time to time
    if mod(i,30)==0
    D = perform_redistancing(D);
    end
    % display from time to time
    if mod(i,30)==1
    % display here

    end
    end

    Curve evolution under the mean curvature motion (the background is the distance function D).
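  • A possible sketch of the update loop, reusing the divgrad-based curvature computation from above (this is only one of many ways to fill the holes):
  • D = D0; % initialization
    for i=1:niter
    g0 = divgrad(D); % gradient of D
    d = max(eps, sqrt(sum(g0.^2,3))); % norm of the gradient
    K = d .* divgrad( g0 ./ repmat(d,[1 1 2]) ); % |grad D| * curv(D)
    D = D + dt * K; % explicit update
    if mod(i,30)==0
    D = perform_redistancing(D);
    end
    end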

Edge-based Segmentation with Geodesic Active Contour (snakes + level set).

  • Given a background image M to segment, one needs to compute an edge-stopping function E. It should be small in areas of high gradient (near edges) and large in areas of low gradient.
  • % load an image
    name = 'brain';
    M = rescale( sum( load_image(name, n), 3) );
    % display it

    % compute a smoothed gradient
    sigma = 4; % blurring size
    G = divgrad( perform_blurring(M,sigma) );
    % compute the norm of the gradient
    d = …
    % compute the edge-stopping function
    E = …
    % rescale it so that it is in realistic ranges
    E = rescale(E,0.3,1);

  • The geodesic active contour evolution is a mean curvature motion modulated by the edge-stopping function:

    dD/dt = E * |grad D| * div( grad D / |grad D| ) + <grad E, grad D>

    Implement this evolution explicitly in time using finite differences (a possible sketch is given after the figure caption below).
  • Segmentation with geodesic active contours.
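  • A possible explicit implementation (a sketch only, under the same assumptions as the mean curvature sketch; E is the edge-stopping map computed above and gE its gradient):
  • gE = divgrad(E); % gradient of the edge-stopping function
    dt = 0.4; Tmax = 1000; niter = round(Tmax/dt); % same example values as above
    D = D0; % initialization
    for i=1:niter
    g0 = divgrad(D);
    d = max(eps, sqrt(sum(g0.^2,3)));
    K = d .* divgrad( g0 ./ repmat(d,[1 1 2]) ); % |grad D| * curv(D)
    adv = sum(gE .* g0, 3); % advection term <grad E, grad D>
    D = D + dt * ( E .* K + adv );
    if mod(i,30)==0
    D = perform_redistancing(D);
    end
    end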

Region-based Segmentation with Chan-Vese (Mumford-Shah + level sets).

  • The geodesic active contour uses an edge-based energy. It has many local minima and is very sensitive to initialization. In order to circumvent these drawbacks, one can use a region-based energy such as the Mumford-Shah functional. Recast into the level set framework, it reads (H denotes the Heaviside function):

    E(D,c1,c2) = lambda * int |grad H(D)| + int_{D>0} (M-c1)^2 + int_{D<0} (M-c2)^2

    The corresponding gradient descent is the Chan-Vese active contour method:

    dD/dt = delta(D) * ( lambda * div( grad D / |grad D| ) - (M-c1)^2 + (M-c2)^2 )

    Implement this evolution explicitly in time using finite differences, when c1 and c2 are known in advance.
  • % initialize with a complex distance function
    D0 = compute_levelset_shape('small-disks', n);
    % set parameters
    lambda = 0.8;
    c1 = …; % black
    c2 = …; % gray

    Segmentation with Chan-Vese active contour without edges.
  • In the case where one does not know the constants c1 and c2 in advance, how can they be updated automatically during the evolution? Implement this method (a possible sketch follows).
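  • A possible sketch of the explicit Chan-Vese update (one common form, using a smoothed Dirac delta; the sign convention assumes c1 corresponds to the region D>0). When c1 and c2 are unknown, one natural choice is to re-estimate them at each iteration as the mean of M on each side of the zero level set:
  • epsi = 1; % width of the smoothed Dirac delta
    dt = 0.4; Tmax = 200; niter = round(Tmax/dt); % example values
    D = D0; % initialization
    for i=1:niter
    g0 = divgrad(D);
    d = max(eps, sqrt(sum(g0.^2,3)));
    curv = divgrad( g0 ./ repmat(d,[1 1 2]) ); % div( grad D / |grad D| )
    delta = epsi ./ (pi*(epsi^2 + D.^2)); % smoothed Dirac delta
    c1 = mean(M(D>0)); c2 = mean(M(D<0)); % re-estimate the region constants (skip if known)
    D = D + dt * delta .* ( lambda*curv - (M-c1).^2 + (M-c2).^2 );
    if mod(i,30)==0
    D = perform_redistancing(D);
    end
    end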
  • Copyright © 2006 Gabriel Peyré

Lecture 2 – Front propagation
in 2D and 3D

Abstract: The goal of this lecture is to manipulate the fast marching algorithm in 2D and 3D. Applications to shortest path extraction (e.g. road tracking and tubular structure extraction in medical images) and to Voronoi cell segmentation are presented.

Setting up Matlab.

  • First download the Matlab toolbox toolbox_fast_marching.zip. Unzip it into your working directory. You should have a directory toolbox_fast_marching/ in your path.
  • The first thing to do is to install this toolbox in your path.
  • path(path, 'toolbox_fast_marching/');
    path(path, 'toolbox_fast_marching/data/');
    path(path, 'toolbox_fast_marching/toolbox/');

  • Recompile the mex files for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available in toolbox_fast_marching/ for MacOS and Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_fast_marching
    compile_mex;
    cd ..

Front propagation in 2D.

  • We can now load an image and build a speed function W for the propagation. Note that areas where W is close to 0 (black pixels) are those in which the front propagates slowly, so W is the inverse of the potential that weights the metric.
  • n = 128;
    name = 'cavern'; % other possibilities are 'mountain' and 'road2'
    W = load_image(name, n);
    W = rescale(W, 0.01, 1); % set up a reasonable range for the potential
    % display the weighting function

    clf; imagesc(W); colormap gray(256);

  • After asking the user for the starting and ending points, we proceed with the front propagation. Note that the propagation terminates when the front reaches the ending point.
  • [start_points,end_points] = pick_start_end_point(W);
    options.end_points = end_points;
    [D,S] = perform_fast_marching_2d(W, start_points, options);
    % display the distance function
    clf; imagesc(D);
    D(D==Inf) = 0; % remove Inf values that make contour crash
    figure; contour(D,50);

  • The shortest path is extracted by performing a gradient descent of D, starting from the ending point.
  • p = extract_path_2d(D,end_points, options);
    % display the path
    clf; plot_fast_marching_2d(W,S,p,start_points,end_points);

  • Now that you know the basic commands, you can try other speed functions loaded from other files, and you can try more complicated potentials than just the value of the image, for instance a potential based on the gradient of the image computed using grad(W) (a possible sketch is given after the figure captions below).


    Left : distance function for 3 different W maps.
    Center : the corresponding level set maps.
    Right : shortest path together with the explored area (in red).
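  • For instance, a possible gradient-based potential (a sketch; grad is assumed to return the two components of the image gradient, as mentioned above, and the start/end points picked earlier are reused):
  • G = grad(W); % gradient of the image
    d = sqrt( sum(G.^2, 3) ); % gradient magnitude
    W1 = rescale(d, 0.01, 1); % the front now moves fast in regions of high gradient
    [D,S] = perform_fast_marching_2d(W1, start_points, options);
    p = extract_path_2d(D, end_points, options);
    clf; plot_fast_marching_2d(W1, S, p, start_points, end_points);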

3D Volumetric Shortest Paths.

  • First you need to load a 3D array of data M. Such a volumetric dataset is more difficult to visualize than a standard 2D image. You can render slices like M(:,:,i), or use a volumetric renderer such as vol3d (see the next item). You can download the 3D data set here.
  • % load the whole volume
    load brain1-crop-256.mat
    % crop to retain only the central part
    n = 100;
    M = rescale( crop(M,n) );
    % display some horizontal slices
    imageplot(M(:,:,50));
    % same thing in the other directions


    Cross sections of the data-set.
  • Another, more efficient, way to render volumetric data is to display a semi-transparent volumetric image. This can be achieved using the function vol3d, which renders semi-transparent slices of the data orthogonal to the viewing direction. In order to do so, you need to set up a correct alpha mapping to make parts of the volume transparent. If you rotate the data, you then need to re-render the volume using vol3d(h).
  • clf;
    h = vol3d('cdata',M,'texture','2D');
    view(3); axis off;
    % set up a colormap
    colormap bone(256);
    % set up an alpha map
    options.center = …; % here a value in [0,1]
    options.sigma = .08; % control the width of the non-transparent region
    a = compute_alpha_map('gaussian', options); % you can plot(a) to see the alpha map
    colormap bone(256);
    % refresh the rendering
    vol3d(h);
    % try with other alphamapping and colormapping


    Volumetric rendering with different alpha-mapping. Each time the options.center value is increased.
  • Geodesic distances can be computed on a 3D volume using the Fast Marching algorithm. The important point here is to define a correct potential field W: it should be large in the regions where you want the front to move fast, which means that geodesics will follow these regions. To do so, we ask the user to click on a starting point in a given horizontal slice M(:,:,delta). The potential is then computed so as to be large for values of M that are close to the value of M at this starting point (a possible sketch is given after the code below).
  • % ask the user for an input point
    delta = 5;
    clf; imageplot(M(:,:,delta));
    title(‘Pick starting point’);
    start_point = round( ginput(1) );
    start_point = [start_point(2); start_point(1); delta];
    % compute a potential that is high only very close
    % to the value of M at the selected point
    W = …;
    W = rescale(W,.001,1);
    % perform the front propagation

    options.nb_iter_max = Inf;
    [D,S] = perform_fast_marching_3d(W, start_point, options);
    % display the results using vol3d

    Exploration of the distance function using vol3d.
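  • One possible choice for this potential (a sketch only; sigma is a bandwidth that you can tune):
  • v0 = M(start_point(1), start_point(2), start_point(3)); % intensity at the selected point
    sigma = 0.05; % bandwidth of the similarity
    W = exp( -(M-v0).^2 / (2*sigma^2) ); % large where M is close to v0
    W = rescale(W, .001, 1);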
  • In order to extract a geodesic, we need to select an ending point and perform a descent of the distance function D from this ending point. The selection is done by choosing a point of low distance value in the slice D(:,:,end-delta).
  • % extract a slice
    d = D(:,:,n-delta);
    % select the point (x,y) of minimum value of d
    % hint: use functions ‘min’ and ‘ind2sub’

    [x,y] = …;
    end_point = [x;y;n-delta];


    % extract the geodesic by discrete descent
    path = compute_discrete_geodesic(D,end_point);
    % draw the path
    Dend = D(end_point(1),end_point(2),end_point(3));
    D1 = double( D<=Dend );
    clf;
    plot_fast_marching_3d(M,D1,path,start_point,end_point);

    Exploration of the distance function using vol3d.
    The red surface indicates the region of the volume that has been explored
    before hitting the ending point.

Voronoi segmentation and geodesic Delaunay triangulation in 2D.

  • With the same code as above, one can use multiple starting points. The function perform_fast_marching_2d returns a segmentation map Q that contains the Voronoi segmentation.
  • n=300;
    name = 'constant'; % another possibility is 'mountain'
    W = load_potential_map(name, n);
    m = 20; % number of center points
    % compute the starting points at random
    start_points = floor( rand(2,m)*(n-1) ) +1;
    [D,Z,Q] = perform_fast_marching_2d(W, start_points);
    % display the sampling with the distance function
    clf; hold on;
    imagesc(D’); axis image; axis off;
    plot(start_points(1,:), start_points(2,:), ‘.’);
    hold off;
    colormap gray(256);
    % display the segmentation
    figure; clf; hold on;
    imagesc(Q’); axis image; axis off;
    plot(start_points(1,:), start_points(2,:), ‘.’);
    hold off;
    colormap gray(256);

    Example of Voronoi cells (distance functions) obtained with
    a constant speed W (left) and the mountain map (right).
    Note how the cells on the left have polygonal boundaries whereas the cells on the right
    have curvy boundaries.
  • A geodesic Delaunay triangulation is obtained by linking starting points whose Voronoi cells touch. This is the dual of the original Voronoi segmentation.
  • faces = compute_voronoi_triangulation(Q);
    hold on;
    imagesc(Q’); axis image; axis off;
    plot(start_points(1,:), start_points(2,:), ‘b.’, ‘MarkerSize’, 20);
    plot_edges(compute_edges(faces), start_points, ‘k’);
    hold off;
    axis tight; axis image; axis off;
    colormap jet(256);

    Examples of triangulations. Notice how the canonical euclidean Delaunay triangulation (left)
    differs from the geodesic one (right) when the metric is not constant.

Farthest point sampling.

  • We are now back to computations in 2D. The function farthest_point_sampling iteratively computes the distance to the already selected set of points, and adds to the list (initialized as empty) the farthest point. You should read the function so that you fully understand what it does. You can also try different speed functions to see the resulting sampling.
  • n=300;
    name = 'mountain';
    W = load_potential_map(name, n);
    nbr_landmarks = 300; % total number of samples
    % perform sampling
    landmark = [];
    landmark = farthest_point_sampling( W, landmark, nbr_landmarks-size(landmark,2) );
    % display
    hold on;
    imagesc(W');
    plot(landmark(1,:), landmark(2,:), 'b.', 'MarkerSize', 20);
    hold off;
    axis tight; axis image; axis off;
    colormap gray(256);
    % try with another metric W, like 'constant'


    Farthest point sampling with 50, 100, 150, 200, 250 and
    300 points respectively.
  • Now you can compute the corresponding triangulation using the already given code.
  • Farthest point triangulations.

Heuristically driven front propagation.

  • The function perform_fast_marching_2d can be used with a fourth argument that gives an estimate of the remaining distance to the end point. It helps the algorithm reduce the number of explored pixels. This remaining distance H should be estimated quickly (hence the name "heuristic"). Here we cheat and directly use the true remaining distance, computed by a classical propagation. Test this code with various values of weight.
  • [start_points,end_points] = pick_start_end_point(W);
    % compute the heuristic
    [H,S] = perform_fast_marching_2d(W, end_points);
    % perform the propagation
    options.end_points = end_points;
    weight = 0.5; % should be between 0 and 1.
    [D,S] = perform_fast_marching_2d(W, start_points, options, H*weight);
    % compute the path
    p = extract_path_2d(D,end_points, options);
    % display
    clf;
    plot_fast_marching_2d(W,S,p,start_points,end_points);
    colormap jet(256);
    saveas(gcf, [name '-heuristic' num2str(weight) '.jpg'], 'jpg'); % save the figure in the current directory

  • As a final question, try to devise a fast way to estimate the remaining distance. If you have no idea, you can use the function perform_fmstar_2d, which implements two methods to achieve this.
  • Heuristic front propagation with a weight parameter of 0, 0.5 and 0.9 respectively.
    Notice how the explored area shrinks around the true path.

    Copyright © 2006 Gabriel Peyré

Lecture 3 – Geodesic Computations
on 3D Meshes

Abstract: The goal of this lecture is to manipulate the fast marching algorithm on triangulated meshes. Applications to geodesic extraction, Voronoi segmentation, remeshing and bending invariants are presented.

Setting up Matlab.

  • First download the Matlab toolboxes toolbox_fast_marching.zip and toolbox_graph.zip. Unzip them into your working directory. You should have directories toolbox_fast_marching/ and toolbox_graph/ in your path.
  • The first thing to do is to install these toolboxes in your path.
  • path(path, 'toolbox_fast_marching/');
    path(path, 'toolbox_fast_marching/toolbox/');
    path(path, 'toolbox_graph/');
    path(path, 'toolbox_graph/off/');

  • Recompile the mex files for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available in toolbox_fast_marching/ for MacOS and Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_fast_marching
    compile_mex;
    cd ..

Distance Computation on Triangulated Surface.

  • Using the fast marching on a triangulated surface, one can compute the distance from a set of input points. This function also returns the segmentation of the surface into geodesic Voronoi cells.
  • name = 'elephant'; % other choices include 'skull' or 'bunny'
    [vertex,faces] = read_mesh([name '.off']);
    nverts = max(size(vertex)); % number of vertices
    nstart = 8; % number of starting points
    start_points = floor(rand(nstart,1)*nverts)+1;
    options.end_points = []; % no end points: propagate over the whole mesh
    % perform the front propagation
    [D,S,Q] = perform_fast_marching_mesh(vertex, faces, start_points, options);
    % display the distance function
    col = D; col(col==Inf) = 0; % the color of the vertices
    clf; hold on;
    options.face_vertex_color = col;
    plot_mesh(vertex, faces, options);
    h = plot3(vertex(1,start_points),vertex(2,start_points), vertex(3,start_points), ‘r.’);
    set(h, ‘MarkerSize’, 25);
    hold off;
    colormap jet(256);
    axis tight; shading interp;
    % you can now display the segmentation function Q


    Example of distance function from a set of random
    starting points (upper row), and the corresponding
    Voronoi segmentation (bottom row).
  • You can even stop the propagation after a fixed number of steps and see the resulting partially propagated front.
  • options.nb_iter_max = …; % trying with a varying number of iterations
    [D,S,Q] = perform_fast_marching_mesh(vertex, faces, start_points, options);
    % display the distance


    Example of front propagation.

Geodesic Remeshing.

  • A regular sampling of the surface can be performed by greedily seeding samples in a farthest-point fashion. These points can then be linked according to the adjacency of their Voronoi cells, which gives a simple yet powerful remeshing scheme.
  • nbr_landmarks = …; % number of points, eg. 400
    W = ones(nverts,1); % the speed function, for now constant speed to perform uniform remeshing
    % perform the sampling of the surface
    landmark = farthest_point_sampling_mesh( vertex,faces, [], nbr_landmarks, options );
    % compute the associated triangulation
    [D,Z,Q] = perform_fast_marching_mesh(vertex, faces, landmark);
    [vertex_voronoi,faces_voronoi] = compute_voronoi_triangulation_mesh(Q,vertex,faces);
    % display the distance function (same as before)

    % display the remeshed triangulation (same but with vertex_voronoi and faces_voronoi)


    Farthest point sampling with an increasing number of points
    (upper row) and corresponding remeshing (bottom row).
  • One can use a non-constant speed function in order to obtain an adapted remeshing. The sampling will be denser in areas where the speed function is low.
  • % first kind of speed: low on the left, high on the right
    v = rescale(vertex(1,:));
    options.W = rescale(v>0.5, 1, 3); options.W = options.W(:);
    % do the remeshing

    % second kind of speed: continuously increasing
    v = rescale(-vertex(1,:),1,8);
    options.W = v(:);
    % do the remeshing






    Rows 1&2: uniform sampling and remeshing.
    Rows 3&4: adapted (split left/right) remeshing.
    Rows 5&6: adapted (continuously increasing) remeshing.

Bending Invariants of a Surface.

  • One can use the Isomap procedure to deform the surface into a bending-invariant signature, which is useful for articulation-invariant recognition of 3D surfaces. One first needs to compute the pairwise geodesic distances between points on the surface. One then looks for new 3D positions of the vertices such that the new Euclidean distances closely match the geodesic distances. In order to speed up the computation, the geodesic distances are computed only on a reduced set of landmark points, and the new 3D locations are then interpolated (a possible display sketch is given after the figure caption below).
  • nlandmarks = 300; % number of landmarks (low to speed up)
    landmarks = floor(rand(nlandmarks,1)*nverts)+1; % samples landmarks at random
    % compute the distance between landmarks and the rest of the vertices
    Dland = zeros(nverts,nlandmarks);
    for i=1:nlandmarks
    fprintf('.');
    [d,S,Q] = perform_fast_marching_mesh(vertex, faces, landmarks(i));
    Dland(:,i) = d(:);
    end
    fprintf('\n');
    % perform isomap on the reduced set of points
    D1 = Dland(landmarks,:); % reduced pairwise distances
    D1 = (D1+D1')/2; % force symmetry
    J = eye(nlandmarks) - ones(nlandmarks)/nlandmarks; % centering matrix
    K = -1/2 * J*D1*J; % inner product matrix
    % compute the rank-3 approximation of the inner product to compute the embedding
    opt.disp = 0;
    [xy, val] = eigs(K, 3, 'LR', opt);
    xy = xy .* repmat(1./sqrt(diag(val))', [nlandmarks 1]);
    % extend the embedding to the full set of points using geodesic interpolation
    vertex1 = zeros(nverts,3);
    deltan = mean(Dland,1);
    for x=1:nverts
    deltax = Dland(x,:);
    vertex1(x,:) = 1/2 * ( xy' * ( deltan-deltax )' )';
    end
    % display both the original mesh and the embedding

    Original mesh (left) and bending invariant (right).
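  • A possible display sketch (vertex1 is nverts x 3, hence the transpose; plot_mesh as above):
  • clf;
    subplot(1,2,1); plot_mesh(vertex,faces); shading interp; axis tight;
    subplot(1,2,2); plot_mesh(vertex1',faces); shading interp; axis tight;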

    Copyright © 2006 Gabriel Peyré

Lecture 4 – Mesh Processing

Abstract: The goal of this lecture is to manipulate a 3D mesh. This includes loading and displaying a 3D mesh and then processing it. The processing is based on computations involving various kinds of Laplacians, which extend the classical second-order derivatives to 3D meshes. They can be used to perform heat diffusion (smoothing), compression and parameterization of the mesh.

Setting up Matlab.

  • First download the Matlab toolbox toolbox_graph.zip. Unzip it into your working directory. You should have a directory toolbox_graph/ in your path. Download also the set of additional meshes in .off format: meshes.zip and unzip this file into your directory.
  • The first thing to do is to install this toolbox in your path.
  • path(path, 'toolbox_graph/');
    path(path, 'toolbox_graph/off/');
    path(path, 'toolbox_graph/toolbox/');
    path(path, 'meshes/');

  • Recompile the mex files for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available in toolbox_graph/ for MacOS and Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_graph
    compile_mex;
    cd ..

Mesh Loading and Displaying.

  • You can load a mesh from a file and display it. Remember that a mesh is given by two matrices: vertex stores the 3D vertex positions and face stores 3-tuples giving the indices of the vertices forming each face.
  • name = 'mushroom'; % other possibilities include 'venus'
    % load from file
    [vertex,face] = read_mesh([name '.off']);
    % display the mesh
    clf;
    plot_mesh(vertex,face);
    % remove the display of the triangle
    shading interp; camlight;

    Two examples of rendered triangle meshes.
  • A mesh can also be built by starting from a coarse triangulation and then subdividing it.
  • name = ‘sphere’; % another choice could be ‘L1’
    j = 2; % number of subdivision levels
    [vertex,face] = gen_base_mesh(name, 0); % the initial mesh
    [vertex,face] = gen_base_mesh(name, j);
    clf;
    plot_mesh(vertex,face);


    Two example of meshes computed by regular subdivision.

Heat Diffusion on 3D Meshes.

  • A Laplacian on a 3D mesh is an (n,n) matrix L, where n is the number of vertices, that generalizes the classical Laplacian of image processing to a 3D mesh. There are several forms of Laplacian, depending on whether it is symmetric (L'=L) and normalized (1 on the diagonal):
    L0 = D - W (symmetric, non-normalized)
    L1 = D^{-1}*L0 = Id - D^{-1}*W (non-symmetric, normalized)
    L2 = D^{-1/2}*L0*D^{-1/2} = Id - D^{-1/2}*W*D^{-1/2} (symmetric, normalized)

    where W is a weight matrix with W(i,j)=0 if (i,j) is not an edge of the graph, and D is the diagonal matrix of the row sums of W. There are several kinds of such weights:

    W(i,j)=1 (combinatorial)
    W(i,j)=1/|vi-vj|^2 (distance)
    W(i,j)=cot(alpha_ij)+cot(beta_ij) (conformal / harmonic)

    where {vi}_i are the vertices of the mesh, i.e. vi=vertex(:,i), and (alpha_ij,beta_ij) is the pair of angles adjacent to the edge (vi,vj). A gradient matrix G associated to the Laplacian L is an (m,n) matrix, where m is the number of edges of the mesh, that satisfies L=G'*G. It can be computed as G((i,j),k)=+sqrt(W(i,j)) if k=j, G((i,j),k)=-sqrt(W(i,j)) if k=i, and G((i,j),k)=0 otherwise. A minimal construction sketch is given after the code below.

  • In the following, you can compute gradient, weights and laplacian using the compute_mesh_xxx functions.
  • % kind of laplacian, can be 'combinatorial', 'distance' or 'conformal' (slow)
    laplacian_type = …;
    % load two different kind of laplacian and check the gradient factorization
    options.symmetrize = 1;
    options.normalize = 0;
    L0 = compute_mesh_laplacian(vertex,face,laplacian_type,options);
    G0 = compute_mesh_gradient(vertex,face,laplacian_type,options);
    disp(['Error (should be 0): ' num2str(norm(L0-G0'*G0, 'fro')) '.']);
    options.normalize = 1;
    L1 = compute_mesh_laplacian(vertex,face,laplacian_type,options);
    G1 = compute_mesh_gradient(vertex,face,laplacian_type,options);
    disp(['Error (should be 0): ' num2str(norm(L1-G1'*G1, 'fro')) '.']);
    % these matrices are stored as sparse matrix
    spy(L0);
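  • As a cross-check of the definitions above, here is a minimal sketch that builds the combinatorial Laplacian L = D - W directly from the adjacency matrix (it assumes triangulation2adjacency from toolbox_graph, which is used later in this lecture):
  • W = triangulation2adjacency(face); % W(i,j)=1 if (i,j) is an edge, 0 otherwise
    d = full(sum(W,2)); % vertex degrees
    L = spdiags(d, 0, length(d), length(d)) - W; % combinatorial Laplacian D - W
    disp(['Row sums (should be ~0): ' num2str(max(abs(full(sum(L,2)))))]);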

  • In order to smooth a vector f of size n (i.e. a function defined on each vertex of the mesh), one can perform a heat diffusion by solving the following PDE
    dF/dt = -L*F       with       F(.,t=0) = f
    until some stopping time t.
    When this diffusion is applied to each component of the vertex positions, f=vertex(i,:), it smoothes the 3D mesh. Implement this PDE using an explicit discretization in time (a possible sketch is given after the figure caption below).
  • % the time step should be small enough
    dt = 0.1;
    % stopping time
    Tmax = 10;
    % number of steps
    niter = round(Tmax/dt);
    % initialize the 3 vectors at time t=0
    vertex1 = vertex;
    % solve the diffusion
    for i=1:niter
    % update the position by solving the PDE
    vertex1 = vertex1 + …;
    end

    Heat diffusion on a 3D mesh, at times t=0, t=10, t=40, t=200.
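  • One possible explicit update (a sketch; L1 is one of the Laplacians computed above and vertex is of size 3 x nvert):
  • dt = 0.1; Tmax = 10; niter = round(Tmax/dt); % same example values as above
    vertex1 = vertex;
    for i=1:niter
    vertex1 = vertex1 - dt * (L1*vertex1')'; % explicit Euler step of dF/dt = -L*F
    end
    clf; plot_mesh(vertex1,face); shading interp; camlight;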
  • Another way to smooth a mesh is to perform the following quadratic regularization for each component f=vertex(i,:)
    F = argmin_g |f-g|^2 + t * |G*g|^2
    The solution of this optimization is given in closed form, using the Laplacian L=G'*G, as the solution of the following linear system:
    (Id+t*L)*F = f
    Solve this problem for various t on a 3D mesh. You can use the operator \ to solve the system (a possible sketch is given below). How does this method compare with the heat diffusion?
  • % solve the equation
    vertex1 = …;
    % display
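  • A possible sketch of the implicit solve (speye builds the sparse identity; t controls the amount of smoothing):
  • t = 10; % regularization strength
    nvert = size(vertex,2); % number of vertices
    vertex1 = ( (speye(nvert) + t*L1) \ vertex' )'; % solve (Id+t*L)*F = f for each coordinate
    clf; plot_mesh(vertex1,face); shading interp; camlight;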

Combinatorial Laplacian: Spectral Decomposition and Compression.

  • The combinatorial Laplacian is a linear operator (thus an NxN matrix, where N is the number of vertices). It depends only on the connectivity of the mesh, thus on face only. The eigenvectors of this matrix (which is symmetric and can thus be diagonalized by an SVD) form an orthogonal basis of the vector space of signals of N values (one real value per vertex). These functions are the extension of the Fourier oscillating functions to surfaces.
  • % combinatorial laplacian computation
    options.symmetrize = 1;
    options.normalize = 0;
    L0 = compute_mesh_laplacian(vertex,face,'combinatorial',options);
    %% Performing eigendecomposition
    [U,S,V] = svd(L0);
    % extract one of the eigenvectors
    c = U(:,end-10); % you can try with other vector with higher frequencies
    % assign a color to each vertex
    options.face_vertex_color = rescale(c, 0,255);
    % display
    clf;
    plot_mesh(vertex,face, options);
    shading interp; lighting none;



    Eigenvectors of the combinatorial laplacian with
    increasing frequencies from left to right.
  • Like the Fourier basis, the laplacian eigenvector basis can be used to perform an orthogonal decomposition of a function. In order to perform mesh compression, we decompose each coordinate X/Y/Z of the mesh into this basis. Once this decomposition has been performed, a compression is achieved by keeping only the biggest coefficients (in magnitude).
  • % Warning: vertex should be of size 3 x nvert
    keep = 5; % percentage of coefficients kept
    vertex2 = (U'*vertex')'; % projection of the vertices in the Laplacian basis
    % set the threshold to remove coefficients
    vnorm = sum(vertex2.^2, 1);
    vnorms = sort(vnorm); vnorms = vnorms(end:-1:1);
    nvert = size(vertex,2);
    thresh = vnorms( round(keep/100*nvert) );
    % remove small coefficients by thresholding
    vertex2 = vertex2 .* repmat( vnorm>=thresh, [3 1] );
    % reconstruction
    vertex2 = (U*vertex2')';
    % display
    clf;
    plot_mesh(vertex2,face);
    shading interp; camlight;
    axis tight;


    Spectral mesh compression performed
    by decomposition on the eigenvectors of the laplacian.

Mesh Flattening and Parameterization.

  • The eigenvectors of the combinatorial Laplacian can also be used to perform mesh flattening. Flattening means finding a 2D location for each vertex of the mesh. Here these two coordinates are given by the 2nd and 3rd eigenvectors of the Laplacian.
  • % load a mesh
    name = 'nefertiti';
    [vertex,face] = read_mesh([name '.off']);
    A = triangulation2adjacency(face);
    % perform the embedding using the combinatorial eigendecomposition
    xy = perform_spectral_embedding(2,A);
    % display the flattened mesh
    gplot(A,xy,'k.-');
    axis tight; axis square; axis off;

  • This combinatorial flattening does not use any geometric information, since it only uses the connectivity of the mesh. So any mesh with the same connectivity will have the same 2D embedding. In order to improve the quality of the embedding, one can use a conformal Laplacian, which approximates the Laplace-Beltrami operator of the continuous underlying surface.
  • % this time, we use the information from vertex to compute flattening
    xy = perform_spectral_embedding(2,A,vertex,face);
    % display
    gplot(A,xy,'k.-');
    axis tight; axis square; axis off;

  • Another way to compute the flattening is to use the Isomap algorithm. This algorithm is not based on a local differential operator such as the Laplacian. Instead, the geodesic distance between points on the mesh graph is first computed (see the other lectures for examples of geodesic computations on graphs). Then a 2D layout of the points is computed so that the planar distances match the geodesic distances.
  • % the embedding is now computed with isomap
    xy = perform_spectral_embedding(2,A,vertex);
    % display
    gplot(A,xy,'k.-');
    axis tight; axis square; axis off;

    Comparison of the flattening obtained with the combinatorial laplacian,
    the conformal laplacian and isomap.
  • Mesh parameterization is similar to flattening, except that the boundary of the mesh is fixed onto some given convex 2D curve. The flattening can then be proved to be 1:1 (no triangle flip) as long as the curve is convex and the weights are positive. While flattening involves a spectral computation (eigenvector extraction), which is very slow, mesh parameterization involves the solution of a sparse linear system, which is quite fast (even if the Laplacian matrix is ill-conditioned).
  • % pre-compute 1-ring, i.e. the neighbors of each triangle.
    ring = compute_1_ring(face);
    lap_type = 'combinatorial'; % the other choice is 'conformal'
    boundary_type = 'circle'; % the other choices are 'square' and 'triangle'
    % compute the parameterization by solving a linear system
    xy = compute_parametrization(vertex,face,lap_type,boundary_type,ring);
    % display
    clf;
    gplot(A,xy,'k.-');
    axis tight; axis square; axis off;


    The first row shows the parameterization using the combinatorial laplacian
    (with various boundary conditions). This assumes that all edge lengths are 1.
    To take into account the geometry of the mesh, the second
    row uses the conformal laplacian.

    Copyright © 2006 Gabriel Peyré

Lecture 5 – Graph-based
data processing

Abstract: The goal of this lecture is to manipulate data in arbitrary dimensions using graph-based methods. The points in the dataset are linked together in a graph structure, and geodesic computations can then be performed on this graph to compute distances on the dataset.

Setting up Matlab.

  • First download the Matlab toolboxes toolbox_dimreduc.zip and toolbox_graph.zip. Unzip them into your working directory. You should have directories toolbox_dimreduc/ and toolbox_graph/ in your path.
  • The first thing to do is to install this toolbox in your path.
  • path(path, 'toolbox_dimreduc');
    path(path, 'toolbox_dimreduc/toolbox/');
    path(path, 'toolbox_dimreduc/data/');
    path(path, 'toolbox_graph');

  • Recompile the mex files for your machine (this can produce some warnings). If it does not work, either use the already compiled mex files (they should be available for MacOS and/or Unix) or try to set up Matlab with a C compiler (e.g. gcc) using 'mex -setup'.
  • cd toolbox_graph
    compile_mex;
    cd ..

Distance Computation on Graphs.

  • You can load synthetic and real datasets (only frey_rawface is included in the distribution however)
  • name = 'swissroll'; % you can also try with 'scurve'
    n = 800; % number of points
    [X,col] = load_points_set( name, n );
    clf; plot_scattered(X,col);
    axis equal; axis off;

  • From points in arbitrary dimension, you can create a graph data structure using the nearest-neighbor rule (you can also try the alternative epsilon rule).
  • options.use_nntools = 0; % set to 1 if you have compile TStool for your machine, this will speed up computations
    options.nn_nbr = 7; % number of nearest neighbor
    % compute NN graph and the distance between nodes
    [D,A] = compute_nn_graph(X,options);
    % display the graph
    clf; plot_graph(A,X, col);
    axis tight; axis off;

  • You can now compute the geodesic distance on this graph using the Dijkstra algorithm.
  • % set up the length of the edges (0 for no edges)
    D(D==Inf) = 0; % weight on the graph
    % find some cool location for starting point
    [tmp,start_point] = min( abs(col(:)-mean(col(:)))); % starting point
    % compute the geodesic distance
    [d,S] = perform_dijkstra(D, start_point);
    % display the graph with distance colors
    clf; hold on;
    plot_scattered(X, d);
    h = plot3(X(1,start_point), X(2,start_point), X(3,start_point), ‘k.’);
    set(h, ‘MarkerSize’, 25);
    axis tight; axis off; hold off;


    Two examples of 3D point clouds (left) ; corresponding NN-graph (center) ;
    geodesic distance to the point in black (right), blue means close.

PCA and Isomap Flattening.

  • Principal component analysis (PCA) realizes a linear dimensionality reduction by projecting the data set on the axes of largest variance.
  • % dimension reduction using PCA
    [Y,xy] = pca(X,2);
    % display
    clf; plot_scattered(xy,col);
    axis equal; axis off;

  • In order to better capture the manifold geometry of the data, Isomap computes the geodesic distances between pairs of points, and finds a low-dimensional layout that best respects these geodesic distances.
  • % dimension reduction using Isomap
    options.nn_nbr = 7; % number of NN for the graph
    xy = isomap(X,2, options);
    % display
    clf; plot_scattered(xy,col);
    axis equal; axis off;


    Original 3D data set (left) ; 2D flattening using PCA (center) and using Isomap (right).
    Note how the PCA does not recover the simple 2D parameterization of the
    manifold since it is a linear process. In contrast, Isomap is able
    to « unfold » the curvy surface.

Dimension Reduction for Image Libraries.

  • We can use the same process of flattening for data of arbitrary dimension. We first use a simple library of translating disks.
  • name = ‘disks’;
    options.nbr = 1000;
    % Read database
    M = load_images_dataset(name, options);
    % turn it into a set of points
    a = size(M,1);b = size(M,2);n = size(M,3);
    X = reshape(M, a*b, n);
    % perform isomap
    options.nn_nbr = 7;
    options.use_nntools = 0;
    xy = isomap(X,2, options);
    k = 30;
    clf; plot_flattened_dataset(xy,M,k);
    % perform pca
    [tmp,xy] = pca(X,2);
    clf; plot_flattened_dataset(xy,M,k);

  • We can do the same computation on a more complex library of faces.
  • name = ‘frey_rawface’;
    options.nbr = 2000;
    % Read database
    M = load_images_dataset(name, options);
    % subsample at random
    n = 1000;
    sel = randperm(size(M,3));
    M = M(:,:,sel(1:n));
    M = permute(M,[2 1 3]); % fix x/y orientation of the faces
    % turn it into a set of points
    a = size(M,1);b = size(M,2);n = size(M,3);
    X = reshape(M, a*b, n);
    % perform isomap
    options.nn_nbr = 7;
    options.use_nntools = 0;
    xy = isomap(X,2, options);
    k = 30;
    clf; plot_flattened_dataset(xy,M,k);
    % perform pca
    [tmp,xy] = pca(X,2);
    clf; plot_flattened_dataset(xy,M,k);



    Dimension reduction for two different libraries of images.
    Left: translating disks, right: face images.
    Although the disks data set depends on 2D translation, this is not
    a flat (euclidean) manifold (it is a bit curvy due to the disk shape).
    This is why the PCA mapping does not recover exactly a 2D square.
    The face database exhibits a more complex embedding.

    Copyright © 2006 Gabriel Peyré

conclusion

0 – Basic Matlab instructions.
1 – Active contour and level sets.
2 – Front propagation in 2D and 3D.
3 – Geodesic computation on 3D meshes.
4 – Differential Calculus on 3D meshes.
5 – High Dimensional Data Processing.

Data Processing Using
Manifold Methods

This course is an introduction to the computational theory of manifolds. Manifold models arise in various areas of mathematics, image processing, data mining and computer science. Surfaces of arbitrary dimension can be used to model the non-linear datasets that one encounters in modern data processing. Numerical methods make it possible to exploit this geometric non-linear prior in order to extract relevant information from the data. These methods include in particular local differential computations (related to the Laplacian operator and its variants) and global distance methods (related to geodesic computations). In this course, you will learn how to perform differential and geodesic computations on images, volumes, surfaces and high-dimensional graphs.

The course includes a set of Matlab experiments. These experiments give an overview of various tasks in computer vision, image processing, learning theory and mesh processing. This includes computation of shortest paths, Voronoi segmentations, geodesic Delaunay triangulations, surface flattening, dimensionality reduction and mesh processing.

One should copy/paste the provided code into a file named e.g. tp.m, and launch the script directly from the Matlab command line with > tp;. Some of the scripts contain "holes" that you should try to fill on your own.

The Visualization ToolKit (VTK) is an open source, freely available software system

http://www.vtk.org/

VTK has been installed and tested on nearly every Unix-based platform, PCs (Windows 98/ME/NT/2000/XP), and Mac OSX Jaguar or later.

download:

http://www.vtk.org/get-software.php

technical paper:

http://www.vtk.org/pdf/dioot.pdf

Technical Overview: Software

  • Over 700 C++ classes
  • 350,000+ lines of C++ code (110,000 executable lines)
  • Designed using the approach of Rumbaugh et al. ( Object-Oriented Modelling and Design from Prentice-Hall)
  • 215,000+ lines of automatically generated Tcl wrapper code (similar counts for Python and Java)
  • In-line documentation (both in-code and man pages)
  • Easy to understand C++ code (honest!)
  • Designed to be extensible
  • Lots of examples, applications, test cases, and data
  • Supports portable multithreading and distributed memory for parallel algorithms (download J. Ahrens’s (Los Alamos Nat’l Lab) VTK Parallel Paper).

Technical Overview: 3D Graphics

  • Surface Rendering
  • Volume Rendering
    • A flexible software ray casting implementation
    • Supports texture-based volume rendering
    • Support for VolumePRO volume rendering hardware
    • Supports mixing opaque surface geometry and volume rendering
  • Rendering Primitives
    • points
    • lines
    • polygons
    • triangle strips
    • volumes
  • Interactive Viewer/Renderer
  • "3D Widgets" for interacting with data
  • Properties
    • ambient, ambient color
    • diffuse, diffuse color
    • specular, specular color
    • color (lights & object)
    • transparency
    • texture mapping
    • shading (flat/Gouraud)
    • backlighting on/off
  • Lights
    • infinite
    • spot
  • Cameras
    • parallel and perspective projection
    • nice methods like elevation, azimuth, zoom, reset
    • automatic camera/light creation

  • Device Independent: C++ code and/or Tcl, Java, Python scripts are independent of the renderer type. The renderer type is set at run-time with an environment variable.

  • Graphics Model
    • Lights illuminate the scene
    • Cameras define viewpoint
    • Actors specify geometry/properties
    • LOD actors support manual and automatic generation of level-of-detail to support interactive rendering for even the largest models.
    • Assemblies group actors into arbitrary hierarchies
    • Mappers define geometry/link into visualization pipeline
    • Renderers coordinate lights, cameras, actors to create image
    • Volumes are a type of actor with their own special properties

  • Special Features
    • Multiple windows/viewports
    • Red/blue stereo
    • Crystal eyes stereo
    • Motion and focal blur
    • Backface/frontface culling of polygons
    • Save images to various file formats including png, jpeg, tiff, bmp and ppm.

Technical Overview: Visualization

  • Data Types:
    • polygonal data (points, lines, polygons, triangle strips)
    • images and volumes (i.e., structured point datasets)
    • structured grids (e.g., finite difference grids)
    • unstructured grids (e.g, finite element meshes)
    • unstructured points
    • rectilinear grids

  • Cell Types:
    • vertex, poly-vertex
    • line, poly-line
    • triangle
    • triangle strip
    • pixel
    • quadrilateral
    • polygon
    • tetrahedron
    • voxel
    • hexahedron
    • wedge
    • pyramid

  • Attribute Types:
    • scalars (single valued plus grayscale, grayscale-alpha, rgb, and rgb-alpha).
    • vectors
    • 3×3 tensors
    • normals
    • texture coordinates (1-3D)
    • field data

  • Scalar algorithms
    • color mapping
    • carpet plots
    • iso-contouring: marching cubes
    • iso-contouring: dividing cubes
    • thresholding
    • scalar generation from other data (elevation, velocity, etc.)

  • Vector algorithms
    • hedgehogs
    • streamlines
    • dashed streamlines
    • stream points
    • stream surfaces
    • streampolygon
    • displacement plots/warping

  • Tensor algorithms
    • tensor ellipsoids
    • tensor glyphs
    • hyper-streamlines

  • Information Visualization
    • parallel coordinates
    • programmable glyphs
    • splatting
    • dimension reduction

  • Modelling algorithms
    • spheres, cones, cylinders, cubes, lines, planes, etc.
    • axes, cursors, text, outlines
    • implicit modelling
    • decimation
    • texture thresholding
    • boolean textures
    • glyphs
    • cutting
    • clipping (2D and 3D)
    • probing
    • normal generation
    • connectivity
    • triangle strip generation
    • linear and rotational extrusion
    • splatting
    • swept surfaces/volumes
    • multi-variate visualization
    • scattered/unstructured point visualization
    • appending, merging, cleaning data
    • 2D & 3D Delaunay triangulation (including alpha shapes)
    • Laplacian & Windowed sinc mesh smoothing
    • Surface reconstruction

  • Data Interface (Readers/Writers treat a single dataset; Importers/Exporters treat a scene): a variety of polygonal formats including stereo-lithography, MOVIE.BYU, Cyberware, etc.

    • our own VTK formats (including a parallel XML format) for all data types
    • Inventor Writer, IV Exporters
    • 3D Studio Importer
    • PLOT3D
    • PNM
    • RIB (RenderMan) Exporter
    • SLC (Volume) Reader
    • TIFF Writer
    • VRML Exporter
    • Wavefront .OBJ Exporter, .OBJ Reader
    • BMP reader and writer
    • Raw image formats

  • Visualization Pipeline: demand-driven data-flow with automatic network updates

    • Reference counting to reduce memory requirements
    • Uses sources, filters, mappers to start, process, and terminate network
    • Network looping and feedback supported
    • Strongly type-checked to enforce filter connectivity
    • Supports multiple input / multiple output filters

  • Annotation
    • 2D and 3D text
    • Scalar bar (scalar to color index)
    • x-y plots
    • Flying axes
    • Overlay plane drawing
    • Attach overlay annotation to 3D positions

Technical Overview: Imaging

  • Features
    • Uses cached, streaming pipeline so that you can operate on gigantic datasets (i.e., deals with pieces of data). This is done completely transparently.
    • Most imaging filters are multi-threaded for parallel execution
    • Fully integrated with 3D graphics/visualization pipeline

  • Filter types (a quick summary)
    • diffusion filters
    • Butterworth, low-pass, high-pass filters
    • dilation, erosion, skeleton
    • convolution
    • difference, arithmetic, magnitude, divergence, gradient, mean
    • distance
    • FFT
    • Fourier, Gaussian, Sobel
    • histogram
    • threshold
    • permutation, conversion, padding

In A Nutshell

  • What’s Cool About VTK
    • It’s free (although the books help!)
    • Easy to create graphics/visualization applications
    • C++ source code – you have a lot of control
    • Easy to derive new classes
    • Can prototype or build applications using "interpreted" languages Tcl, Python, and Java
    • User interface can be created fast with Tk or Java GUI class libraries
    • Can learn about graphics / visualization / imaging
    • Supports an extensive palette of 3D widgets
    • Platform/rendering library independent
    • Lots of advanced and very useful algorithms
    • Integrated software, not a bunch of unrelated snippets of code
    • You can convert data into pictures
    • Object-oriented
    • Heavily tested in real-world applications…not academic code
    • Large user base provides decent support
    • Commercial support and consulting available

  • What’s UnCool About VTK
    • Not a super-fast graphics engine…VTK uses C++ dynamic binding and a device independent graphics model.
    • C++ source code (so use Tcl, Python, or Java)
    • Very large…not a toy…you’ll need a decent system to use it effectively

The installer for the Windows platform contains vtk.exe which is essentially a wish.exe replacement that allows you to run VTK TCL scripts without doing any other installation or configuration. Tcl/Tk 8.4.13 is statically linked into the 5.0.4 build of vtk.exe. For Python or Java support, or for Linux or Macintosh systems, you will have to compile VTK from source code using CMake and a native build system for your platform.

Unsupported Linux Pre-Compiled Binaries (RPMS)

You can download RPMS for VTK from the Creatis VTK web page. Thanks go to the Creatis team at INSA-Lyon (France) for maintaining them. Questions regarding these packages should be forwarded to a Creatis representative.

a starting point for building 3D surface meshes or true volumetric tetrahedral grids

« nanosecond history »:

__________1987

Since William Lorensen and Harvey Cline's publication of Marching Cubes: A High Resolution 3D Surface Construction Algorithm in 1987, Marching Cubes has become the defining algorithm for the creation of 3D surface meshes.

Yet despite the success of Marching Cubes, the resulting mesh exhibits several weaknesses, including aliasing and terracing artifacts, less-than-optimal triangle quality, and large numbers of triangles. A multitude of techniques have been introduced to address these issues, including surface smoothing and triangle decimation algorithms. Beyond reducing artifacts, smoothing improves the effectiveness of triangle decimation algorithms and reduces errors during finite element analysis. However, many smoothing techniques fail to eliminate terracing because their local filter neighborhood does not encompass the width of the terrace. Additionally, smoothing a mesh without consideration of the original data may smooth away crucial fine details as well as mesh generation artifacts.

___________1998

In 1998 Sarah F. F. Gibson published Constrained Elastic Surface Nets: Generating Smooth Surfaces from Binary Segmented Data. Her work attempts to preserve the fine detail present in the original data by applying smoothing directly to the binary data, and introduces the concept of a constraint to limit the deviation of the smoothed data from the original.

Modified SurfaceNets attempts to apply Gibson's SurfaceNet technique to the problem of smoothing a Marching Cubes mesh. By defining SurfaceNet nodes on the Marching Cubes mesh and constraining their movement to their Voronoi regions, Modified SurfaceNets aims to reduce terracing while preserving the fine detail of the original image.
———————–

see http://en.wikipedia.org/wiki/Marching_cubes

and see this present blog: https://stef2cnrs.wordpress.com

———————-

This class of surface reconstruction methods is fine for immersed SURFACES, and only for SURFACES.

If you want to generate not only surface models from your data but also to create true
volumetric tetrahedral grids suitable for advanced 3D finite-element simulations, then open your mind… Usually, these grids are constructed using a flexible advancing-front algorithm. Again, special care is taken to obtain meshes of high quality, i.e., tetrahedra with bad aspect ratio are avoided…

see commercial software (with quite good Matlab integration): http://www.comsol.com/

———————-

In biophotonics, we have this process:

1/ produce your images (in practice a stack of 2D images)

2/ perform image cleaning and various image-processing steps, with or without information from the other images in the stack (often each image is simply processed individually with ImageJ)

3/ the final step is segmentation & binarisation, also with or without information from the other images in the stack

4/ generation of a surface (a/ isosurface for rendering; b/ 3D surface mesh)

5/ generation of true 3D volumetric tetrahedral grids suitable for advanced 3D finite-element simulations

6/ a 4D solver with time

7/ statistics and comparison between biological, medical and multiphysics data and the 4D simulations

matlab .m: compute a segmented image with Otsu's n-thresholding

Iseg = OTSU(I,n) computes a segmented image (Iseg) containing n classes by means of Otsu’s n-thresholding method (Otsu N, A Threshold Selection Method from Gray-Level Histograms, IEEE Trans. Syst. Man Cybern. 9:62-66;1979). Thresholds are computed to maximize a separability criterion of the resultant classes in gray levels.

http://www.mathworks.de/matlabcentral/fileexchange/loadFile.do?objectId=16205&objectType=file
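A minimal usage sketch, assuming the otsu.m file from the File Exchange link above is on the Matlab path (the image name is just an example):

I = imread('cameraman.tif'); % any grayscale image
Iseg = otsu(I, 3); % segment into 3 gray-level classes
imagesc(Iseg); axis image; colormap gray;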


I MOVED THIS BLOG FROM WORDPRESS TO BLOGGER. This blog is now at
ex-ample.blogspot.com
