EIDORS: Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software


Cheating with EIDORS

Andy Adler, William R.B. Lionheart

In this tutorial, we point out some simple approaches to cheating that work with common linear regularization techniques. As the solution strategy becomes more complex, more sophisticated ways to cheat clearly become available.

Sample Problem: The happy transform

To motivate the problem, assume that EIT measurement data have been acquired from an image which resembles a 'sad' face. Being of an optimistic outlook, we wish the reconstructed image to represent a 'happy' face instead. Fig. 1 illustrates this 'happy transform'.

Fig. 1: The happy transform

EIDORS Setup

To configure EIDORS, we use the following MATLAB code. Images are shown dark on a light background, so we set the colours as follows:

>> run /path/to/eidors/startup.m
>> calc_colours('backgnd',[1,1,1])
>> calc_colours('greylev',.01);
In order to simulate data for these images, we use the following code for a small (256-element) face model:
% Define Face Shapes: Small Face
% $Id: tutorial210a.m 1535 2008-07-26 15:36:27Z aadler $

clear p;
p.leye=   [78,97,98,117,118,141];
p.reye=   [66,82,83,102,103,123];
p.rsmile= [40:41, 53:55, 69:71, 86:88];
p.lsmile= [43:44, 57:59, 73:75, 91:93];
p.lsad =  [31,43,58,57,74,73,92,93,113,112,135];
p.rsad =  [28,40,53,54,69,87,70,107,88,108,129];
p.eyes= [p.leye,p.reye];
p.sad = [p.lsad,p.rsad];
p.smile= [p.rsmile, p.lsmile];
small_face = p;

% Simulate data for small face
imdl= mk_common_model('b2c');
e= size(imdl.fwd_model.elems,1);
simg= eidors_obj('image','small face');
simg.fwd_model= imdl.fwd_model;

% homogeneous data
simg.elem_data= ones(e,1);
vh= fwd_solve( simg);

% inhomogeneous data for sad face model
simg.elem_data(small_face.eyes)= 2;
simg.elem_data(small_face.sad)=  1.5;
vi= fwd_solve( simg);
and for a larger (576-element) face model:
% Define Face Shapes: Large Face
% $Id: tutorial210b.m 1535 2008-07-26 15:36:27Z aadler $

clear p;
p.sad=  [ ...
     53; 57; 69; 73; 86; 87; 91; 92; 106; 107; 111; 112; 127; 128;
    129; 133; 134; 135; 147; 151; 152; 153; 157; 158; 159; 165;
    171; 172; 177; 178; 179; 184; 185; 186; 192; 193; 199; 200;
    205; 206; 207; 212; 213; 214; 220; 221; 227; 228; 229; 235;
    236; 243; 244; 251; 252; 253; 259; 260; 261; 267; 268; 275;
    276; 283; 284; 285; 292; 293; 301; 310; 319; 320]; 

p.happy= [ ...
    69; 70; 71; 73; 74; 75; 86; 87; 88; 89; 91; 92; 93; 94; 106;
   107; 108; 109; 111; 112; 113; 114; 127; 128; 129; 130; 131; 133;
   134; 135; 136; 137; 147; 151; 152; 153; 154; 155; 157; 158; 159;
   160; 161; 165; 171; 172; 176; 177; 178; 179; 180; 183; 184; 185;
   186; 187; 192; 193; 199; 200; 220; 221; 227; 228; 229; 251; 252;
   253; 259; 260; 261; 283; 284; 285; 292; 293; 319; 320]; 

p.halfy= [ ...
    57; 69; 70; 71; 73; 86; 87; 88; 89; 91; 92; 106; 107; 108; 109;
   111; 112; 127; 128; 129; 130; 131; 133; 134; 135; 147; 151; 152;
   153; 154; 155; 157; 158; 159; 165; 171; 172; 176; 177; 178; 179;
   180; 184; 185; 186; 192; 193; 199; 200; 212; 213; 214; 220; 221;
   227; 228; 229; 243; 244; 251; 252; 253; 259; 260; 261; 275; 276;
   283; 284; 285; 292; 293; 310; 319; 320];

large_face= p;

Approach #1: Careful selection of noise

Typically, a reconstruction algorithm is presented with white Gaussian noise (WGN) added to the data. One technique to perform the happy transform is to carefully select the noise. In this case, we simulated homogeneous (vh) and inhomogeneous (vi) data on a 256-element FEM. Subsequently, 17.75 dB of WGN was added to vi, and the images were reconstructed using the algorithm of Adler and Guardo (1996). Images were reconstructed for many different noise realizations and reviewed by the author to determine which cases corresponded to the happy transform.
% Test image for different noise
% $Id: tutorial210c.m 3848 2013-04-16 16:55:57Z aadler $

il_g= mk_common_model('c2c2',16);

num_tries=6; clear vi_n;
for i= 1:num_tries 
   vi_n(:,i) = add_noise(2, vi, vh);
end

img = inv_solve( il_g, vh, vi_n );
img.calc_colours.greylev = .01;
img.show_slices.img_cols = num_tries;
show_slices( img );
Fig. 2 shows two successful 'happy' images.

Fig. 2: Images with 17.75 dB WGN which were selected as 'happy'

In order to determine the frequency of 'happy' noise, 2000 images were reviewed and 41 were selected, corresponding to an occurrence rate of approximately 2%. While careful noise selection is not a mathematical or computational technique, it is commonly encountered in research practice, and thus merits mention here.
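The scaling behind a target SNR in dB can be illustrated outside EIDORS as well. The NumPy sketch below is an invention for illustration (the helper add_wgn is not part of EIDORS, which provides its own add_noise function); it scales a white Gaussian noise vector so that the signal-to-noise ratio matches a requested value such as 17.75 dB:

```python
import numpy as np

def add_wgn(signal, snr_db, rng=None):
    """Add white Gaussian noise to `signal` at the requested SNR (dB).

    The noise is scaled so that 10*log10(P_signal / P_noise) == snr_db.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(signal.shape)
    p_signal = np.mean(signal**2)
    p_noise = np.mean(noise**2)
    scale = np.sqrt(p_signal / (p_noise * 10**(snr_db / 10)))
    return signal + scale * noise

# stand-in for one frame of measurement data (208 channels is typical
# for a 16-electrode adjacent-drive EIT system)
v = np.sin(np.linspace(0, 2 * np.pi, 208))
vn = add_wgn(v, 17.75, rng=np.random.default_rng(0))
```

Selecting 'happy' noise then amounts to drawing many such realizations with different random seeds and keeping the ones whose reconstructions please the author.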

Approach #2: Careful selection of priors

The Bayesian framework for regularization interprets the image penalty term as a priori information about the underlying image probability distribution. In practice, however, this term is selected using ad hoc or heuristic techniques. If the prior does not correspond to the real case, then the reconstructed image will be biased. This idea is key for approach #2.

In a Tikhonov regularization scheme, image amplitude is penalized. We use the following formulation:

x = (HᵀH + λR)⁻¹ Hᵀy
where the regularization matrix is
R = √( trace HᵀH ) · I
If, however, we know a priori that our data were measured from a happy face, then we would not wish to penalize image pixels which we know to be large. Thus, for each pixel i in the happy face, we set
R(i,i) = ½ √( trace HᵀH )
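As a sanity check of this formulation, the weighted-Tikhonov solution can be sketched in NumPy. Everything below is invented for illustration (a random stand-in sensitivity matrix H and hypothetical 'known large' pixel indices); none of these names come from EIDORS:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 10                        # measurements, image pixels
H = rng.standard_normal((m, n))      # stand-in sensitivity (Jacobian) matrix
y = rng.standard_normal(m)           # stand-in difference measurement data
lam = 0.1                            # hyperparameter lambda

# standard Tikhonov prior: R = sqrt(trace(H'H)) * I
r0 = np.sqrt(np.trace(H.T @ H))
R = r0 * np.eye(n)

# the "cheat": halve the penalty on pixels we claim to know are large
known_large = [2, 5, 7]              # hypothetical happy-face pixels
R[known_large, known_large] *= 0.5

# one-step regularized solution x = (H'H + lam*R)^-1 H'y
x = np.linalg.solve(H.T @ H + lam * R, H.T @ y)
```

The down-weighted pixels are cheaper to make large, so the solution is biased toward images that are big exactly where the prior says they should be.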
In order to implement this prior function with EIDORS, we define the function tutorial210_cheat_tikhonov.m as follows:
function Reg= tutorial210_cheat_tikhonov( inv_model )
% Reg= tutorial210_cheat_tikhonov( inv_model )
% Reg      => output regularization term
% Parameters:
%   elems  = inv_model.tutorial210_cheat_tikhonov.cheat_elements;
%          => elements whose weights are modified
%   weight = inv_model.tutorial210_cheat_tikhonov.cheat_weight;
%          => new weight to set these elements to

pp= fwd_model_parameters( inv_model.fwd_model );
idx= 1:pp.n_elem;
weight= ones(1,pp.n_elem);
weight( inv_model.tutorial210_cheat_tikhonov.cheat_elements ) = ...
        inv_model.tutorial210_cheat_tikhonov.cheat_weight;

Reg = sparse( idx, idx, weight );
Images are reconstructed with this prior as follows:
% Reconstruct images with cheating Tikhonov prior (large model)
% $Id: tutorial210e.m 2674 2011-07-13 07:21:17Z bgrychtol $

lmdl= mk_common_model('c2c');
lmdl.RtR_prior= @tutorial210_cheat_tikhonov;
lmdl.tutorial210_cheat_tikhonov.cheat_weight= .5;
clear im_st;

% Normal Tikhonov prior
lmdl.tutorial210_cheat_tikhonov.cheat_elements= [];
im_st(1)= inv_solve(lmdl, vh, vi);

% Tikhonov prior weighted for sad face
lmdl.tutorial210_cheat_tikhonov.cheat_elements=  ...
     large_face.sad;
im_st(2)= inv_solve(lmdl, vh, vi);

% Tikhonov prior weighted for happy face
lmdl.tutorial210_cheat_tikhonov.cheat_elements=  ...
     large_face.happy;
im_st(3)= inv_solve(lmdl, vh, vi);

% Tikhonov prior weighted for half face
lmdl.tutorial210_cheat_tikhonov.cheat_elements=  ...
     large_face.halfy;
im_st(4)= inv_solve(lmdl, vh, vi);
im_st(1).calc_colours.greylev = .01;
im_st(1).show_slices.img_cols = 4;
show_slices( im_st);
The effect of careful prior selection is shown in Fig. 3. In this case, images were reconstructed on a 576-element FEM (chosen to differ from the 256-element simulation mesh).

Fig. 3: Reconstructed images illustrating the effect of image priors, using a different mesh for simulation and reconstruction. Images are numbered from left to right. Image 1: Tikhonov prior with no weighting. Image 2: Tikhonov prior with weighting for positions in the sad face. Image 3: Tikhonov prior with weighting for positions in the happy face. Image 4: Tikhonov prior with weighting for the sad face (left) and happy face (right).

In order to enhance this effect, we commit an inverse crime: we simulate and reconstruct on the same mesh, putting the Tikhonov prior information exactly where it needs to be to produce the happy face (Fig. 4). Images are reconstructed with this prior as follows:

% Reconstruct images with cheating Tikhonov prior (small model)
% $Id: tutorial210d.m 2647 2011-07-12 08:33:48Z bgrychtol $

smdl= mk_common_model('b2c');
smdl.RtR_prior= @tutorial210_cheat_tikhonov;
smdl.tutorial210_cheat_tikhonov.cheat_weight= .5;

% Normal Tikhonov prior
smdl.tutorial210_cheat_tikhonov.cheat_elements= [];
im_st(1)= inv_solve(smdl, vh, vi);

% Tikhonov prior weighted for sad face
smdl.tutorial210_cheat_tikhonov.cheat_elements=  ...
    [small_face.eyes, small_face.sad];
im_st(2)= inv_solve(smdl, vh, vi);

% Tikhonov prior weighted for happy face
smdl.tutorial210_cheat_tikhonov.cheat_elements=  ...
    [small_face.eyes, small_face.smile];
im_st(3)= inv_solve(smdl, vh, vi);

% Tikhonov prior weighted for half face
smdl.tutorial210_cheat_tikhonov.cheat_elements=  ...
    [small_face.eyes, small_face.rsmile, small_face.lsad];
im_st(4)= inv_solve(smdl, vh, vi);
im_st(1).calc_colours.greylev = .01;
im_st(1).show_slices.img_cols = 4;
show_slices( im_st);

Fig. 4: Reconstructed images illustrating the effect of image priors, using the same mesh for simulation and reconstruction. Images are numbered from left to right. Image 1: Tikhonov prior with no weighting. Image 2: Tikhonov prior with weighting for positions in the sad face. Image 3: Tikhonov prior with weighting for positions in the happy face. Image 4: Tikhonov prior with weighting for the sad face (left) and happy face (right).

Approach #3: Edge based priors

It is somewhat difficult to properly model a Laplacian filter on a finite element mesh, but one way to approximate it is as follows: for each edge between elements i and j, put 1 at positions (i,i) and (j,j), and −1 at (i,j) and (j,i).

Such a Laplacian filter can be used as a regularization prior to penalize high-frequency components (edges) in the image. On the other hand, if we know where the edges are, then edges should not be penalized (or should be penalized less) in those places.
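As a sketch of this construction, independent of EIDORS (the function laplace_prior and the four-element chain below are invented for illustration), the weighted graph Laplacian can be assembled in NumPy as:

```python
import numpy as np

def laplace_prior(n_elem, edges, roi, weight):
    """Graph-Laplacian prior: for each edge (i, j) add 1 at (i,i) and
    (j,j), and -1 at (i,j) and (j,i).  Edges crossing the ROI boundary
    (exactly one endpoint inside `roi`) are scaled by `weight`, so known
    edges are penalized less."""
    R = np.zeros((n_elem, n_elem))
    in_roi = np.zeros(n_elem, dtype=bool)
    in_roi[list(roi)] = True
    for i, j in edges:
        fac = weight if in_roi[i] != in_roi[j] else 1.0
        R[i, i] += fac
        R[j, j] += fac
        R[i, j] -= fac
        R[j, i] -= fac
    return R

# four elements in a chain 0-1-2-3, with element 2 inside the "known" region
R = laplace_prior(4, [(0, 1), (1, 2), (2, 3)], roi=[2], weight=0.3)
```

Each edge contributes a zero-row-sum block, so constant images are not penalized at all; edges that cross the boundary of the known region are down-weighted, which is the same effect the EIDORS implementation achieves with its cheat_weight parameter.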

In order to implement this prior function with EIDORS, we define the function tutorial210_cheat_laplace.m as follows. This function is more complicated than tutorial210_cheat_tikhonov.m because we need to search for adjoining elements in the FEM.

function Reg= tutorial210_cheat_laplace( inv_model )
% Reg= tutorial210_cheat_laplace( inv_model )
% Reg      => output regularization term
% Parameters:
%   elems  = inv_model.tutorial210_cheat_laplace.cheat_elements;
%          => elements whose weights are modified
%   weight = inv_model.tutorial210_cheat_laplace.cheat_weight;
%          => new weight to set these elements to

pp= fwd_model_parameters( inv_model.fwd_model );

ROI = zeros(1,pp.n_elem);
ROI( inv_model.tutorial210_cheat_laplace.cheat_elements ) = 1;

Iidx= [];
Jidx= [];
Vidx= [];
for ii=1:pp.n_elem
  el_adj = find_adjoin( ii, pp.ELEM );
  for jj=el_adj(:)'
      if (ROI(ii) + ROI(jj)) == 1 % edge crosses the ROI boundary
         fac= inv_model.tutorial210_cheat_laplace.cheat_weight *.5;
      else 
         fac = .5;
      end
      Iidx= [Iidx,      ii, ii, jj, jj];
      Jidx= [Jidx,      ii, jj, ii, jj];
      Vidx= [Vidx, fac*([1, -1, -1,  1]) ];
  end
end
Reg = sparse(Iidx,Jidx, Vidx, pp.n_elem, pp.n_elem );

% find elements which adjoin (share an edge with) element ee
function elems= find_adjoin(ee, ELEM)
   nn= ELEM(:,ee);
   [d,e]= size(ELEM);
   ss= zeros(1,e);
   for i=1:d
     ss= ss+ any(ELEM==nn(i));
   end
   elems= find(ss==d-1);
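As an aside, the adjacency test in find_adjoin (two elements adjoin when they share d−1 nodes, i.e. a full edge for 2D triangles) can be mirrored in a short Python sketch; the four-triangle mesh below is invented for illustration:

```python
def find_adjoin(ee, elems):
    """Return indices of elements sharing an edge with element `ee`.

    `elems` is a list of node-index tuples, one per element; in 2D,
    two triangles adjoin when they share d-1 = 2 nodes."""
    nodes = set(elems[ee])
    d = len(elems[ee])
    return [i for i, el in enumerate(elems)
            if i != ee and len(nodes & set(el)) == d - 1]

# toy mesh: four triangles on five nodes
elems = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 2, 4)]
adj = find_adjoin(0, elems)   # triangles sharing an edge with triangle 0
```

Like the MATLAB version, this excludes the element itself (which shares all d of its own nodes).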
Images are reconstructed with this prior as follows (using almost identical code to the previous example):
% Reconstruct images with cheating Laplace prior (large model)
% $Id: tutorial210g.m 2674 2011-07-13 07:21:17Z bgrychtol $

lmdl= mk_common_model('c2c');
lmdl.RtR_prior= @tutorial210_cheat_laplace;
lmdl.tutorial210_cheat_laplace.cheat_weight= .3;
clear im_st;

% Normal Laplace prior
lmdl.tutorial210_cheat_laplace.cheat_elements= [];
im_st(1)= inv_solve(lmdl, vh, vi);

% Laplace prior weighted for sad face
lmdl.tutorial210_cheat_laplace.cheat_elements=  ...
     large_face.sad;
im_st(2)= inv_solve(lmdl, vh, vi);

% Laplace prior weighted for happy face
lmdl.tutorial210_cheat_laplace.cheat_elements=  ...
     large_face.happy;
im_st(3)= inv_solve(lmdl, vh, vi);

% Laplace prior weighted for half face
lmdl.tutorial210_cheat_laplace.cheat_elements=  ...
     large_face.halfy;
im_st(4)= inv_solve(lmdl, vh, vi);
im_st(1).calc_colours.greylev = .01;
im_st(1).show_slices.img_cols = 4;
show_slices( im_st);
Fig. 5 shows the effect of such careful edge-preserving prior selection (with no inverse crime). Known edges are weighted at 0.3× that of other edges in the image.

Fig. 5: Reconstructed images illustrating the effect of edge-preserving image priors, using a different mesh for simulation and reconstruction. Images are numbered from left to right. Known edges are weighted at 0.3× that of other edges in the image. Image 1: Edge prior with no weighting. Image 2: Edge prior with weighting for positions in the sad face. Image 3: Edge prior with weighting for positions in the happy face. Image 4: Edge prior with weighting for the sad face (left) and happy face (right).

In order to enhance this effect, we commit an inverse crime: we simulate and reconstruct on the same mesh, putting the edge-based (Laplace) prior information exactly where it needs to be to produce the happy face (Fig. 6). Known edges are weighted at 0.3× that of other edges in the image. Images are reconstructed with this prior as follows:

% Reconstruct images with cheating Laplace prior (small model)
% $Id: tutorial210f.m 2674 2011-07-13 07:21:17Z bgrychtol $

smdl= mk_common_model('b2c');
smdl.RtR_prior= @tutorial210_cheat_laplace;
smdl.tutorial210_cheat_laplace.cheat_weight= .3;
clear im_st;

% Normal Laplace prior
smdl.tutorial210_cheat_laplace.cheat_elements= [];
im_st(1)= inv_solve(smdl, vh, vi);

% Laplace prior weighted for sad face
smdl.tutorial210_cheat_laplace.cheat_elements=  ...
    [small_face.eyes, small_face.sad];
im_st(2)= inv_solve(smdl, vh, vi);

% Laplace prior weighted for happy face
smdl.tutorial210_cheat_laplace.cheat_elements=  ...
    [small_face.eyes, small_face.smile];
im_st(3)= inv_solve(smdl, vh, vi);

% Laplace prior weighted for half face
smdl.tutorial210_cheat_laplace.cheat_elements=  ...
    [small_face.eyes, small_face.rsmile, small_face.lsad];
im_st(4)= inv_solve(smdl, vh, vi);
im_st(1).calc_colours.greylev = .01;
im_st(1).show_slices.img_cols = 4;
show_slices( im_st);

Fig. 6: Reconstructed images illustrating the effect of edge-preserving image priors, using the same mesh for simulation and reconstruction. Images are numbered from left to right. Known edges are weighted at 0.3× that of other edges in the image. Image 1: Edge prior with no weighting. Image 2: Edge prior with weighting for positions in the sad face. Image 3: Edge prior with weighting for positions in the happy face. Image 4: Edge prior with weighting for the sad face (left) and happy face (right).

An even more dramatic effect is obtained by setting the penalty for known edges to zero (Fig. 7).


Fig. 7: Reconstructed images illustrating the effect of edge-preserving image priors, using the same mesh for simulation and reconstruction. Images are numbered from left to right. Known edges are weighted at 0.0× that of other edges in the image. Image 1: Edge prior with no weighting. Image 2: Edge prior with weighting for positions in the sad face. Image 3: Edge prior with weighting for positions in the happy face. Image 4: Edge prior with weighting for the sad face (left) and happy face (right).

Conclusion

The goal of this document is to illustrate some of the things that can go wrong with the algorithms provided with EIDORS. While the treatment in this document is lighthearted, it is surprisingly easy to unwittingly develop mathematical algorithms which are subject to variants of these cheats. We hope that these ideas will help clarify the kinds of possible errors, and help researchers to avoid them.