Tuesday, October 16, 2012

The Brain of Richard App


Use your pointing device to zoom, pan & rotate the brain of Richard

Link
http://jaanga.github.com/brainofrichard/

My friend Richard has a brain tumor. To be more precise, he has an oligodendroglioma in his posterior cingulate gyrus, that is to say Brodmann area 23. His brain has been scanned on numerous occasions using an MRI scanner at UCSF. Richard was able to obtain the scan data and make it available to me - all of it 2D images in DICOM format. DICOM is a frequently used data format in medical imaging, as it allows patient data and other alphanumeric and binary data to be stored along with digital images.
One of the many scans of the brain of Richard

The ZIP file I received was for a scan taken on 18 January 2012 and has over 1,200 images. These are all 2D images and mostly look like the image just above. The images come in a variety of series of sections or slices, and each scan appears to be about two millimeters from the next. There are series of horizontal slices, as well as vertical slices from front to back and from side to side.

The question that Richard and I posed each other was: would it be possible to design and build an app that could somehow recombine the images and display them in some 3D manner? And, if so, would the result be useful in any way? It should be pointed out that neither Richard nor I have any experience with medical imaging, though we both have good experience with 3D and digital imaging.

Our first thought was to see if we could reassemble the images in 3D in an approximation of the original, and then reduce the opacity of the scans so as to be able to see through a series of them. This post is the first report on the results of the experiment.

The software we used is Three.js - the WebGL framework that enables GPU-accelerated 3D graphics in a browser. Three.js is able to read and display 2D images as 'textures' applied to 3D objects. Three.js can read PNG, JPG and GIF only.

The first operation was to convert the DICOM files to PNG (the most open format). On a Windows computer I was able to do the conversions using IrfanView and XnView - both popular Windows apps, readily available at no charge. They both handle converting, renaming and making the background transparent.

The next operation was to load the images into a web page using Three.js. Below is the code I used. I won't explain all the variables. The critical elements were setting the material to be double-sided and transparent. Suffice it to say, it's not rocket science.
 
brainApp.buildBrain = function() {
  if ( scans ) { scene.remove( scans ); }  // clear any previously loaded series
  var geometry = new THREE.PlaneGeometry( 200, 200, 10, 10 );
  scans = new THREE.Object3D();
  scans.current = 0;
  for ( var i = 1, l = hack.count; i <= l; i++ ) {
    // each slice is a PNG named 1.png, 2.png, ... in the series folder
    var map = THREE.ImageUtils.loadTexture( hack.dir + i + '.png' );
    // double-sided and transparent are the critical settings
    var material = new THREE.MeshBasicMaterial( { map: map, opacity: hack.opacityDefault, side: THREE.DoubleSide, transparent: true } );
    var mesh = new THREE.Mesh( geometry, material );
    mesh.rotation.x = hack.angle;
    // step each slice along by the scan spacing
    mesh.position.set( 0, hack.startY + i * hack.deltaY, hack.startZ + i * hack.deltaZ );
    scans.add( mesh );
  }
  scene.add( scans );
};
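The loop reads its layout from a settings object named 'hack'. For reference, here is a sketch of the kind of object it expects. The field names match what the loop reads, but the particular values below are illustrative, not the app's actual numbers.

```javascript
// Hypothetical settings object consumed by brainApp.buildBrain.
// Field names match the ones the loop reads; values are illustrative.
var hack = {
  dir: 'scans/axial/',     // folder holding 1.png, 2.png, ...
  count: 60,               // number of slices in the series
  opacityDefault: 0.05,    // low opacity so dozens of stacked planes stay see-through
  angle: -Math.PI / 2,     // lay each plane flat, for horizontal slices
  startY: -60, deltaY: 2,  // roughly 2 mm spacing between slices
  startZ: 0,   deltaZ: 0
};

// Position of slice i, mirroring the mesh.position.set(...) call in the loop:
function slicePosition(i) {
  return [ 0, hack.startY + i * hack.deltaY, hack.startZ + i * hack.deltaZ ];
}
```

With sixty slices two units apart, the stack spans from just above -60 to 60 along the Y axis - one settings object per scan series is all it takes to switch between the horizontal and vertical stacks.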

It's hard to describe the thrill I had when I had dealt with enough issues that the images finally appeared. I am a designer. When I design something, I have a perception of what I am designing in my brain. Nevertheless, there is that moment when a thing you are working on begins to transform itself from raw material into the thing it's meant to be - begins to come alive. In architecture, what was a pile of wood begins to be a home. A lump of marble becomes a sculpture. In this case, there I was, hitting the Enter key and then watching a bunch of JavaScript code transform into Richard. Magic.

Over the last two or three weeks I have spent a number of hours on the project and identified a number of interesting things:

There is a website that displays MRI scans of vegetables. Have a look at the wonderful InsideInsides.com. The developer, Andy Ellison, has kindly authorized me to take apart his animated GIF files and reassemble them in 3D. I have added a cactus scan and an artichoke scan to the File Open menu.

The right-side menu is built using the FOSS dat.gui - a lightweight controller library for JavaScript. This project was my first foray into working with dat.gui. The interesting thing here is that the dat.gui that comes with Three.js is an older version. The dat.gui from the home site has many more features and fewer bugs.
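For anyone curious what driving a menu with dat.gui looks like, here is a minimal sketch. The control names and ranges below are assumptions for illustration, not the app's actual menu.

```javascript
// Illustrative settings the menu would control (not the app's actual values)
var params = { opacity: 0.05, zoom: 0, rotate: 0, pan: 0 };

// Wire up the controls; runs in the browser with the dat.gui script loaded
function buildMenu(onOpacityChange) {
  var gui = new dat.GUI();
  // add(object, property, min, max) creates a slider for a numeric property
  gui.add(params, 'opacity', 0, 1).onChange(onOpacityChange);
  gui.add(params, 'zoom', -100, 100);
  gui.add(params, 'rotate', 0, 360);
  gui.add(params, 'pan', -100, 100);
  return gui;
}
```

The onChange callback is where the app would walk the scans object and update each material's opacity.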

There is a playable demo included which is part of an ongoing effort to "Put Theo in a Bottle". The demo was built using the HTML5 audio element. The interesting thing here is that the audio element has a ton of cool features that allow it to be used as a timeline controller.
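As a sketch of the timeline idea - assuming a hypothetical 'player' element id and a showSlice function, neither of which is the demo's actual code - the audio clock can drive which slice is shown:

```javascript
// Map the audio clock to a slice index, clamped to the valid range (pure, testable)
function frameForTime(currentTime, duration, frameCount) {
  var i = Math.floor((currentTime / duration) * frameCount);
  return Math.min(Math.max(i, 0), frameCount - 1);
}

// Browser wiring (assumed element id and callback; not the demo's actual markup)
function startTimeline(showSlice, frameCount) {
  var player = document.getElementById('player');  // an <audio controls> element
  function animate() {
    requestAnimationFrame(animate);
    if (!player.paused) {
      showSlice(frameForTime(player.currentTime, player.duration, frameCount));
    }
  }
  animate();
}
```

Because the audio element handles play, pause, seek and looping for free, the scrub bar doubles as a timeline scrubber for the 3D display.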

All of these and more are worth a double-click. So over the next few days, I plan to write a detailed post about each of these thoughts.

But more than that, we are beginning to consider where to go next with all this. There is a huge amount of data to explore, and it may contain all sorts of fascinating things to play with. For example, MRI scans may contain anisotropic data, and Three.js supports anisotropic rendering. Could that be a useful thing to explore?
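If we do go down that road, the texture-filtering side of anisotropy in Three.js is at least easy to reach. A sketch, noting that 'renderer' and 'map' here refer to the objects in the code above, and that the GPU caps the value:

```javascript
// A GPU supports anisotropic filtering only up to some maximum,
// so clamp whatever we ask for to what the hardware reports.
function clampAnisotropy(requested, maxSupported) {
  return Math.min(requested, maxSupported);
}

// In the app (sketch): set this on the texture before it is first rendered.
// map.anisotropy = clampAnisotropy(16, renderer.getMaxAnisotropy());
```

Whether the anisotropy in diffusion MRI data maps onto anything this simple is exactly the open question.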

After I pointed Andy Ellison - an MRI technologist at Boston University Medical School - to the app, he responded as follows:
I'm super impressed. Usually we need the original DICOM format to do that. Very clever, and built into a browser, wicked.
If Andy has this thought regarding Richard's and my first little baby steps, then it might be fun to see what somebody who actually knows what they are doing could accomplish with this sort of project...

Link
http://jaanga.github.com/brainofrichard/


Link to code for demo in header
http://jaanga.github.com/blode/#jaanga.githb.com/Blode/Brain-of-Richard-App


See also
http://www.nytimes.com/2012/10/09/health/labs-seek-new-ways-to-look-inside-the-body.html

Notes
The scans can take quite a while to load, and after the load they have to attain their transparency. Some of the files have over 30 megapixels of transparency to sort out, so it can take a minute or so to settle down. If you see a lot of black, the model has not finished loading. Note: there's a lot that can be done to optimize the process.

The other issue is trying to deal with 3D using a touchpad instead of a mouse to move the models. If you are having issues with the touchpad, try clicking on 'Zoom', 'Rotate' or 'Pan' in the right-side menu - that is, clicking on the indicators, not sliding them. The 3D user interface is still at an early stage.