2D -> 3D
In the 2D-image-to-3D-object reconstruction, the first step is to iterate through the sliced images and apply "north", "south", "east", and "west" Sobel filters to each, taking the geometric average of the four responses to generate an edge map. Each pixel of the edge-detection output is then compared against a threshold value and assigned either a "0" or a "1". This procedure turns a grey-scale image into a black-and-white "composite" image (image below). The threshold must be tuned to get a reasonable composite image; it changes drastically depending on what type of object the image files were generated from (e.g. medical imaging might use 0.95, while a computer-generated set might use 0.1). CUDA parallelization is a natural fit for this problem, since we can simply assign a GPU thread to each pixel.
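The per-pixel step above could be sketched as a CUDA kernel like the one below. This is a minimal sketch, not the author's actual code: the kernel name, the row-major layout, the border handling, and the exact combination of the four directional responses are all assumptions.

```cuda
// Sketch: directional Sobel filtering + thresholding, one GPU thread per pixel.
// Assumes `in` is a row-major float image; border pixels are set to 0 for brevity.
__global__ void edge_composite(const float *in, unsigned char *out,
                               int w, int h, float threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
        out[y * w + x] = 0;            // skip the image border
        return;
    }

    // Load the 3x3 neighborhood.
    float p[3][3];
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i)
            p[j + 1][i + 1] = in[(y + j) * w + (x + i)];

    // Vertical and horizontal Sobel responses; the "north"/"south" kernels are
    // sign-flipped copies of each other, as are "east"/"west", so the four
    // directional magnitudes reduce to |gy|, |gy|, |gx|, |gx|.
    float gy = p[0][0] + 2*p[0][1] + p[0][2] - p[2][0] - 2*p[2][1] - p[2][2];
    float gx = p[0][2] + 2*p[1][2] + p[2][2] - p[0][0] - 2*p[1][0] - p[2][0];
    float n = fabsf(gy);
    float e = fabsf(gx);

    // Geometric average of the four directional responses.
    float edge = powf(n * n * e * e, 0.25f);

    // Threshold to a black-and-white "composite" value.
    out[y * w + x] = (edge > threshold) ? 1 : 0;
}
```

A 2D launch such as `edge_composite<<<dim3((w+15)/16,(h+15)/16), dim3(16,16)>>>(...)` assigns one thread per pixel, matching the parallelization described above.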
The next part is to "hide" any voxel that is covered on all six sides. Such voxels can never be seen and are merely a waste of space. Again, the preferred parallelization method is CUDA, since each thread can be assigned a voxel and computed independently of the result of any other voxel. (This has been coded in a way that prevents race conditions while still performing the computation in memory.) Finally, we write all of the "visible" voxel values out to an object file.
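The voxel-hiding pass might look like the following sketch. The kernel name and the flat indexing scheme are assumptions; in particular, writing into a separate `visible` buffer is just one simple way to keep the pass race-free, and may not be how the author's in-memory version works.

```cuda
// Sketch: mark a voxel hidden when all six face neighbors are filled.
// `vox` holds 0/1 occupancy for an nx x ny x nz grid, flattened as
// idx = (z * ny + y) * nx + x; grid-boundary voxels always stay visible.
__global__ void hide_covered(const unsigned char *vox, unsigned char *visible,
                             int nx, int ny, int nz)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= nx || y >= ny || z >= nz) return;

    int idx = (z * ny + y) * nx + x;
    if (!vox[idx]) { visible[idx] = 0; return; }     // empty voxel

    // A voxel on the grid boundary always has an exposed face.
    if (x == 0 || y == 0 || z == 0 ||
        x == nx - 1 || y == ny - 1 || z == nz - 1) {
        visible[idx] = 1;
        return;
    }

    // Covered on all six sides -> hidden; otherwise visible.
    bool covered = vox[idx - 1]       && vox[idx + 1] &&
                   vox[idx - nx]      && vox[idx + nx] &&
                   vox[idx - nx * ny] && vox[idx + nx * ny];
    visible[idx] = covered ? 0 : 1;
}
```

Because every thread reads only the immutable `vox` array and writes only its own cell of `visible`, no synchronization is needed between threads.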