Writing the Program
The following is a description of the design and practical process of creating the application.
Reading a DICOM File
A CT volume is stored as a series of DICOM files, one per slice. There are various ways of reading DICOM files, and we chose a VTK reader to facilitate the 3D rendering. The CT slices are stored as files in a directory, which the VTK reader traverses, reading every file and stacking the slices into a matrix in the correct order. Once the files have been read, the matrix can be exported to a NumPy array, which is necessary for manipulating the data with Python's numerical operations.
Deriving a 2D Image from CT Data
To extract an image from the CT data, the mean value of each column in a CT slice forms one pixel of the synthesised CR image, so that one row of the synthesised image consists of the mean values of every column of a particular slice. A complete image is generated by stacking the rows obtained from all the slices of an examination. See Image 4.2.1 for an image created with this method.
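The projection described above amounts to a single NumPy reduction. Assuming the volume is indexed as (slice, row, column), a minimal sketch (the function name is our own, not the application's):

```python
import numpy as np

def synthesise_cr(volume):
    """Collapse a CT volume into a 2D image.

    volume is assumed to be indexed (slice, row, column); each
    output row is the per-column mean of one axial slice, so the
    result has shape (slices, columns).
    """
    return volume.mean(axis=1)
```

For example, a toy volume `np.arange(12.0).reshape(2, 2, 3)` is collapsed into a 2-by-3 image whose rows are the column means of the two slices.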
The result of the procedure described in 3.3.2 is an image in which the skeleton is barely visible (Image 4.2.1), because too much soft tissue interferes with the clarity of the image. To solve this problem, a segmentation method was constructed that takes a threshold value as its input parameter. Every voxel with a value above the threshold is marked as true in a separate boolean matrix, which is then multiplied element-wise with the original matrix extracted from the DICOM files. With a suitable threshold value this results in a matrix where the skeleton is easier to see. The extraction and manipulation of matrix values was done using logical matrix operations in NumPy. Compare Images 4.2.1 and 4.2.2 for the effects of this procedure.
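The mask-and-multiply step above can be sketched in two NumPy operations (hypothetical helper name; the threshold is in Hounsfield units):

```python
import numpy as np

def remove_soft_tissue(volume, threshold):
    """Zero out every voxel at or below the threshold.

    A boolean mask of the voxels above the threshold is multiplied
    element-wise with the volume, as in the method described above.
    """
    mask = volume > threshold
    return volume * mask
```

Because multiplying by a boolean array treats True as 1 and False as 0, soft-tissue voxels become zero while bone voxels keep their original values.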
Rotating the Volume
A method (see 2.3. Object-oriented Programming) for rotating a 3D volume was constructed (using a method from VTK) in order to correct for CT examinations where the patient's hip is not parallel with the x-axis of a CT slice. This method receives an input in degrees and rotates the volume around the z-axis accordingly. Compare Images 4.2.2 and 4.2.3 for the effect of rotating a volume before deriving an image.
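The application performs this rotation with a VTK transform. Purely as an illustration of the underlying operation, a nearest-neighbour rotation of one axial slice around the z-axis can be written in NumPy alone (a hypothetical helper, not the project's VTK code):

```python
import numpy as np

def rotate_slice(img, degrees):
    """Rotate a 2D slice around its centre (nearest-neighbour).

    For each output pixel, the inverse rotation is applied to find
    the corresponding source pixel; pixels mapped from outside the
    image are left at zero.
    """
    theta = np.deg2rad(degrees)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    ys_c, xs_c = ys - cy, xs - cx
    # inverse-rotate the output coordinates back into the source image
    src_y = np.cos(theta) * ys_c + np.sin(theta) * xs_c + cy
    src_x = -np.sin(theta) * ys_c + np.cos(theta) * xs_c + cx
    src_y = np.rint(src_y).astype(int)
    src_x = np.rint(src_x).astype(int)
    inside = (src_y >= 0) & (src_y < h) & (src_x >= 0) & (src_x < w)
    out = np.zeros_like(img)
    out[inside] = img[src_y[inside], src_x[inside]]
    return out
```

Applying the same 2D rotation to every slice of the stack is equivalent to rotating the whole volume around the z-axis.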
Creating a DICOM File
For the prosthesis modelling program to accept the image, a suitable DICOM “profile” was created, using the DICOM header and dataset from an original CR image as a basis. However, since each DICOM image requires several unique identifiers, the existing ones were replaced with new, randomly generated ones. Several other tags were also replaced or removed, depending on whether they were necessary, in order to reduce the amount of false or misleading information contained in the new DICOM file. To determine which tags should be changed, an iterative procedure was used: a few tags were changed at a time and the results of the changes were documented. To reduce the time required to test different tags, the DICOM PS3.3 2016b Information Object Definitions were studied to find out which tags were mandatory and which were optional (NEMA, 2016).
The images were then tested against a PACS workstation. Before access was granted to Sectra's PACS, a free PACS program called OsiriX Lite was used. When the program could create DICOM files that were compatible with OsiriX Lite, the next step was to test the files on an actual Sectra PACS. This was done to ensure that the images created by the program would not be rejected by the hospital's software. The images were tested on Sectra's PACS 17.1 through Sectra Preop Online (Appendix I: Images, image I.1), a demo service that allows remote access to a PACS testing environment. Once the files were uploaded to the PACS, some basic distance measurements were made to ensure that distances and pixel spacing were correct. Towards the final stages of the project, an orthopaedic surgeon from Karolinska modelled a hip prosthesis on an image derived by the application and on a traditional CR image, in order to validate that prosthesis planning based on the two different images resulted in a prosthesis of the same size. These measurements were done using the Ortho Toolbox in the PACS.
Designing a Graphical User Interface
The main idea for the Graphical User Interface (GUI) was a simple design where the user is not overloaded with unnecessary features and buttons. A prototype was constructed from this idea and subsequently presented for feedback to the radiologist and the orthopaedic surgeon who will be using the program. The feedback was then taken into account and implemented.
Constructing a Graphical User Interface
The Graphical User Interface (GUI) was created using the PyQt4 library, which combines and connects the different classes that make up the program. PyQt4 has back-ends for both VTK and Matplotlib, which are both central to the GUI design. For the sake of this report, the two main windows of the program will be referred to as the Render Window (Image 4.1.1) and the Image Window (Image 4.1.2). Both windows are PyQt4 QWidgets, which the user can swap between using buttons labelled “Next” and “Back”. The “Next” button on the Render Window also triggers the methods that rotate the CT volume, extract the voxels with an HU value greater than zero, and derive a 2D image of the volume. The slider on the Image Window is connected to the segmentation method, mentioned in 3.3.4, that extracts voxels above a threshold value. The “Save” button on the Image Window saves a directory as a string, which is then passed to the save method that creates a DICOM file of the 2D image based on the CT's DICOM tags (see 4.1 for descriptive images of the GUI).
Compiling the Code
The compilation of the code into an executable file was done using the Py2Exe library. This library allows the user to compile any Python script, with its accompanying libraries, into an “exe” file that runs in a Microsoft Windows environment without a Python development environment.
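A minimal py2exe setup script for this step might look as follows (a sketch; the script name `application.py` is a placeholder for the application's main module):

```python
# setup.py -- run with: python setup.py py2exe
from distutils.core import setup
import py2exe  # importing the package registers the "py2exe" command

# "windows" (rather than "console") builds a GUI executable
# that opens without a console window
setup(windows=["application.py"])
```

Running the command produces a `dist` directory containing the executable together with the bundled Python runtime and libraries.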
Graphical User Interface
The first window of the application consists of the rendered CT volume and three buttons (Figure 4.1.1). The “Load new” button allows the user to choose a new volume to render. The “Calibration Line” button adds a horizontal guide to the render, which helps the user align the hip horizontally. The “Next” button moves the program to the next part and, in the same process, records the number of degrees rotated. Holding the left mouse button and moving the mouse up or down rotates the volume.
The Image Window is the second window of the application (Figure 4.1.2). It consists of the derived image and the possibility of changing the derivation method from mean-value to max-value calculation. It also has a slider for choosing the intensity threshold and buttons for going back to the render or saving the image.
Outputs from the created application are presented below. Figure 4.2.1 shows an image derived from the data seen in the 3D render in Figure 4.1.1. This image shows what happens when the 3D data is projected into a 2D image without applying soft-tissue removal or rotation.
The following images show what the application can do in terms of tissue removal and rotation, using the methods explained in section 3.3. The underlying code can be seen in Appendices III through V.
In Figure 4.2.2 there is a visible fold in the hip bone, marked in red. This has been corrected in Figure 4.2.3 by rotating the image 9.8 degrees. Unlike Figure 4.2.1, both of these images have been derived using the method that removes soft tissue below a certain threshold of Hounsfield units. The lines encircled in blue are 50 mm long and serve as calibration references when using Sectra Orthopaedic Solutions.
Figure 4.2.5 shows a final derived image with Sectra Orthopaedic Solutions applied. Using this software, a hip prosthesis model was fitted by orthopaedic surgeon Harald Brismar at Karolinska University Hospital.
Our programming choices
Deriving a 2D image by extracting the mean values of rows or columns of CT slices may not yield the best possible results, because air within the volume may skew the mean values. This method was sufficient for this project, but any future attempt to create a similar program may benefit from exploring additional methods to improve image quality, for example removing the CT bed through segmentation. This would have allowed us to extract only the bone tissue from the volume and create a new volume consisting solely of bone and marrow tissue. Deriving a mean-value image from such a volume would probably remove some of the problems caused by soft tissue and the bedding, resulting in a cleaner image.
We experimented with segmentation at the beginning of the project, but decided that it was too unpredictable in the long run, because different images have different intensity ranges. Another problem with segmentation was that the bone tissue and the bedding used in the CT had approximately the same intensity values, which made it very difficult to extract the bone tissue without also extracting the bedding.
Apart from the aforementioned methods we also tried maximum intensity projection (Scientific Volume Imaging, 2016), but discarded this method because it does not show the bone marrow in the femur and enhances the bed and certain other disturbances in the image.
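For reference, maximum intensity projection is the same reduction as the mean-value derivation, with `max` in place of `mean` (hypothetical helper name):

```python
import numpy as np

def mip(volume):
    """Maximum intensity projection along the row axis of each
    slice: the brightest voxel in each column wins, which is why
    the dense bed and other artefacts dominate the result."""
    return volume.max(axis=1)
```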
Without removing all the soft tissue, the images were too messy and cluttered, and it was very difficult to see the patient's bones. We could not set a fixed threshold value for the extraction method, because the HU values of bone vary between patients, so we decided to add a slider to the GUI. This way, whoever uses the application can individualise the threshold value for each specific patient. We decided not to include any methods for changing image contrast or brightness, because these functions are already included in Sectra's PACS.
At first we wanted to create our own DICOM file from scratch and then import and copy tags from the CT DICOM selected as the input file. After spending a week with the DICOM standard, blogs, and the Internet in search of solutions, we decided to give up on this approach because of an issue we believe is related to the DICOM header. We instead used an existing DICOM “profile” from a CR image and changed a few tags so that it suited our image. This method is not optimal, since it could cause confusion between our image and the one we used as a source for the DICOM tags. For our project, however, this is not really an issue, because the application is not meant to be used with the hospital's PACS network but with a separate PACS. Still, we remain curious about how a DICOM file could be created from scratch, should development continue.