Gatech
- `requirements.txt`
- `Final_data`: shared folder from Google Drive for downloading the dataset and the AlexNet weights. After downloading, follow the `README.docx` in `Final_data` to put them into the codebase's directories accordingly.

Code to pre-process the Openi dataset to recognize front vs. side X-ray images, so as to separate the Openi-all dataset into the Openi-front-only dataset:
Put the `.py` files and `y.csv` under the folder that contains the images.

- `y.csv`: 168 observations; human-coded labels (by myself). 0: side, 1: front.
- `image.py`: run this first; it takes about 230 s on my PC.
- `data_prep.py`
- `logistics.py`: writes `y_output.csv`; accuracy: 92% (a rough sketch of this step follows the list).
- `move_pic.py`: separates the front and side pictures. Double-checked visually (by myself): 93.59%.
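The exact features live in `data_prep.py`; purely as an illustration of the approach (simple pixel statistics of the grayscale images fed to a logistic regression against the labels in `y.csv`), here is a hypothetical sketch. The label column, file pattern, and feature set are my assumptions, not the repo's code:

```python
import glob
import numpy as np
import pandas as pd
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# y.csv holds the 168 human-coded labels: 0 = side, 1 = front.
# Taking the first column, and sorting files by name, are assumptions.
y = pd.read_csv('y.csv').iloc[:, 0].values

def features(path):
    """Crude per-image pixel statistics; data_prep.py's real features may differ."""
    img = np.asarray(Image.open(path).convert('L'), dtype=np.float64)
    half = img.shape[1] // 2
    return [img.mean(), img.std(), img[:, :half].mean(), img[:, half:].mean()]

files = sorted(glob.glob('*.png'))[:len(y)]
X = np.array([features(f) for f in files])

# logistics.py reports ~92% accuracy; this sketch will not reproduce that number.
print(cross_val_score(LogisticRegression(), X, y, cv=5).mean())
```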
Run `img_list.py` to get the file-name lists. Then copy `front_list.csv`, `side_list.csv`, `labeled_front_right.csv`, `labeled_side_right.csv`, `should_be_side.csv`, `should_be_front.csv`, `noise.csv`, and `img_divide.py` into the folder in which you want to divide the images into front vs. side, and run `img_divide.py`.
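`img_divide.py` itself is in the repo; as a hypothetical sketch of what dividing by these CSV lists could look like (the one-file-name-per-row CSV layout and the destination folder names are assumptions):

```python
import csv
import os
import shutil

def move_listed(list_csv, dest_dir, src_dir='.'):
    """Move every file named in list_csv (one file name per row) into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    with open(list_csv) as f:
        for row in csv.reader(f):
            if row:  # skip blank rows
                src = os.path.join(src_dir, row[0])
                if os.path.exists(src):
                    shutil.move(src, dest_dir)

move_listed('front_list.csv', 'front')
move_listed('side_list.csv', 'side')
```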
- Code to do a descriptive-statistics analysis of the pixel values of the grayscale images.
- The Descriptive-Openi and Descriptive-NIH codes are similar.
- In the notebooks, set your Spark home first via `findspark.init('PUT YOUR SPARK HOME HERE')`, and set the directory paths in the ipynb files (see the sketch after this list).
- `barplot.R`: uses ggplot2 in R. Please install R and the ggplot2 package first.
- Start the notebooks with `jupyter-notebook --ip=192.168.0.131 --no-browser`; please change the IP accordingly.
- `bvlc-alexnet.npy` is in the Google Drive link, under `Final_data`.
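For reference, a minimal sketch of the findspark setup plus a Spark pass over the image files. The image directory, file pattern, and the exact statistics computed are assumptions, not the notebooks' code:

```python
import findspark
findspark.init('PUT YOUR SPARK HOME HERE')  # same placeholder as in the notebooks

import glob
import numpy as np
from PIL import Image
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('pixel-stats').getOrCreate()

def pixel_stats(path):
    # Descriptive statistics of one grayscale image's pixel values
    img = np.asarray(Image.open(path).convert('L'), dtype=np.float64)
    return (path, float(img.mean()), float(img.std()),
            float(img.min()), float(img.max()))

files = glob.glob('PUT YOUR IMAGE DIR HERE/*.png')  # hypothetical directory
df = (spark.sparkContext.parallelize(files)
      .map(pixel_stats)
      .toDF(['file', 'mean', 'std', 'min', 'max']))
df.describe('mean', 'std').show()
```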
- This folder doesn't contain data; please download the data from the Google Drive link.
- This folder contains the code for running GoogLeNet on the Openi_all and Openi_front_only datasets:
  - `Cropped_image_Lenet_more_metric.ipynb`: read in the pickled 90x90 data set and run GoogLeNet for Openi_all.
  - `Cropped_image_Lenet_front_only_more_metrics.ipynb`: read in the pickled 90x90 data set and run GoogLeNet for Openi_front_only.
  - `Google_Lenet_Xray.ipynb`: read in the normal and abnormal images, downsize them, and save them into pickled files.
  - `process.ipynb`: read in the original downloaded images and divide them into normal and abnormal images.

This folder contains the code for running AlexNet on the Openi_all and Openi_front_only datasets:
- `alexnet_cutting_point.py`: the pretrained AlexNet.
- `Xray_AlexNet_train_feature_extraction_debug_cutting_point.py`: read in the pickled 90x90 data set and run AlexNet for Openi_all and Openi_front_only.

`Cropped_image_Lenet_more_metric.ipynb`, `Cropped_image_Lenet_front_only_more_metrics.ipynb`, and `Xray_AlexNet_train_feature_extraction_debug_cutting_point.py` are set up for Openi_front. To change from Openi_front to Openi_all, uncomment the following lines:
```python
#normal_img_load = '../dataset/openi/pickled_cropped_img/2697_cropped_normal_imgs_and_labels_90x90.p'
#abnormal_img_load = '../dataset/openi/pickled_cropped_img/3517_cropped_abnormal_imgs_and_labels_90x90.p'
#normal_imgs = normal_img_load_90x90['images'].reshape(2697, 90, 90, 1)
#abnormal_imgs = abnormal_img_load_90x90['images'].reshape(3517, 90, 90, 1)
```
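These pickles are produced by `Google_Lenet_Xray.ipynb`; judging from the loading code above, each file holds a dict whose `'images'` array reshapes to N x 90 x 90 x 1. A hedged sketch of writing that format (the `'labels'` key and the helper itself are my guesses, not the notebook's code):

```python
import pickle
import numpy as np
from PIL import Image

def pickle_images(paths, label, out_path, size=(90, 90)):
    """Downsize grayscale X-rays to 90x90 and pickle them with their labels."""
    imgs = np.stack([np.asarray(Image.open(p).convert('L').resize(size))
                     for p in paths])
    data = {'images': imgs,                        # reshapeable to (N, 90, 90, 1)
            'labels': np.full(len(paths), label)}  # 'labels' key is an assumption
    with open(out_path, 'wb') as f:
        pickle.dump(data, f)
```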
To cut at fc6 instead of fc7, comment out the block marked `# ------------ Cutting the AlexNet at the second final layer (fc7)----------#` and uncomment the block marked `# ------------ Cutting the AlexNet at the third final layer (fc6)----------#`.
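For intuition only (this is not the repo's code): "cutting" the pretrained AlexNet means taking activations at fc6, fc7, or maxpool5 and training a fresh classifier head on top, while everything below the cut stays frozen. A TF 1.x-style sketch, assuming the usual `bvlc-alexnet.npy` layout of `{layer_name: [weights, biases]}`; all variable names are illustrative:

```python
import numpy as np
import tensorflow as tf  # TF 1.x API, matching the 2017-era notebooks

# bvlc-alexnet.npy: dict of {layer_name: [weights, biases]} in the common TF port;
# these frozen weights would drive the layers below the cutting point.
pretrained = np.load('bvlc-alexnet.npy', encoding='latin1', allow_pickle=True).item()

# Feature size depends on the cutting point:
#   fc6 or fc7 -> 4096; maxpool5 (flattened 6*6*256) -> 9216
feats = tf.placeholder(tf.float32, [None, 4096])
labels = tf.placeholder(tf.int64, [None])  # 0 = normal, 1 = abnormal

# Only the new head is trained
w = tf.Variable(tf.truncated_normal([4096, 2], stddev=0.01))
b = tf.Variable(tf.zeros([2]))
logits = tf.matmul(feats, w) + b

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
train_op = tf.train.AdamOptimizer(0.0001).minimize(loss, var_list=[w, b])
```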
Start the notebooks with `jupyter-notebook --ip=192.168.0.131 --no-browser`; please change the IP accordingly. `bvlc-alexnet.npy` is in the given Google Drive link, since it exceeds GitHub's 100 MB file-size limit.

This folder holds the code for transfer learning: starting from the pre-trained AlexNet, it uses different cutting points to train on the pickled NIH dataset. Every Jupyter notebook includes its results; when you re-run these ipynb files, the same result should be reproduced for each cell.
- `jupy_fc6_lr0002.ipynb`: training, validation, and test using the pre-trained AlexNet, cutting point at the fc6 layer, learning rate 0.0002.
- `jupy_fc6_lr0001.ipynb`: training, validation, and test using the pre-trained AlexNet, cutting point at the fc6 layer, learning rate 0.0001.
- `jupy_fc7_lr0002.ipynb`: training, validation, and test using the pre-trained AlexNet, cutting point at the fc7 layer, learning rate 0.0002.
- `jupy_fc7_lr0001.ipynb`: training, validation, and test using the pre-trained AlexNet, cutting point at the fc7 layer, learning rate 0.0001.
- `jupy_maxpool5.ipynb`: training, validation, and test using the pre-trained AlexNet, cutting point at the maxpool5 layer, learning rate 0.0001.
- `bvlc-alexnet.npy`: the AlexNet weights in TensorFlow.

In `jupy_fc6_lr0002.ipynb`, the final Jupyter cell output shows each epoch's results as well as 5 figures.
The running time is about 150 min on a server with 48 CPU cores, 2 Titan X GPUs, and 128 GB RAM.
This folder holds the code for GoogLeNet training, validation, and testing:

- `pickled_data` directory (input data)
- `pickled_cropped_img` directory (data after processing)
- `Cropped_image_Lenet_front_only_more_metrics.ipynb`: train/validate/test LeNet on the front-only Openi pickled images.
- `Cropped_image_Lenet_more_metric.ipynb`: train/validate/test LeNet on all Openi pickled images (see the metrics sketch below).
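The exact metric set is whatever the notebook cells print; as a generic example of "more metrics" for a binary normal/abnormal task with scikit-learn (the helper name, threshold, and metric choice are mine, not the notebooks'):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def report_metrics(y_true, y_score, threshold=0.5):
    """Print common binary-classification metrics from NumPy label/score arrays."""
    y_pred = (y_score >= threshold).astype(int)
    print('accuracy :', accuracy_score(y_true, y_pred))
    print('precision:', precision_score(y_true, y_pred))
    print('recall   :', recall_score(y_true, y_pred))
    print('F1       :', f1_score(y_true, y_pred))
    print('AUC      :', roc_auc_score(y_true, y_score))
```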
- `requirements.txt`: a pip freeze of the required packages. Run `pip3 install -r requirements.txt` to install all of them.
- This README explains the structure of the code base in detail and gives step-by-step instructions to reproduce our results.