SSD work


Replicating: SSD from https://github.com/balancap/SSD-Tensorflow
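Assuming a fresh start, the repo is cloned first (standard git; the directory name is the clone default):

 git clone https://github.com/balancap/SSD-Tensorflow.git
 cd SSD-Tensorflow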

According to the repo's page, the minimal SSD example should run as-is, but we found that a fundamental file is missing: the notebooks/ directory has no __init__.py. This causes an error when the iPython notebook ssd_notebook.ipynb is executed; without this init file in that directory, "from notebooks import visualization" won't work and fails with "module not found".

Solution: I copied the __init__.py file from the nets/ directory to the notebooks/ directory.
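From the repo root, the fix is a one-line copy (an empty __init__.py would likely work just as well, since the file only needs to mark notebooks/ as a Python package):

 cp nets/__init__.py notebooks/__init__.py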

Dataset:

  1. Create a directory called VOC2007_DOWNLOADED/ with two sub-directories, train_val/ and test/.

  2. Invoke the following downloads:

  3. cd VOC2007_DOWNLOADED/train_val
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
    
    cd ../test
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
    
    # not needed: the devkit tarball contains mainly .m files
    # wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar
  4. Then untar each of these files in place (e.g. tar -xvf VOCtrainval_06-Nov-2007.tar, and likewise for the test tarball); each extracts into its own VOCdevkit/VOC2007/ tree.

  5. (No longer needed; kept for reference.) Originally, we went to the main dir /tf_files/SSD-Tensorflow, made another directory called VOC2007, and copied the JPEGImages and Annotations directories into VOC2007/:

  6. // NOT NEEDED ANY MORE
    /// cp -a VOC2007_DOWNLOADED/VOCdevkit/VOC2007/JPEGImages VOC2007/
    /// cp -a VOC2007_DOWNLOADED/VOCdevkit/VOC2007/Annotations VOC2007/
    
    This is not needed any more: the train_val and test sets are extracted separately, and DATASET_DIR below is pointed at the correct locations, which takes care of the data-copying problem.
  7. Next, run the conversion script to convert the images and annotations to TFRecord format. Keep the train and test outputs in separate directories, /tmp/train_val_TFRECORDS and /tmp/test_TFRECORDS:

 mkdir /tmp/train_val_TFRECORDS
 DATASET_DIR=./VOC2007_DOWNLOADED/train_val/VOCdevkit/VOC2007/
 OUTPUT_DIR=/tmp/train_val_TFRECORDS/
 python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_train \
    --output_dir=${OUTPUT_DIR}

 mkdir /tmp/test_TFRECORDS
 DATASET_DIR=./VOC2007_DOWNLOADED/test/VOCdevkit/VOC2007/
 OUTPUT_DIR=/tmp/test_TFRECORDS/
 python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_test \
    --output_dir=${OUTPUT_DIR}
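As a quick sanity check, both output directories should now contain sharded TFRecord files. The shard names below follow the --output_name prefix plus a shard index (an assumption about the repo's naming; exact shard counts may differ):

 ls /tmp/train_val_TFRECORDS/
 # voc_2007_train_000.tfrecord  voc_2007_train_001.tfrecord  ...
 ls /tmp/test_TFRECORDS/
 # voc_2007_test_000.tfrecord  voc_2007_test_001.tfrecord  ...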

Run evaluation

 EVAL_DIR=/tmp/SSD_EVAL_LOGS/                    # assumed; not defined in the original notes
 CHECKPOINT_PATH=./checkpoints/ssd_300_vgg.ckpt  # same checkpoint as in the training step below
 DATASET_DIR=/tmp/test_TFRECORDS/
 python eval_ssd_network.py \
    --eval_dir=${EVAL_DIR} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=pascalvoc_2007 \
    --dataset_split_name=test \
    --model_name=ssd_300_vgg \
    --checkpoint_path=${CHECKPOINT_PATH} \
    --batch_size=1
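Note that CHECKPOINT_PATH assumes the pretrained SSD-300 VGG checkpoint has already been unpacked; the repo ships it zipped under checkpoints/, so a sketch of the unpacking (assuming that layout) is:

 cd checkpoints/
 unzip ssd_300_vgg.ckpt.zip
 cd ..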

Training (fine tuning)

 DATASET_DIR=/tmp/train_val_TFRECORDS
 TRAIN_DIR=/tmp/SSD_LOGS/
 CHECKPOINT_PATH=./checkpoints/ssd_300_vgg.ckpt
 python train_ssd_network.py \
    --train_dir=${TRAIN_DIR} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=pascalvoc_2007 \
    --dataset_split_name=train \
    --model_name=ssd_300_vgg \
    --checkpoint_path=${CHECKPOINT_PATH} \
    --save_summaries_secs=60 \
    --save_interval_secs=600 \
    --weight_decay=0.0005 \
    --optimizer=adam \
    --learning_rate=0.001 \
    --batch_size=32

 # Note: for CPU, change DATA_FORMAT from NCHW to NHWC in train_ssd_network.py.
 #       Also, for a medium EC2 instance with 4 GB of memory, reduce batch_size to 1.
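Training progress can be monitored with TensorBoard pointed at TRAIN_DIR (summaries are written every save_summaries_secs seconds); a minimal sketch, assuming TensorBoard ships with the TensorFlow install:

 tensorboard --logdir=/tmp/SSD_LOGS/
 # then browse to http://localhost:6006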