Object Detection

Finally found Google's release of the TensorFlow Object Detection API.

Prerequisites: make sure to do the following:

  1. mkdir /tensorflow_models

  2. cd /tensorflow_models

  3. git clone https://github.com/tensorflow/models.git

  4. Follow the installation instructions, but at a minimum install protobuf-compiler and run the protoc / PYTHONPATH commands below in every new bash shell (a one-liner to make this persistent follows the commands).

    1. Ref: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md

 apt-get install protobuf-compiler
 cd /tensorflow_models/models/research

 protoc object_detection/protos/*.proto --python_out=.
 export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
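
The PYTHONPATH export only applies to the current shell, which is why it has to be repeated. One convenience (my own suggestion, not something the install docs require) is to append the expanded paths to ~/.bashrc so every new bash shell picks them up:

 echo 'export PYTHONPATH=$PYTHONPATH:/tensorflow_models/models/research:/tensorflow_models/models/research/slim' >> ~/.bashrc
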
  5. Test the installation using:

    python object_detection/builders/model_builder_test.py

  6. Download VOC2007 (smaller in size) and then run create_pascal_tf_record.py to generate TFRecords:

 cd /tf_files/SSD-Tensorflow
 wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
 tar xvf VOCtrainval_06-Nov-2007.tar


 cd /tensorflow_models/models/object_detection
 python create_pascal_tf_record.py --data_dir=/tf_files/SSD-Tensorflow/VOCdevkit \
 --year=VOC2007 --set=train --output_path=/tmp/pascal_voc2007_train.record

 python create_pascal_tf_record.py --data_dir=/tf_files/SSD-Tensorflow/VOCdevkit \
 --year=VOC2007 --set=val --output_path=/tmp/pascal_voc2007_val.record


 # I customized create_tf_record.py and dataset_util.py to work on my custom dataset
 # This is needed to customize the folder tag, ignore the 'difficult' tag, provide a new label_map_path, etc.
 # Manual process: create a new label_map_path file pertaining to the new classification labels (format sketched below)
 #
 # cp /tf_files/SSD-Tensorflow/MYDEVKIT_2/VOC2007/mydevkit_label_map.pbtxt /tensorflow_models/models/research/object_detection/data/mydevkit_label_map.pbtxt
   python create_pascal_tf_record.py \
   --data_dir=/tf_files/SSD-Tensorflow/MYDEVKIT_2 \
   --year=VOC2007 \
   --set=train \
   --output_path=/tmp/mydev_train.record \
   --label_map_path=/tensorflow_models/models/research/object_detection/data/mydevkit_label_map.pbtxt \
   --folder_tag=VOC2007 \
   --ignore_difficult_tags=True

   python create_pascal_tf_record.py \
  --data_dir=/tf_files/SSD-Tensorflow/MYDEVKIT_2 \
  --year=VOC2007 \
  --set=val \
  --output_path=/tmp/mydev_val.record \
  --label_map_path=/tensorflow_models/models/research/object_detection/data/mydevkit_label_map.pbtxt \
  --folder_tag=VOC2007 \
  --ignore_difficult_tags=True
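
For reference, the file passed via --label_map_path is a plain-text protobuf that maps integer class ids to label names; ids start at 1 (id 0 is reserved for the background class). A minimal sketch with placeholder class names (the real file must list the labels used in the custom dataset's annotations):

 item {
   id: 1
   name: 'example_class_a'   # placeholder; use the dataset's actual label
 }
 item {
   id: 2
   name: 'example_class_b'
 }
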
  7. Arrange the directory structure and create the export variables (a sketch of the resulting layout follows the commands):

     cd /tensorflow_models/models
     protoc object_detection/protos/*.proto --python_out=.
     export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
    
     PATH_TO_EVAL_DIR=/tensorflow_models/models/research/object_detection/models/model/eval
     PATH_TO_TRAIN_DIR=/tensorflow_models/models/research/object_detection/models/model/train
     PATH_TO_YOUR_PIPELINE_CONFIG=/tensorflow_models/models/research/object_detection/samples/configs/ssd_mobilenet_v1_voc2007.config
     PATH_TO_MODEL_DIRECTORY=/tensorflow_models/models/research/object_detection/models/model
    
     Note: PATH_TO_MODEL_DIRECTORY is actually where the checkpoints will get generated.
           Shall we change the name to PATH_TO_GENERATED_CHECKPOINT?
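
For reference, the directory layout implied by the variables above looks roughly like this (an assumption based on those paths; train/ and eval/ may need to be created by hand before the first run), all under /tensorflow_models/models/research:

 object_detection/
   models/
     model/
       train/                             # PATH_TO_TRAIN_DIR: checkpoints written during training
       eval/                              # PATH_TO_EVAL_DIR: eval events read by TensorBoard
   samples/
     configs/
       ssd_mobilenet_v1_voc2007.config    # PATH_TO_YOUR_PIPELINE_CONFIG
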
  8. Generate the TF records for the custom dataset using the same create_pascal_tf_record.py invocations shown in step 6; the pipeline config sketch below shows where the resulting record paths are referenced.

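The pipeline config has to point at the records and label map generated above. A rough sketch of the relevant sections (field names follow the object_detection sample configs; the record and label map paths are the ones produced above, and everything else in the sample config stays as shipped):

 # model { ssd { num_classes: ... } } must match the number of entries in the label map
 train_input_reader {
   tf_record_input_reader {
     input_path: "/tmp/mydev_train.record"
   }
   label_map_path: "/tensorflow_models/models/research/object_detection/data/mydevkit_label_map.pbtxt"
 }
 eval_input_reader {
   tf_record_input_reader {
     input_path: "/tmp/mydev_val.record"
   }
   label_map_path: "/tensorflow_models/models/research/object_detection/data/mydevkit_label_map.pbtxt"
 }
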
Run training

 # From the tensorflow/models/ directory
 python object_detection/train.py \
     --logtostderr \
     --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \
     --train_dir=${PATH_TO_TRAIN_DIR}

Run eval

    # From the tensorflow/models/ directory
    python object_detection/eval.py \
        --logtostderr \
        --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \
        --checkpoint_dir=${PATH_TO_TRAIN_DIR} \
        --eval_dir=${PATH_TO_EVAL_DIR}

Run TensorBoard

 PATH_TO_MODEL_DIRECTORY=/tensorflow_models/models/research/object_detection/models/model
 tensorboard --logdir=${PATH_TO_MODEL_DIRECTORY}

Exporting a model for inference

    # From tensorflow/models
    python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path ${PATH_TO_YOUR_PIPELINE_CONFIG} \
    --checkpoint_path object_detection/models/model/train/model.ckpt-16069 \
    --inference_graph_path object_detection/models/model/train/output_inference_graph.pb

Running inference on new files

# After exporting your checkpoint as a graph,
# cd into deep_learning_repo/object_detection
# place your test image inside test_images directory (For example: pic1.jpg)
python inference.py --path_to_ckpt=models/model/train/output_inference_graph.pb
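
inference.py above is a custom script; for reference, here is a minimal sketch of the usual TF1 pattern such a script follows (an illustration, not the actual contents of that script): load the frozen graph, then feed an image to the standard tensors that export_inference_graph.py exposes.

 import numpy as np
 import tensorflow as tf
 from PIL import Image

 PATH_TO_CKPT = 'models/model/train/output_inference_graph.pb'  # frozen graph exported above

 # Load the frozen detection graph into memory
 detection_graph = tf.Graph()
 with detection_graph.as_default():
     od_graph_def = tf.GraphDef()
     with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
         od_graph_def.ParseFromString(fid.read())
         tf.import_graph_def(od_graph_def, name='')

 # Run detection on a single test image (uint8 tensor of shape [1, height, width, 3])
 image_np = np.expand_dims(np.array(Image.open('test_images/pic1.jpg')), axis=0)

 with tf.Session(graph=detection_graph) as sess:
     boxes, scores, classes, num = sess.run(
         [detection_graph.get_tensor_by_name(n + ':0')
          for n in ('detection_boxes', 'detection_scores', 'detection_classes', 'num_detections')],
         feed_dict={detection_graph.get_tensor_by_name('image_tensor:0'): image_np})
     print('Top scores:', scores[0][:5], 'classes:', classes[0][:5])
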

Controlling GPU memory

# In tensorflow_models/models/object_detection/trainer.py

    #gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5, allow_growth=True)
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.8)
    session_config = tf.ConfigProto(allow_soft_placement=True,
                                    log_device_placement=False,
                                    gpu_options=gpu_options)

Ref:

https://github.com/tensorflow/models/tree/master/object_detection
https://github.com/tensorflow/models.git
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md
https://stackoverflow.com/questions/34199233/how-to-prevent-tensorflow-from-allocating-the-totality-of-a-gpu-memory