# SSD work

Replicating: SSD from <https://github.com/balancap/SSD-Tensorflow>

While trying to run the minimal SSD example described on the page, we found that a fundamental file is missing. In the **notebooks/** directory,

```
__init__.py
```

is missing. This causes an error when the IPython notebook ssd\_notebook.ipynb is executed: without this init file in place, "from notebooks import visualization" fails with a "module not found" error.

Solution: I copied this init file from the nets/ directory to the notebooks/ directory.
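For example, from the repository root (an empty init file should work just as well, since Python only needs a package marker):

```bash
cp nets/__init__.py notebooks/__init__.py
# or simply create an empty one:
# touch notebooks/__init__.py
```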

## Dataset

1. Create a directory called VOC2007\_DOWNLOADED/ and then make two sub-directories, 'train\_val' and 'test'
2. Invoke the following to download the tarballs:
3. ```
   cd VOC2007_DOWNLOADED/train_val
   wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar

   cd ../test
   wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar

   # not needed (contains mainly .m files)
   # wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar
   ```
4. Then untar all these files.
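   For example, a minimal sketch (assuming you start from the directory that contains VOC2007\_DOWNLOADED/; each tar unpacks into a VOCdevkit/VOC2007/ tree):

   ```bash
   cd VOC2007_DOWNLOADED/train_val && tar -xvf VOCtrainval_06-Nov-2007.tar
   cd ../test && tar -xvf VOCtest_06-Nov-2007.tar
   ```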
5. ~~Go to the main dir /tf\_files/SSD-Tensorflow, make another directory called VOC2007, and copy over the JPEGImages and Annotations files to VOC2007/~~
6. ```
   # NOT NEEDED ANY MORE:
   # cp -a VOC2007_DOWNLOADED/VOCdevkit/VOC2007/JPEGImages VOC2007/
   # cp -a VOC2007_DOWNLOADED/VOCdevkit/VOC2007/Annotations VOC2007/

   # train_val and test are now extracted separately, and DATASET_DIR is modified to
   # point to the correct location, which took care of the data-copying problem.
   ```
7. Next run the conversion script to convert the images and Annotations to TFRecord format. Keep the train and test sets in separate directories, '/tmp/train\_val\_TFRECORDS' and '/tmp/test\_TFRECORDS':

```bash
 mkdir /tmp/train_val_TFRECORDS
 DATASET_DIR=./VOC2007_DOWNLOADED/train_val/VOCdevkit/VOC2007/
 OUTPUT_DIR=/tmp/train_val_TFRECORDS/
 python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_train \
    --output_dir=${OUTPUT_DIR}

 mkdir /tmp/test_TFRECORDS
 DATASET_DIR=./VOC2007_DOWNLOADED/test/VOCdevkit/VOC2007/
 OUTPUT_DIR=/tmp/test_TFRECORDS/
 python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_test \
    --output_dir=${OUTPUT_DIR}
```
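As a quick sanity check (the exact shard file names are determined by the conversion script), both output directories should now contain .tfrecord files:

```bash
ls -lh /tmp/train_val_TFRECORDS/ | head
ls -lh /tmp/test_TFRECORDS/ | head
```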

Run evaluation

```bash
 EVAL_DIR=/tmp/SSD_EVAL_LOGS/                     # example path; any writable log directory works
 CHECKPOINT_PATH=./checkpoints/ssd_300_vgg.ckpt   # same checkpoint used for fine-tuning below
 DATASET_DIR=/tmp/test_TFRECORDS/
 python eval_ssd_network.py --eval_dir=${EVAL_DIR} --dataset_dir=${DATASET_DIR} \
    --dataset_name=pascalvoc_2007 --dataset_split_name=test --model_name=ssd_300_vgg \
    --checkpoint_path=${CHECKPOINT_PATH} --batch_size=1
```
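Note: the repo ships the pretrained SSD-300 VGG checkpoint zipped under checkpoints/, so if ssd\_300\_vgg.ckpt does not exist yet, unzip it first (the zip name below is taken from the repo's checkpoints/ listing; adjust if it differs):

```bash
unzip checkpoints/ssd_300_vgg.ckpt.zip -d checkpoints/
```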

Training (fine-tuning)

```bash
 DATASET_DIR=/tmp/train_val_TFRECORDS
 TRAIN_DIR=/tmp/SSD_LOGS/
 CHECKPOINT_PATH=./checkpoints/ssd_300_vgg.ckpt
 python train_ssd_network.py \
    --train_dir=${TRAIN_DIR} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=pascalvoc_2007 \
    --dataset_split_name=train \
    --model_name=ssd_300_vgg \
    --checkpoint_path=${CHECKPOINT_PATH} \
    --save_summaries_secs=60 \
    --save_interval_secs=600 \
    --weight_decay=0.0005 \
    --optimizer=adam \
    --learning_rate=0.001 \
    --batch_size=32

 # Note: for CPU, change DATA_FORMAT to NHWC instead of NCHW in train_ssd_network.py
 #       Also, for a medium EC2 instance with 4 GB of memory, reduce batch_size to 1
```
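Training progress can be monitored by pointing TensorBoard at TRAIN\_DIR (summaries are written every save\_summaries\_secs seconds):

```bash
tensorboard --logdir=/tmp/SSD_LOGS/
```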


