TitBits
Unix utility commands
# tar and compress to stdout and then ssh and untar it to the new machine
$ tar czf - <files> | ssh user@host "cd /wherever && tar xvzf -"
Alternatively, use the -C (compression) option with ssh and drop the 'z' option from tar (example below)
rsync or sshfs can be used as well
More info:
https://unix.stackexchange.com/questions/10026/how-can-i-best-copy-large-numbers-of-small-files-over-scp
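# A sketch of the alternative: let ssh -C do the compression and drop 'z' from tar
$ tar cf - <files> | ssh -C user@host "cd /wherever && tar xvf -"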
# To find the UUID of the embedded device like UDOO x86
sudo dmidecode | grep UUID
# Regular expressions
https://www.gnu.org/software/sed/manual/html_node/Regular-Expressions.html
To put a comma after each number that is followed by one or more spaces, e.g. to turn
a = [[2 345 657]
[34 567 7890]]
into comma-separated rows, capture the digit and append ", " in a single substitution:
%s/\([0-9]\) \+/\1, /g
# to find the IP address in ubuntu
hostname -I
# to find the IP address on macOS
ifconfig | grep -w inet | grep -v "127.0.0.1" | awk '{print $2}'
System Admin commands
# To find which processes are using/blocking sshd. Sometimes, we might need to kill those
sudo fuser -v 22/tcp | xargs echo
sudo netstat -tulpn | grep 22
sudo lsof -ti
# To find outgoing (egress) ssh connections
sudo lsof -i -n | egrep '\<ssh\>'
# To find incoming ssh connections (sshd)
sudo lsof -i -n | egrep '\<sshd\>'
sudo nmap -A -T4 localhost
# firewall status
sudo ufw status
To give a particular user sudo privilege to run a particular command
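# A minimal sketch using the sudoers syntax (edit with visudo; 'agman' and /usr/sbin/reboot
# are just example user/command here):
sudo visudo
# add a line like:
agman ALL=(ALL) NOPASSWD: /usr/sbin/reboot
# 'agman' can then run only that command via sudo:
sudo /usr/sbin/reboot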
Common apt-get commands
# list
sudo apt list --installed | grep -i <package name>
sudo dpkg -l | grep -i "tiscamera"
# remove
sudo apt-get remove --purge tiscamera
sudo apt-get autoremove
or generically speaking
sudo apt-get remove --purge <application>
sudo apt-get autoremove
another example:
sudo apt-get purge awscli && sudo pip install awscli
----------------------------------------------
pip - installing a particular package version
----------------------------------------------
pip install -r requirements.txt --ignore-installed
For example, if requirements.txt contains:
urllib3==1.23
and version 1.24 is already installed, this will reinstall version 1.23
To fix broken apt-get
# when you get errors like:
dpkg: error processing archive /var/cache/apt/archives/kde-config-telepathy-accounts_4%3a15.12.3-0ubuntu1_amd64.deb (--unpack):
trying to overwrite '/usr/share/accounts/services/google-im.service', which is also in package account-plugin-google 0.12+16.04.20160126-0ubuntu1
# do the following
sudo dpkg -r account-plugin-google unity-scope-gdrive
sudo apt-get -f install
To find the hard drives in UDOO x86, create a filesystem and mount it
sudo lsblk -fm
# This will give the device "sda" and the size associated with it, but will show
# the fstype as empty
sudo fdisk -l
# to create a filesystem (ext4)
sudo mkfs.ext4 /dev/sda
# then check that the fstype got created properly by typing: sudo lsblk -fm
# mount the disk
sudo mount /dev/sda /mnt
# To automount the disk on system reboot
1. sudo cp /etc/fstab /etc/fstab.orig
2. sudo blkid # this will give the UUID of the partition you want to automount
3. sudo vim /etc/fstab # edit the fstab
Example:
UUID=<from lsblk or blkid> /mnt ext4 defaults 0 2
Ref: https://help.ubuntu.com/community/Fstab
4. sudo reboot
5. sudo mkdir /mnt/work
6. sudo chown -R agman:agman /mnt/work
# Then user 'agman' can work on the /mnt/work filesystem
To activate /etc/rc.local in ubuntu 18.04
#!/bin/sh -e
#
# rc.local
# This script is executed at the end of each multiuser runlevel
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
#
#========================================================================================
# Amit - In ubuntu 18.04 this file does not exist. Just create it and make it executable
# - The sudo systemctl status rc-local will then pick it up
# Ref - https://askubuntu.com/questions/886620/how-can-i-execute-command-on-startup-rc-local-alternative-on-ubuntu-16-10
# - Enable the service, by: (this is probably not a service to be enabled)
# # sudo systemctl enable rc-local
# - Start service and check status
# sudo systemctl start rc-local.service
# sudo systemctl status rc-local.service
#========================================================================================
#/usr/bin/python /home/agman/AgShift_workarea/GIT_STUFF/qtCamAgShift/py/webserver_v2.py &
/usr/bin/python /home/agman/AgShift_workarea/py/webserver_v2.py &
exit 0
Then do the following
sudo chmod +x /etc/rc.local
sudo systemctl status rc-local.service
sudo systemctl start rc-local.service
sudo systemctl stop rc-local.service
To check if an image is JPEG
If you need more than looking at extension, one way would be to read the JPEG header, and check that it matches valid data. The format for this is:
Start Marker | JFIF Marker | Header Length | Identifier
0xff, 0xd8 | 0xff, 0xe0 | 2-bytes | "JFIF\0"
so a quick recogniser would be:
def is_jpg(filename):
    # read the first 11 bytes and compare against the JFIF header
    with open(filename, 'rb') as f:
        data = f.read(11)
    if data[:4] != b'\xff\xd8\xff\xe0':
        return False
    if data[6:] != b'JFIF\0':
        return False
    return True
However this won't catch any bad data in the body. If you want a more robust check, you could try loading it with PIL, e.g.:
from PIL import Image

def is_jpg(filename):
    try:
        i = Image.open(filename)
        return i.format == 'JPEG'
    except IOError:
        return False
Bytes to string (encode/decode JPEG with TensorFlow 1.x):
import tensorflow as tf
# encoded_jpg holds the raw JPEG bytes (e.g. read from a file opened in 'rb' mode)
decoded_image = tf.image.decode_jpeg(encoded_jpg)
decoded_image_resized = tf.image.resize_images(decoded_image, [300, 300])  # preprocess(decoded_image) # returns float32
decoded_image_resized = tf.cast(decoded_image_resized, tf.uint8)
encoded_jpg = tf.image.encode_jpeg(decoded_image_resized)  # expects uint8
encoded_jpg = tf.Session().run(encoded_jpg)
print('Amit - type encoded jpeg 4: ', type(encoded_jpg))
CRON
If cron is not installed in docker do this:
apt-get install -y cron
Cron running here:
> crontab -e
# m h dom mon dow command
# Amit - Run the image upload stat script
# runs the script every day at 04:41
41 4 * * * /bin/bash /tf_files/PIPELINE/image_upload_stat.sh
# Example to cleanup some files on a regular basis
# Using a local cache is nice, it will speed up things and reduce cost,
# but s3fs doesn't manage the size of this local cache folder.
# To limit the size of our local cache folder, we could create a cron job
# to periodically clean up all files that were not accessed in the last 7 days,
# or not modified during the last month by adding following line to root's crontab:
#@daily find /tmp_bucket -atime +7 -mtime +30 -exec rm {} \;
# image_upload_stat.sh runs the jupyter nbconvert command
# This will execute the .ipynb file and send an email to the recipients about the image upload stat
#!/bin/bash
# http://nbconvert.readthedocs.io/en/latest/execute_api.html
/bin/date > /tf_files/PIPELINE/logs/image_upload_stat.log 2>&1
/usr/local/bin/jupyter nbconvert --to notebook --execute /tf_files/PIPELINE/dbUtils_ImageUploadStats-Generic.ipynb >> /tf_files/PIPELINE/logs/image_upload_stat.log 2>&1
# To check cron is running:
service cron status
# To run cron
service cron start
# To start cron at boot time. This links up /etc/init.d/cron to the /etc/rc.<>d/
update-rc.d cron defaults
Python - glob files from dir and extract file_prefix
import glob
import os
DIR2 = '' # some dir path containing .xml files
annotation_list = glob.glob(DIR2+'/*.xml')
annotation_files = [os.path.basename(os.path.normpath(afile)) for afile in annotation_list]
print(annotation_files)
annotation_file_prefix = [os.path.splitext(os.path.normpath(v))[0] for v in annotation_files]
print(annotation_file_prefix)
Python - utility commands
# To check where apt-get has installed the package, run
dpkg -L <package_name>
example: dpkg -L python-serial
# To set a particular version of python as default (this will set python2.7 as default)
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.7 10
sudo update-alternatives --config python
Ref: http://web.mit.edu/6.00/www/handouts/pybuntu.html
Python - multiprocessing
# Example (assumes elsewhere: import time, import multiprocessing as mp, a configured
# logger_fireDL, the fetchDL() worker and its arguments, NUM_JOBS and inputDataId):
if DEEP_LEARNING_PREDICT_ON_MULTI == 1:
    logger_fireDL.info('Starting multiprocessing mp to DL call fetchDL...')
    start = time.time()
    jobs = []
    for j in range(NUM_JOBS):
        logger_fireDL.info('Processing inputDataId: {}'.format(inputDataId))
        p = mp.Process(target=fetchDL, args=(accessToken, inputDataId, enclosureId, licenseKey, commodity, variety, url_top, url_bot, checkerboard_url_top, checkerboard_url_bot, backend_url))
        inputDataId = str(int(inputDataId) + 1)
        jobs.append(p)
        p.start()
        logger_fireDL.info('Appending job and starting process p...')
    for j in jobs:
        j.join()
        print('%s.exitcode = %s' % (j.name, j.exitcode))
        logger_fireDL.info('job: {}, exitcode: {}'.format(j.name, j.exitcode))
Git utilities
# To add a user to the git global config, before commit and push.
# This will create the profile in ~/.gitconfig
git config --global user.email "develamit@gmail.com"
git config --global user.name "develamit"
# To add a user locally - this will create the user profile in .git/config
git config user.name develamit
git config user.email develamit@gmail.com
# For git submodule:
https://github.com/blog/2104-working-with-submodules
# For git pull request
https://help.github.com/articles/creating-a-pull-request/
You can store your credentials using the following command, so that it does not ask for the
username and password every time
git config credential.helper store
git push http://example.com/repo.git
Username: <type your username>
Password: <type your password>
How to block comment in bash script
#!/bin/bash
echo before comment
: <<'END'
bla bla
blurfl
END
echo after comment
BASH SCRIPTING
# To replace strings in place in all files:
cd /tmp/s3_dataset_download_driscollsvalEncoded/1542052183359val/strawberry_DV/JPEGImages
for i in `ls *`; do sed -i'' 's/<height>3024/<height>2048/' $i; done
for i in `ls *`; do sed -i'' 's/<width>4032/<width>3072/' $i; done
AWS CLI
#Reference of aws cli for s3:
http://docs.aws.amazon.com/cli/latest/reference/s3/
# Install aws-cli (avoid the option --user after --upgrade, since that installs locally)
pip install --upgrade awscli
# Configure awscli
aws configure
# This creates the ~/.aws/credentials and ~/.aws/config files with the access key, secret key and region (example layout below)
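# For reference, the two files look roughly like this (values below are placeholders):
# ~/.aws/credentials
[default]
aws_access_key_id = <ACCESS_KEY_ID>
aws_secret_access_key = <SECRET_ACCESS_KEY>
# ~/.aws/config
[default]
region = us-west-2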
# Commands
aws s3 ls s3://agskore/
aws s3 rm s3://agskore/Users-Clone-Amit_3-Aug10/ --recursive
aws s3 rm s3://agskore/Users-Clone-Amit_3-Aug10/ --recursive --dryrun
aws s3 cp s3://agskore/Users-Clone-Aug09/IOS/Dev/D5633959-3631-4A01-A9FC-783F2C540E03/2017-08-01/JPEGImages/pic_1501605999845.jpg \
s3://agskore/Users-Clone-Amit_3-Aug10/IOS/Dev/test_device_id/test_date_1/JPEGImages/new_name_1.jpg
aws s3 cp s3://agskore/Users-Clone-Aug09 s3://agskore/Users-Clone-Aug09-Amit
aws s3 sync s3://agskore/Users-Clone-Aug09/IOS/Dev/D5633959-3631-4A01-A9FC-783F2C540E03/2017-08-01/JPEGImages/ s3://agskore/Users-Clone-Amit-Aug10/IOS/Dev/test_device_id/test_date/
# for copying
1. First create a bucket in S3 (example: agskore-backup-01-13-2018)
2. Give it public permission to read and write
3. aws s3 cp s3://agskore/ s3://agskore-backup-01-13-2018/ --exclude "logs/*" --recursive
4. Then go inside the agskore-backup-01-13-2018 bucket and delete the unnecessary objects
# for removing with excluding certain folders
aws s3 rm s3://agskore/Users/IOS/rjo/ --exclude "*/strawberry/*" \
--exclude "*/blueberry/*" \
--exclude "*/raspberry/*" \
--recursive
#=======================================================
# Executing aws-cli command from Python script
#=======================================================
import subprocess
src_file = 's3://agskore/' + k
dst_file = 's3://agskore/' + v
command = ['aws', 's3', 'cp', src_file, dst_file]
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, err = p.communicate()
#===================================================================
# changing the root volume to persist after the instance is running
#===================================================================
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --block-device-mappings file://mapping.json
and mapping.json file looks like:
[
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": false
}
}
]
More info:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/terminating-instances.html#preserving-volumes-on-termination
#===================================================================
# More AWS CLI commands
#===================================================================
# to find the group, which the instance belongs to
aws ec2 describe-instance-attribute --instance-id i-0499e303627a360d0 --attribute groupSet
# to describe the details of the instance
aws ec2 describe-instances --instance-ids i-0499e303627a360d0
# to describe a particular security group
aws ec2 describe-security-groups --group-names launch-wizard-3
# query the security group in general (did not understand this fully)
ec2-describe-group --aws-access-key AKIAIPZL6WZS6OMG3BXQ --aws-secret-key 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
AWS Check the 'pending validation' state in Certificate Manager
# On an ec2 machine (ubuntu) type
> dig agshifthydrastaging.com NS
See if it prints out Name servers or not
For example:
ubuntu@ip-172-31-39-113:~$ dig agshifthydra.com NS
; <<>> DiG 9.10.3-P4-Ubuntu <<>> agshifthydra.com NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19046
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;agshifthydra.com. IN NS
;; ANSWER SECTION:
agshifthydra.com. 60 IN NS ns-756.awsdns-30.net.
agshifthydra.com. 60 IN NS ns-1489.awsdns-58.org.
agshifthydra.com. 60 IN NS ns-1776.awsdns-30.co.uk.
agshifthydra.com. 60 IN NS ns-86.awsdns-10.com.
;; Query time: 110 msec
;; SERVER: 172.31.0.2#53(172.31.0.2)
;; WHEN: Mon Apr 15 16:07:15 UTC 2019
;; MSG SIZE rcvd: 181
Since agshifthydra.com returns NS records when this command is run, the DNS setup is in place
and the SSL certificate will get issued
AWS MOUNT S3 to EC2
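# A minimal s3fs sketch (assumes the s3fs-fuse package and the 'agskore' bucket; the mount point
# and the credentials values are placeholders):
sudo apt-get install -y s3fs
echo <ACCESS_KEY_ID>:<SECRET_ACCESS_KEY> > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
sudo mkdir /mnt/s3agskore
sudo s3fs agskore /mnt/s3agskore -o passwd_file=${HOME}/.passwd-s3fs -o allow_other -o use_cache=/tmp
# (use_cache enables the local cache folder mentioned in the cron cleanup note above)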
AWS mount a volume to a running instance
# First login to AWS and create and attach a new volume.
# The volume needs to be in the same zone as the ec2 instance to which it
# needs to be attached
# Check "ec2_dev_vol" (vol-0855b66badc8b39cf) created on us-west-2c
# login to your ec2 instance and list the available disks using the command
lsblk
# check if the volume has any data
sudo file -s /dev/xvdf
# If the output shows "/dev/xvdf: data", the volume is empty (no filesystem yet)
# make file system on the new volume
sudo mkfs -t ext4 /dev/xvdf
# create a mount point
sudo mkdir /volxvdf
# mount the new volume
sudo mount /dev/xvdf /volxvdf
# check the new volume
cd /volxvdf/
df -h .
# auto mount the new volume when the machine reboots
sudo cp /etc/fstab /etc/fstab.BACKUP
sudo vim /etc/fstab
Enter the line:
/dev/xvdf /volxvdf ext4 defaults,nofail
# Check if fstab entry is correct. This command should not give any error
sudo mount -a
# Change ownership of the mounted directory
cd /
sudo chown -R ubuntu:ubuntu /volxvdf
BOTO
Ref: https://stackoverflow.com/questions/31486828/running-aws-cli-through-python-returns-a-sh-1-aws-not-found-error
Uploading to GDrive
1. Download 'gdrive' executable
https://github.com/prasmussen/gdrive (use gdrive-linux-x64 to install in our amazon aws machine)
2. The first time it asks for a verification code. Subsequent runs do not need the
verification code any more
# The following gives a glimpse of what happens when the ./gdrive command is entered for the first time
root@ip-172-31-31-190:/home/ubuntu# ./gdrive upload test.txt
Authentication needed
Go to the following url in your browser:
https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=367116221053-7n0vf5akeru7on6o2fjinrecpdoe99eg.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=state
Enter verification code: 4/atRrIv-zAbVl45MVqDZU0fu6TaaD1ert9ditez9LWOA
Uploading test.txt
Uploaded 0B0kI89g4d-mVY3VDbkFPUlJ3Tmc at 9.0 B/s, total 9.0 B
GDrive commands
# to find help for a command
/tf_files/amit/deep_learning_repo_1/utils/gdrive help download
/tf_files/amit/deep_learning_repo_1/utils/gdrive help list
# information about a folder
/tf_files/amit/deep_learning_repo_1/utils/gdrive info 0B8Ho4sT4hSdgdklhREhhbDZhWUk
# to list files of type folders only
/tf_files/amit/deep_learning_repo_1/utils/gdrive list --query "trashed=false \
and (mimeType contains 'application/vnd.google-apps.folder') \
and '0B0kI89g4d-mVZXhPT096dmlqdUE' in parents" --absolute
/tf_files/amit/deep_learning_repo_1/utils/gdrive list --query "trashed=false and (mimeType contains 'application/vnd.google-apps.folder') and '0B0kI89g4d-mVZXhPT096dmlqdUE' in parents" --absolute
Id Name Type Size Created
0B-v0jjI4PAJ5S3lXd2tOdEZEaG8 AgShiftCrowdSourcin...images/Demie Cheng dir 2017-06-14 05:07:01
0B8Ho4sT4hSdgVUdfbTI4TUd0TWM AgShiftCrowdSourcin...mages/Jesus Medina dir 2017-06-13 11:14:39
# to filter out all the folders (ids) which have 'AgShiftCrowdSourcing' in their name
/tf_files/amit/deep_learning_repo_1/utils# ./gdrive list --query "trashed=false \
and (mimeType contains 'application/vnd.google-apps.folder')" \
--absolute -m 100 | grep 'AgShiftCrowdSourcing'
# list files of image type and whose parent has the '0B0k...' id
/tf_files/amit/deep_learning_repo_1/utils/gdrive list --query "trashed=false \
and (mimeType contains 'image/') and '0B0kI89g4d-mVZXhPT096dmlqdUE' in parents"
# Another listing variation (use 'absolute' as an option to get the absolute path to the images)
# Use the -m 100 option to display 100 items. (the default is to display only 25)
/tf_files/amit/deep_learning_repo_1/utils# ./gdrive list --query "trashed=false \
and (mimeType contains 'image/') and \
'0B0kI89g4d-mVZXhPT096dmlqdUE' in parents" --absolute
# recursively downloading files from that same parent folder
/tf_files/amit/deep_learning_repo_1/utils/gdrive download 0B0kI89g4d-mVZXhPT096dmlqdUE \
--recursive --force
/tf_files/amit/deep_learning_repo_1/utils/gdrive download 0B0kI89g4d-mVZXhPT096dmlqdUE --recursive --force
# to upload files to G-drive:
/tf_files/amit/deep_learning_repo_v1/utils/gdrive upload --parent 1Vkmtr__pBy25epDIalhAhrC0ew_H70kX /tf_files/tmp/straw_164_bot.tar.gz
# to recursively upload frozen directories - backup
/tf_files/amit/deep_learning_repo_v1/utils/gdrive upload --parent 1U9lHiMduEHeTV7sdNV1peiJOtpNZPOhV --recursive 25
nvidia-smi commands:
# List GPUs
nvidia-smi -L
# To see the status every 1 second
nvidia-smi -l 1
CURL COMMANDS:
curl -X POST -d '{"to":"Support"}' -H "Content-Type: application/json" http://localhost:8090/api/misc/contact-us -v
curl -X POST -H 'Content-Type: application/json' 54.245.190.37:39179 -d '{"inputDataId":321234, "accessToken":"23dw34"}' -vvvv
curl -d '{"email":"kk1234@gmail.com", "name":"kk1234"}' -X POST -H "Content-Type: application/json" http://ss.agshiftdata.com/api/auth/send-code
MongoDB commands:
# To find using a regex
db.agshift_user.find({"name.last": { $regex: /^bh/i }})
# to find and update
db.agshift_user.update({"email":"cootdemo123@gmail.com"},{$set:{"email":"cootdemo@gmail.com", "login.email1":"cootdemo@gmail.com", "login.email2":"cootdemo@gmail.com"}})
# to update only 1 field of a collection
db.agshift_version.update( {}, {$set:{"agsky":"1.0.26"} } )
S3
# To make only 1 folder publicly readable but keep the rest protected and private,
check out S3/agskore : permissions->bucket policy.
The first statement grants full access only to the listed IAM principals.
The second statement opens only "OUTPUTImages/" to public read.
{
"Version": "2012-10-17",
"Id": "bucketPolicy",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::705395670584:role/Cognito_agshiftprodUnauth_Role",
"arn:aws:iam::705395670584:user/hubino"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::agskore",
"arn:aws:s3:::agskore/*"
]
},
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::agskore/Users/*/*/*/*/OUTPUTImages/*"
}
]
}
Upload Images to S3 from client - secure way without exposing credentials
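# One common approach is S3 pre-signed URLs: the backend (which holds the credentials) hands the
# client a short-lived URL and the client uploads/downloads against it without any AWS keys.
# A sketch with the CLI for the download side (object key is a placeholder):
aws s3 presign s3://agskore/Users/some/path/OUTPUTImages/example.jpg --expires-in 3600
# For uploads, the SDK equivalent (a pre-signed PUT/POST generated server-side) is the analogue;
# another option is the Cognito unauthenticated role already referenced in the bucket policy above.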
GCloud Commands
gcloud compute --project -dev-dl ssh --zone us-west1-b dldev
gcloud compute --project -dev-dl ssh --zone us-west1-b dldev3
Jenkins on kubernetes
https://www.blazemeter.com/blog/how-to-setup-scalable-jenkins-on-top-of-a-kubernetes-cluster
https://dzone.com/articles/how-to-setup-scalable-jenkins-on-top-of-a-kubernet
Best perhaps:
https://kumorilabs.com/blog/k8s-6-integrating-jenkins-kubernetes/
Jenkins images repo:
https://hub.docker.com/r/jenkins/jenkins/
TLS 1.2 (the successor to SSL 3.0)
To learn about using Let's Encrypt to setup TLS1.2
openssl s_client -connect google.com:443 -tls1_2
apt install nmap
nmap --script ssl-enum-ciphers -p 443 www.google.com
SSL/TLS - create own certificate - letsencrypt
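# A minimal certbot sketch (assumes nginx is installed and DNS already points at this machine;
# the domain is a placeholder, and the plugin package name varies by Ubuntu release):
sudo apt-get install -y certbot python3-certbot-nginx
sudo certbot --nginx -d example.agshiftdata.com
# test automatic renewal
sudo certbot renew --dry-run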
SSE4_2
lshw -class processor | grep "sse4"
cat /proc/cpuinfo
should show sse4_2 in the flags field
# To find the uuid of the ubox x86 pc:
sudo lshw | grep uuid | awk -F "=" '{print $6}'
Remote Desktop Solutions - Ubuntu
Install Fonts in ubuntu machine
apt-get install -y ttf-mscorefonts-installer
This installs the fonts in "/usr/share/fonts/truetype/msttcorefonts/"
SSH TUNNELING
On the host (example: Driscoll's Udoo Analyzer PC)
1. ssh-keygen (not mandatory, but important to ssh to ubox without password. Use the private key instead)
This will generate id_rsa and id_rsa.pub key files in ~/.ssh directory
Copy over the id_rsa (private key) to the Mac (client machine from where you would like to connect)
Save it on the Mac as ~/.ssh/id_rsa_olam_ubox
chmod 400 ~/.ssh/id_rsa_olam_ubox
Creating a new user in amazon aws instance: (Not mandatory)
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html
Id: raspitunnel or uboxuser12018
Pass: r^sp1tunnel or uboxuser12018
2. Install openssh-server openssh-client and autossh on raspberry pi or ubox
This is important; otherwise the 'sshd' daemon won't start on the ubox and port 22 won't
listen for incoming connections
# remove any previously installed clients and servers
sudo apt-get remove --purge openssh-client autossh openssh-server
sudo apt-get install -y openssh-server openssh-client
sudo apt-get install -y autossh
=========================================
# Client side sshd_config configuration
=========================================
Next, on the ubox. Remember we are setting up the reverse ssh tunnel from the ubox (where the autossh process runs)
sudo vim /etc/ssh/sshd_config
# To make the authentication id_rsa key based
PasswordAuthentication no
# The TCPKeepAlive option, when set to 'yes', sends empty ACK packets over the TCP layer to see
# if the client connections are alive. This is spoofable, and the firewall on the other
# side may decide to drop these empty TCP packets anyway. So set it to 'no'
TCPKeepAlive no # important to keep the client connections alive
# Make the next two = 0 so that the client connections are not kicked out
ClientAliveInterval 0
ClientAliveCountMax 0
==========================================
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys
sudo service ssh restart
CAUTION: If by mistake we cat the 'private' key into authorized_keys instead of the 'public' key,
autossh won't be able to start the ssh process
3. Open an ingress port in AWS EC2 instance. check ec2_kops machine
(see 'remote-port-forwarding-sg' security group)
MAKE SURE BOTH THE IP ADDRESSES (Mac and ubox) are allowed in the security group
for both the SSH (39989 or 22) and ingress (32222) ports
SOMEHOW PORT 22 WORKS SO MUCH BETTER - cannot figure out why 39989 shows issues sometimes
================================================================
# AWS EC2 port-forwarding server side sshd_config configuration
================================================================
4. Modify sshd_config (not ssh_config) in AWS instance
Ref: https://www.ssh.com/ssh/sshd_config/
Next, edit /etc/ssh/sshd_config and set the options below:
sudo vim /etc/ssh/sshd_config
AllowTcpForwarding yes
GatewayPorts yes
TCPKeepAlive no => this allows the connections made to get freed up quickly. (important)
Restart ssh service: "sudo service ssh restart"
5. On ubox or raspberry pi, start autossh like in the command given below
Use -M 0 to disable the autossh monitor (echo) port and rely on the ServerAlive options to keep the TCP session alive
> /usr/lib/autossh/autossh -M 0 -o ServerAliveInterval=30 \
-o ServerAliveCountMax=3 -o ExitOnForwardFailure=yes \
-o StrictHostKeyChecking=no -o ConnectTimeout=10 \
-NR 32224:127.0.0.1:22 ubuntu@13.57.28.203 \
-i /home/pi/.ssh/aws_id_rsa -p 39989 \
>> $AUTOSSH_LOGFILE 2>&1
6. This can be converted to a systemd service
a. sudo vim /etc/systemd/system/autossh-iot-tunnel.service (a sketch of the unit file is given after step 6)
b. sudo systemctl daemon-reload
c. sudo systemctl start autossh-iot-tunnel.service
# to autostart the service at reboot
d. sudo systemctl enable autossh-iot-tunnel.service
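# A sketch of the unit file, reusing the autossh command from step 5 (user, key path, ports and
# host are the ones shown above; adjust as needed):
[Unit]
Description=AutoSSH reverse tunnel for the IoT box
After=network-online.target

[Service]
User=pi
ExecStart=/usr/lib/autossh/autossh -M 0 -N \
  -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
  -o ExitOnForwardFailure=yes -o StrictHostKeyChecking=no \
  -R 32224:127.0.0.1:22 -i /home/pi/.ssh/aws_id_rsa -p 39989 ubuntu@13.57.28.203
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target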
7. Few other options to look at:
a. Mosh - mosh.org
b. sshuttle
c. OpenVPN
Currently in use in the analyzer IoT:
> /usr/lib/autossh/autossh -i /home/agman/sshpem/ubuntu1403_keypair.pem -M 32248 -q -N
-o ServerAliveInterval=30 -o ServerAliveCountMax=1 -R 32224:localhost:22 ubuntu@34.209.176.237
At Olam:
/usr/lib/autossh/autossh -i /home/agman/sshpem/ubuntu1403_keypair.pem -M 32246 -q -N
-o ServerAliveInterval=30 -o ServerAliveCountMax=1 -R 32223:localhost:22 ubuntu@34.209.176.237
Best would be to do this:
mkdir AgShift_workarea/utils
cp AgShift_workarea/GIT_STUFF/raspberry_pi_repo/ssh_tunnel/tunnel.sh AgShift_workarea/utils
Then edit the tunnel.sh to use a unique port (available) and then put in crontab
@reboot /home/agman/AgShift_workarea/utils/tunnel.sh
8. From your Mac or laptop, use the following command to log in to the olam ubox using the ssh key instead of a password
> ssh -i ~/.ssh/id_rsa_olam_ubox agman@34.209.176.237 -p 32223 -v
====================
Important note:
Ref:
====================
This will send an ssh keepalive message every 60 seconds, and if it comes time to send another
keepalive but a response to the last one wasn't received, the connection is terminated.
The critical difference between ServerAliveInterval and TCPKeepAlive is the layer they operate at.
TCPKeepAlive operates on the TCP layer. It sends an empty TCP ACK packet. Firewalls can be configured to
ignore these packets, so if you go through a firewall that drops idle connections, these may not keep the
connection alive.
ServerAliveInterval operates on the ssh layer. It actually sends data through ssh, so the TCP packet
has encrypted data in it and a firewall can't tell whether it's a keepalive or a legitimate packet,
so these work better.
====================
To scp a file
====================
Example: Here we are copying 'test.py' from my Mac to the ubox for Driscoll's. You have to give the password
scp -v -P 32222 -i ~/.ssh/id_rsa_olam_ubox test.py agman@34.209.176.237:/tmp/
====================
Maintenance:
====================
To tell autossh manually that you want it to re-establish the SSH connection, run
kill -SIGUSR1 `pgrep autossh`
To kill autossh you can run
kill `pgrep autossh`
====================
Crontab (To start autossh on system reboot):
Not needed. Can be done using systemd - shown above
====================
@reboot /bin/sudo -u agman bash -c '/usr/bin/autossh -i /home/agman/sshpem/ubuntu1403_keypair.pem -M 32249 -q -N
-f
-o ExitOnForwardFailure=yes (not using this option anymore)
-o ServerAliveInterval=60 -o ServerAliveCountMax=3 -R 32222:localhost:22 ubuntu@54.193.21.30'
Screen
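# A few commonly used screen commands (sketch):
screen -S mysession           # start a named session
# detach with Ctrl-a then d
screen -ls                    # list sessions
screen -r mysession           # reattach
screen -X -S mysession quit   # kill a session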
Setting up uWSGI, Flask and Nginx
=============================
create ubuntu user in docker
=============================
adduser ubuntu
usermod -aG sudo ubuntu
================
Install nginx
================
cd /tmp/ && wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key
sh -c "echo 'deb http://nginx.org/packages/mainline/ubuntu/ '$(lsb_release -cs)' nginx' > /etc/apt/sources.list.d/Nginx.list"
sh -c "echo 'deb-src http://nginx.org/packages/mainline/ubuntu/ '$(lsb_release -cs)' nginx' >> /etc/apt/sources.list.d/Nginx.list"
vim /etc/apt/sources.list.d/Nginx.list
apt-get update
apt-get remove nginx-common
apt-get update
apt-get install nginx
nginx -v
(shows: nginx version: nginx/1.15.7)
Commands for nginx:
service nginx stop
service nginx start
service nginx restart
nginx -s quit (graceful stop: finishes serving existing connections)
Test the nginx configuration:
nginx -t or service nginx configtest
======================
Install flask, uwsgi
======================
pip install uwsgi flask
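# A minimal sketch of wiring them together (app.py exposing a Flask object named 'app', and the
# socket port, are placeholders; the real setup is in UWSGI/AgShift_README):
uwsgi --socket 127.0.0.1:3031 --wsgi-file app.py --callable app --processes 2 --threads 2
# nginx then proxies to it, e.g. inside the server block:
#   location / { include uwsgi_params; uwsgi_pass 127.0.0.1:3031; }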
nginx is running on port 6006 in gcp_prod.
Check it using the command in the docker: netstat -tulpn | grep LISTEN
Make sure everything is owned by 'ubuntu'. Since nginx and uwsgi run as ubuntu, writing to the log files or uploading/downloading to AWS needs 'ubuntu' privileges. Do the following.
1. chown -R ubuntu:ubuntu /var/www
2. chown -R ubuntu:ubuntu /var/log
3. chown -R ubuntu:ubuntu /tf_files/amit /tf_files/logs
4. Create a home directory for ubuntu: /home/ubuntu
5. chown -R ubuntu:ubuntu /home/ubuntu
6. su ubuntu
   a. aws configure
   b. Then restart nginx and uwsgi
Then follow UWSGI/AgShift_README to set it up
Sendmail/mail/mailx/ssmtp
In ubuntu 16.04
sudo apt-get install mailutils
Then configure /etc/ssmtp/ssmtp.conf
sudo su
cd /etc/ssmtp
vim ssmtp.conf
mailhub=smtp.gmail.com:587
AuthUser=support@agshift.com
AuthPass=ca$hc0w2018
AuthMethod=LOGIN
# Where will the mail seem to come from?
rewriteDomain=gmail.com
# Use SSL/TLS before starting negotiation
UseTLS=Yes
UseSTARTTLS=Yes
# Send email like this
echo "test message" | mailx -s 'test subject' user@agshift.com
echo "test message" | mail -s 'test subject using mail' user@agshift.com
How to bind v4l2 cameras to USB ports
I suggest you autocreate /dev symlinks using udev, using unique properties (serial number?
port number?) of your USB cameras. See a tutorial about udev rules for the details.
You can get the list of properties for your devices using:
sudo udevadm info --query=all --name=/dev/video1
or simply
sudo udevadm info /dev/video0
then
sudo udevadm info --query=all --name=/dev/video2
Find what's different and create a .rules file out of it inside /etc/udev/rules.d
(you can use 99-myvideocards.rules as a filename, say); let's say you want to use the
serial number, you'd get a ruleset that looks like:
ATTRS{ID_SERIAL}=="0123456789", SYMLINK+="myfirstvideocard"
ATTRS{ID_SERIAL}=="1234567890", SYMLINK+="mysecondvideocard"
After unplugging/replugging your devices (or after a reboot),
you'll get /dev/myfirstvideocard and /dev/mysecondvideocard that always point to the same devices.
******* Alternative *******
There are already symlinks in Linux, such as /dev/v4l/by-id/usb-046d_0819_92E84F10-video-index0
in the folder /dev/v4l/by-id/, so there is no need to do anything if one's program can accept
arguments other than /dev/videoX
--------------------------------------------------
We can leverage the following symlink, which is already there in ubox
cd /dev/v4l/by-id$
ls -ltr
usb-The_Imaging_Source_Europe_GmbH_DFM_37UX178-ML_31810275-video-index0 -> ../../video1
usb-The_Imaging_Source_Europe_GmbH_DFM_37UX178-ML_28810260-video-index0 -> ../../video0
I also created the rule in :
cd /etc/udev/rules.d$
sudo vim 99-imaging-source-camera.rules
ATTRS{ID_SERIAL_SHORT}=="28810260", SYMLINK+="imagingsourcecam0"
ATTRS{ID_SERIAL_SHORT}=="31810275", SYMLINK+="imagingsourcecam1"
How to start VNC on GCP machine
# Install tightvncserver, tasksel, kubuntu-desktop on GCP machine
# sudo apt-get install tightvncserver (This showed keyboard issues, so uninstalled it)
# sudo apt-get remove --purge tightvncserver
# sudo apt-get autoremove
https://www.linode.com/docs/applications/remote-desktop/install-vnc-on-ubuntu-16-04/
# good old vnc4server worked
# All Needed installations given below
sudo apt install vnc4server
sudo apt install tasksel
sudo apt-get install ubuntu-desktop gnome-panel gnome-settings-daemon metacity nautilus gnome-terminal
sudo apt install ubuntu-gnome-desktop
sudo tasksel install gnome-desktop --new-install
# To remove keyboard mapping issues
# Ref: https://bugreports.qt.io/browse/QTBUG-44938
export XKB_DEFAULT_RULES=base
Needed - can create this file from scratch
===========================
# Edit ~/.vnc/xstartup.
===========================
Insert the following lines
#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
x-window-manager &
gnome-panel &
gnome-settings-daemon &
metacity &
nautilus &
---------------------------
The following is not needed
# kubuntu-desktop installation may not be needed
sudo tasksel install kubuntu-desktop
#Ref: https://www.linode.com/docs/applications/remote-desktop/install-vnc-on-ubuntu-16-04/
sudo apt install sddm
sudo systemctl enable gdm
sudo systemctl start gdm
sudo systemctl status gdm
-------------------------------
# Launch vncserver
vncserver -geometry 1920x1048
# Enter vnc server password
test1234 => in test-sharing-server
hydravnc =>in dldev5hydra1
# Open respective ingress port 5901 in GCP firewall rule
see the firewall rule "vnc-1"
# to kill vnc
vncserver -kill :1
# If vncserver fails to start on :1, then perhaps it is locked. Remove the following files
rm -f /tmp/.X1-lock
rm -f /tmp/.X11-unix/X1
XML in Python
Creating XML objects.
Also look at the code in 'https://github.com/agshift/utilities.git' under scripts/XML-Python
To extend disk space on GCP/AWS drive
lsblk
# will show
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 750G 0 disk
└─sda1 8:1 0 500G 0 part /
# command that shows individual disk space
df
# growing the partition
sudo growpart /dev/xvda 1
# check again
lsblk
# now resize the FS
sudo resize2fs /dev/xvda1
When ubuntu GUI hangs
In case GUI hangs :
1) Open virtual consoles (tty2-tty5) : Ctrl+Alt+F2 (tty2) to Ctrl+Alt+F5(tty5)
Switching back to GUI (Ctrl+Alt+F1 or Ctrl+Alt+F7 depending on the installation)
2) In case the tty shell hangs, do a complete reboot (option not preferred):
(hold these three keys) Alt + SysReq + Shift
(and type) 'r e i s u b'
If SysReq key is not present, use PrintScreen key
OpenVPN
1. Install openvpn: sudo apt-get install openvpn
2. Copy the conf file to the /etc/openvpn folder
3. Stop and start the openvpn service: sudo service openvpn stop; sudo service openvpn start
4. To enable the openvpn service on boot up, run: sudo systemctl enable openvpn
5. sudo systemctl disable openvpn@multi-user.service
Setting up FTP server
# FTP example (from your own MAC/laptop)
ftp -o ~/Downloads/sampleFileAgShift ftp://agshiftftpuser:agshiftftpuser2019@35.166.196.102/sampleFileAgShift
sudo wget --no-passive --no-parent ftp://agshiftftpuser:agshiftftpuser2019@35.166.196.102/openVPN/id\_rsa.pub
# Create a security group in aws (ftp-sg)
Enable port 20-21 and 1024-1048 for some IP addresses, which would access the server
# Current AWS instance:
alias aws_hydra_cpu1_rsa='ssh -i ~/.ssh/id_rsa_olam_ubox ubuntu@35.166.196.102 -p 39989 -v'
sudo apt-get install -y vsftpd
sudo vim /etc/vsftpd/vsftpd.conf
Put the following lines:
##===============================================
chroot_local_user=YES
local_enable=YES
#chroot_list_enable=YES
#chroot_list_file=/etc/vsftpd.userlist
userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO
write_enable=YES
allow_writeable_chroot=YES
#userlist_enable=YES
tcp_wrappers=YES
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=35.166.196.102
#===========================
sudo adduser awsftpuser
sudo adduser agshiftftpuser
sudo usermod -d /home/awsftpuser/ awsftpuser
sudo usermod -d /home/agshiftftpuser/ agshiftftpuser
# password for both the user: <userid>2019
sudo echo "awsftpuser" | sudo tee -a /etc/vsftpd.userlist
sudo echo "agshiftftpuser" | sudo tee -a /etc/vsftpd.userlist
sudo service vsftpd status
sudo service vsftpd restart
or
sudo systemctl status vsftpd
sudo systemctl restart vsftpd
# To start the service at boot time
sudo update-rc.d vsftpd enable
or
sudo systemctl enable vsftpd
Command to find Internet-facing IP address:
myip="$(dig +short myip.opendns.com @resolver1.opendns.com)"
echo "My WAN/Public IP address: ${myip}"