Sunday, December 1, 2019

How to connect to a GCP VM and deploy Docker images from your GCR



Open PuTTYgen, get the username from the name column, and generate public and private keys (PPK).
Copy-paste the public key data generated on the PuTTYgen screen into the SSH section of the GCP VM.

Now use the username and add the PPK to the Auth section of the PuTTY session. Boom, you are in...

Make sure your VM's service account is added as a Storage Admin, else you won't be
able to pull Docker images from the VM shell.

Use the below command:

gcloud projects add-iam-policy-binding projectname \
    --member=serviceAccount:92xxxxx9-compute@developer.gserviceaccount.com \
    --role=roles/storage.admin

Yeah, try to pull a Docker image in your newly baked VM; you are in business...
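A minimal sketch of the pull itself, run inside the VM (my-project and my-image are placeholder names):

# let Docker authenticate to GCR with the VM's service account credentials
gcloud auth configure-docker

# pull a hypothetical image from your project's registry
docker pull gcr.io/my-project/my-image:latest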

Sunday, November 24, 2019

Make Docker work on Ubuntu 18 running on VirtualBox with Windows 10 host, behind a company proxy/zscaler/MITM

If you have an Ubuntu VM running on Windows and are struggling with SSL handshake errors and not able to
open any website, please follow the instructions below.

Firefox:

Firefox has its own certificate manager, so export the certificate from the lock icon
of any website that is giving the error and save it in the CRT format.
Now go to Firefox settings and import this certificate, and it will start working.
Restart Firefox.

Chrome:
Chrome has its own certificate database; use the above CRT file and run the below command.

certutil -d sql:$HOME/.pki/nssdb -A -t "CP,CP," -n CertNickName -i cert_file.crt
Restart Chrome.
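Note that certutil comes from Ubuntu's libnss3-tools package, so install that first if the command is missing:

sudo apt-get install libnss3-tools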

Docker:
If you try to run docker search or docker run hello-world,
you will end up with an error like the one below.

x509: certificate signed by unknown authority.

Well, Docker won't work out of the box if you are behind a proxy/zscaler/corporate MITM.

These instructions are for Ubuntu 18; not sure about other versions.

Go to the registry URL on the host machine and open it in your browser.
Click on the lock icon and look at the certificate chain; it will be a series of CAs.

Now we need to export all of them in Base64 CER format, rename them to the .pem type in
your VM (Ubuntu) with cp or something, and move all the certificates to
/usr/local/share/ca-certificates
Now use the below tool to convert PEM to the CRT type; this is important because Ubuntu won't recognize any other format.

openssl x509 -in foo.pem -inform PEM -out foo.crt

Now run:

$ sudo update-ca-certificates 

You should see a message that x certificates were added. Then restart Docker:

$ sudo service docker restart

You should now be able to search for and pull images
from Docker Hub behind a corporate proxy.
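Putting the conversion and import steps together, a minimal sketch, assuming the exported certificates were already copied as .pem files into the ca-certificates directory (file names are placeholders):

cd /usr/local/share/ca-certificates
# convert every exported PEM file to the .crt extension Ubuntu expects
for f in *.pem; do
    sudo openssl x509 -in "$f" -inform PEM -out "${f%.pem}.crt"
done
sudo update-ca-certificates    # import the new CAs into the system store
sudo service docker restart    # make the Docker daemon pick them up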

Monday, November 18, 2019

Docker cheatsheet

Copy files to/from a Docker container



docker cp foo.txt mycontainer:/foo.txt
docker cp mycontainer:/foo.txt foo.txt
docker cp src/. mycontainer:/target
docker cp mycontainer:/src/. target


docker ps
docker history imagename
docker exec -it container /bin/bash


A command-line volume (docker run -v host_dir:container_dir) will mount a host folder into the container, while a VOLUME instruction in the Dockerfile will initiate an empty anonymous volume under /var/lib/docker/...


Remove containers 

docker rm -v containername // -v also deletes the container's anonymous volumes
docker inspect containername | grep -i volume


Back up Docker images to a tar


docker save -o backup.tar imagename
docker load -i backup.tar

Back up live containers // won't back up volumes


docker commit/docker export/docker import 


Back up Docker volumes
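A common sketch for this, assuming a container named mycontainer with a volume mounted at /data (both names are placeholders):

# archive the volume contents into the current host directory
docker run --rm --volumes-from mycontainer -v $(pwd):/backup ubuntu \
    tar czvf /backup/data-backup.tar.gz /data

# restore the archive into another container's volume
docker run --rm --volumes-from newcontainer -v $(pwd):/backup ubuntu \
    tar xzvf /backup/data-backup.tar.gz -C /
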
Move local Rocket.Chat to production

If you have a dev environment in VSC and you want to move it to production by creating Docker images,
please read ahead.

Create a Docker image out of your Meteor installation:

meteor build --server-only --directory /tmp/rc-build
cp .docker/Dockerfile /tmp/rc-build
cd /tmp/rc-build
docker build -t someimage .

Check your local Meteor Mongo port and take a dump:


mongodump -h 127.0.0.1 --port 3001 -d meteor  --forceTableScan  


// dbname is meteor by default for dev; use the local mongodump CLI in Ubuntu, and if it's missing install mongo-tools.
// you will have to move this folder as a gzip file to the mongo container and do a restore.
tar -zcvf meteor.tar.gz meteor/   

docker cp meteor.tar.gz mongo:/   // mongo is the container name; this places the gz file in the root.



mongorestore -d rocketchat dump/meteor // db name is rocketchat by default for official images

// make sure to check db names while importing; use a GUI tool like Robo 3T to browse Mongo
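The restore itself runs inside the mongo container; a minimal sketch of that step, assuming the archive was copied to / as above:

# open a shell in the mongo container, unpack the dump, and restore it
docker exec -it mongo bash
tar -zxvf /meteor.tar.gz
mongorestore -d rocketchat /meteor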



To run the new images, run the below:
$ docker run --name mongo -d mongo:4.0 --smallfiles --replSet rs0 --oplogSize 128
$ docker exec -ti mongo mongo --eval "printjson(rs.initiate())"

// default Rocket.Chat images look for a mongo container to connect to on 27017, with the name mongo.

 docker run --name rocketchat -p 80:3000 --link mongo --env ROOT_URL=http://localhost --env MONGO_OPLOG_URL=mongodb://mongo:27017/local -d someimage

// Dockerfile for Rocket.Chat:
https://github.com/RocketChat/Rocket.Chat/blob/develop/.docker/Dockerfile

Wednesday, October 23, 2019

Upload an XL/CSV file to S3 and process it via Lambda for DB insert

Create a bucket.
Create a Lambda function and add S3 as its trigger.
When you create the Lambda, a role will be created for you; add additional policies to it: a DynamoDB and an S3 policy.
Now add the following code to the lambda_handler function of the Lambda:
========================================
import json
import boto3

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')

def lambda_handler(event, context):
    # bucket and key come from the S3 event that triggered the Lambda
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    obj = s3.get_object(Bucket=bucket, Key=key)

    rows = obj['Body'].read().decode('utf-8').split('\n')

    table = dynamodb.Table('entity')

    print(len(rows))
    # rows[1:-1] skips the header row and the empty element
    # left behind by the trailing newline
    with table.batch_writer() as batch:
        for row in rows[1:-1]:
            batch.put_item(Item={
                'name': row.split(',')[0],
                'address': row.split(',')[1]
            })
           
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

===================================
^^ make sure you have a DynamoDB table named entity with the fields name and address;
you can add as many fields as you want.

Now upload the CSV to S3 and you should see the Lambda processing your file and inserting into DynamoDB.

=============================
Sample csv
name,address,city
target,23230,austin
walmart,77707,houston
macy,80808,dallas


The header row is skipped by the rows[1:-1] slice in the Lambda's for loop.
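To trigger the pipeline from a terminal, a quick sketch with the AWS CLI (the bucket name is a placeholder):

aws s3 cp sample.csv s3://my-bucket/sample.csv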

Friday, October 18, 2019

Connect to Postgres on AWS problem

Create a new RDS instance, but make sure you select the advanced options
and create a starter database, else you won't be able to connect to
Postgres and will get errors.

So an initial DB has to be there if you are connecting via SQL Workbench,
Eclipse, or psql.
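For example, a minimal connection sketch with psql (the endpoint and database name are placeholders):

psql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -p 5432 -U postgres -d starterdb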

Connect to EC2 via Windows

Download the PEM file from AWS and convert it to a PPK via PuTTYgen.
Create a new session, and in the Auth tab browse to the PPK.
Enter the username as ec2-user and the host as the public DNS of the EC2 instance.

Tuesday, October 1, 2019

How to run Ubuntu 18 on SSD and Windows 10 on HDD using dual boot

Get two pen drives and make one of them the recovery disk for Windows.
Just search for "recovery disk" on Windows and it will help you create
one; it might take 2 hours.
Now your backup plan is ready in case things go south.

Download the ISO from the Ubuntu site and burn it onto the other pen drive using Rufus.
Change the boot order in the Windows BIOS to boot from the pen drive first and
move the HDD after that.

Restart the PC, let it boot from USB, and select the install option.
Navigate to the page where it asks if you want to install alongside
Windows, erase the disk, or select something else.

You need to select "something else". Assuming your SSD is 256 GB and
your RAM is 8 GB,
identify your SSD in the disk section and make two partitions:

e.g. /dev/sdb5 230 GB / on WD256DIAXX


swap, primary, at the end: 16 GB
ext4, primary, at the front: the left-over ~230 GB

MOST important: make sure the boot loader (option at the bottom) is set to
the master HDD where the Windows MBR resides, so that
GRUB is updated correctly. If you mess up this option, Ubuntu won't be
added to GRUB.

Now just click through and it will install and reboot; after that it will work fine.
Ubuntu will run off the SSD with a GRUB entry, and Windows will boot
from GRUB on /dev/sda.

It worked for me !

Wednesday, September 25, 2019

Log in to the Docker VM (default VM) without a password

ssh -i <path to id_rsa in the .docker/machine folder> docker@<ip of the vm>
It's good to have the VM network bridged so that the IP is reachable from the host.

Log in to Minikube via PuTTY

Generate the PPK via PuTTYgen: import the id_rsa file from the minikube/machines folder via the Conversions tab,
save the PPK, and use it in the PuTTY SSH options.
Make sure you select ssh-rsa (2048 bits) while saving the PPK, else it won't work.
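For reference, the non-PuTTY route, assuming docker-machine's default key location (the exact path is an assumption; check your .docker/machine folder):

# SSH straight into the docker-machine VM
ssh -i ~/.docker/machine/machines/default/id_rsa docker@<vm-ip>

# minikube ships its own shortcut
minikube ssh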

Tuesday, August 13, 2019

on_picture_save event motion.conf raspberry pi

If you are trying to fire an FTP or shell/java/python command on the on_picture_save event and it is highly likely
that it is not firing, the reason could be that the motion user is not able to run your command at all.
Do a switch user to pi and try to run that command manually to see if there are any permission issues.
Do a chmod/chown of the file/dir your command is interacting with, check the absolute/relative paths your command
is executing, also do a grep on /var/log/*.log for errors on your command name, and check the motion log file at:

/tmp/motion/motion.log 
Enable the debug log inside the motion.conf file by changing the log level to 8.
Also, if you are running motion on jessie after upgrading from wheezy, these kinds of problems will occur.
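For orientation, a sketch of the relevant motion.conf lines (the script path is a placeholder, and the option names are per recent motion versions; %f expands to the saved picture's path):

on_picture_save /home/pi/scripts/upload.sh %f
log_level 8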

Thursday, March 7, 2019

cloud foundry resource exhaustion event

If you are trying to push a simple Java app using the Java buildpack (which uses OpenJDK 8)
and you are facing a resource exhaustion issue, it could be because of bad memory
parameters. Try increasing -XX:MaxMetaspaceSize if your jar is big in size.

For e.g. if your jar is 90 MB, it should be at least 200 MB; I wasted a lot of time on this.
You can set this parameter in the app settings on PCF, under user environment variables.
Restage, and it should not give you resource exhausted warnings...

Symptom logs

//////////////

 Internal Error (javaCalls.cpp:53), pid=xx, tid=xxxxxxxxxxxxxxxxxxxxx
2017-10-11T17:05:31.80+0200 [APP/PROC/WEB/3]OUT # JRE version: OpenJDK Runtime Environment (8.0_144-b01) (build 1.8.0_144-b01)
2017-10-11T17:05:31.80+0200 [APP/PROC/WEB/3]OUT # Java VM: OpenJDK 64-Bit Server VM (25.144-b01 mixed mode linux-amd64 compressed oops)
2017-10-11T17:05:31.80+0200 [APP/PROC/WEB/3]OUT # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
2017-10-11T17:05:31.80+0200 [APP/PROC/WEB/3]OUT #
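One way to set the parameter from the CLI, a sketch assuming the Java buildpack picks the flag up from the JAVA_OPTS environment variable (the app name is a placeholder):

cf set-env myapp JAVA_OPTS "-XX:MaxMetaspaceSize=200m"
cf restage myapp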

pcf tcp routing ... will make you cry..

TCP routing in PCF can be a pain in the A***, believe me.
Here is what you can do. Say your app, Spring or whatever, is listening on a port like 8080,
exposed via SERVER_PORT=$PORT by the buildpack; this can also be configured in Spring Boot's
application.properties, like server.port = 9080 or whatever.

The point is: if it's an HTTP app, then PCF will directly route traffic via a route to your app on 80/443; otherwise you will have to create
a TCP domain, a route, and a router group. You can use the default TCP router group and get the traffic on a non-HTTP port.

Also see if you can run your non-HTTP app on 443, if that's possible.
If you are bound to use some other port, you've got to configure all this.

You can SSH into the CF app using cf ssh appname and do a netstat -an and ps -ef to see which
port your app is listening on. You can also run java directly via /app/.java_buildpack/.././java -cp and your main file name
if you have a normal Java app and you are listening on a port inside your code.

I tried to curl from outside the container to a Spring hello world, and it let me curl the HTTPS endpoint even
though Boot was running on 8008 in the container. But when I tried to run a Java jar listening on 9877,
curling the HTTPS endpoint gave a 502 Bad Gateway. I guess because the endpoint was expecting
non-HTTP traffic, I mean TCP traffic, and the CF router gives the bad gateway error.

The point is: get admin rights and configure TCP routing, or find out if your app can run on 443/80 even with TCP traffic. Try it.
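For the admin-side setup, a sketch with standard cf CLI commands (the domain name and port are placeholders):

# create a shared TCP domain backed by the default router group
cf create-shared-domain tcp.example.com --router-group default-tcp

# map a TCP route on a specific port to the app
cf map-route myapp tcp.example.com --port 9877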



Tuesday, March 5, 2019

How to configure Pivotal Cloud Foundry on-prem/laptop

If you have hit the limitations of a PCF trial account and are not able to host anything due to resource constraints,
you can configure the Pivotal on-prem single-node virtual machine using pcfdev.

Prerequisites:
a Windows laptop with 8 GB RAM, VirtualBox 5.2, and the pcfdev plugin downloaded from Pivotal.
Make sure you get version 30; lots of bugs are fixed in it. The cf CLI is also required.

Now go to the folder where pcfdev was downloaded and extracted // do not run pcfdev inside a VM, else it won't work
and run PowerShell in admin mode before starting dev:
cf install-plugin cfdev
cf dev start // it will take ages, like an hour or so; let all the services come up
if all is well you should see "58 of 58 services started" and a hello message, else break your head
cf dev ssh will land you inside the machine; it will be provisioned in VirtualBox
To debug, ssh to dev using cf dev ssh, check the /var/vcap/monit folder, and sudo tail the monit.log
also you can grep for drain and the other components in /var/pcfdev/run.log
check here: /var/vcap/sys/log/syslog_drain_binder/syslog_drain_binder.log
admin/admin are the creds to log in to the API endpoint
cf login -a api.local.pcfdev.io --skip-ssl-validation // user and pass should work
you will be able to see org/quota/space
cf create-space myspace -o orgname
cf target -s myspace
now create a space and push your app with cf push -f manifest.yml
sample manifest.yml:

applications:
- name: fixserver
  memory: 1024M
  buildpack: https://github.com/cloudfoundry/java-buildpack.git
  path: server.jar
  hostname: fixserver
  health-check-type: process

A health check type of process will at least help your app boot; otherwise CF will check the port
and mark your app as a failure.

It should push your app to the PCF cloud. Now you can log in to the PCF dashboard and see the logs,
or from the command line:
cf logs fixserver // you can also ssh into the app container with cf ssh appname

To deploy a Docker image instead (the feature flag must be enabled first):
cf enable-feature-flag diego_docker
cf push appname --docker-image <image url from Docker Hub>

Heroku: clone a running app and update it with the CLI

Download the Heroku CLI and change the SSH keys if you have a previous installation of keys from some other account:
$ heroku auth:logout
$ heroku auth:login
Open the web page and authenticate with your creds; it will log you in.

Download your code from a running app with:

heroku git:clone -a APP-NAME

git add .   // if you have changed a file
git commit -m "dfd"
git push heroku master

You can see your updated app after the push is complete 

Tuesday, January 8, 2019

How to run a jar file as an image in openshift.

Boot your Docker host; it will be a boot2docker ISO booted on VirtualBox.

Get the jar first: either make a runnable jar in Eclipse or create one with the jar -cfm command.

Once the jar is ready, make a simple Dockerfile, but make sure you have the right permissions on
the mounts, else it will work fine in Docker but not in OpenShift.


Sample Dockerfile:

FROM java:8
WORKDIR /app
ADD server.jar server.jar
# give the root group the same permissions as the owner, since OpenShift
# runs containers with an arbitrary UID that belongs to GID 0
RUN chgrp -R 0 /app && \
    chmod -R g=u /app
EXPOSE 9877
CMD java -jar server.jar


Now, using WinSCP on port 22, transfer this Dockerfile and jar to the Docker host (default username/password: docker/tcuser).
Now run a build from that directory:

docker build -t fixserver .

Now try to run it:
docker run -i -t -p 9877:9877 fixserver


If it runs fine, push it to Docker Hub; just log in and push, but make
sure the tag follows the standard docker.io/uname/image:latest

Now run Minishift from the C drive: open cmd and run minishift start --vm-driver virtualbox.
Also set the nodosfilewarning flag if using Cygwin.

Now open the console: type minishift console and hit enter.

Now go to Add to Project, deploy an image, and take the second option:
search for the image URL in the form docker.io/uname/imagename

It will give a warning about root; continue with the deployment,
and expose it via a route or as an external service on the node port.

e.g $ oc expose dc mariadb --type=LoadBalancer --name=mariadb-ingress

$ oc export svc mariadb-ingress

It will open some random port on the node and start routing it to the pod port.

Check the binding with oc get svc.

Do a telnet from the laptop to the Minishift VM host IP on that random port;

it should listen for your service on that port and route it to any of the pods.


Ref.

sudo docker exec -i -t b359ba5cb67f /bin/bash   // log into a container

Docker image not running on OpenShift, giving file not found

If your Docker image, which was running earlier in a raw container, fails to boot
on OpenShift and starts giving file-not-found or directory-not-found errors,

you need to check the permissions. Your root-account-enabled image
will not work on OpenShift, and you will have to rewrite the Dockerfile with
appropriate users/directories and permissions.

The mounts will have to be writable; these kinds of problems are not there
when running as root inside Docker.
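A sketch of such a rewrite, assuming group-0 permissions plus a non-root USER line are what OpenShift's arbitrary UIDs need (paths and port are illustrative):

FROM java:8
WORKDIR /app
ADD server.jar server.jar
# OpenShift assigns a random UID but always GID 0, so the group
# needs the same rights as the owner on everything the app touches
RUN chgrp -R 0 /app && chmod -R g=u /app
# run as a non-root user to avoid the root-image warning
USER 1001
EXPOSE 9877
CMD java -jar server.jar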