Tuesday, December 22, 2015

How to configure SSH tunneling for JMeter on remote host

We just need two TCP ports, one for each direction of the RMI traffic: a local forward (-L) so the client can reach the JMeter server, and a remote forward (-R) so the server can send results back to the client.

1. Client - modify the jmeter.properties file, setting remote_hosts:

remote_hosts=127.0.0.1:55511
client.rmi.localport=55512

2. Server - modify the jmeter.properties file, adding:

server_port=55511
server.rmi.localhostname=127.0.0.1
server.rmi.localport=55511

3. Connect to the server using:

  • Linux and Mac users
    ssh user@server -L 55511:127.0.0.1:55511 -R 55512:127.0.0.1:55512
  • Windows users
    putty.exe -ssh user@server -L 55511:127.0.0.1:55511 -R 55512:127.0.0.1:55512

4. Server - start jmeter

cd apache-jmeter-2.13/bin/
./jmeter-server -Djava.rmi.server.hostname=127.0.0.1

5. Client - start jmeter

cd apache-jmeter-2.13/bin/
./jmeter.sh -Djava.rmi.server.hostname=127.0.0.1 -t test.jmx
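
If you prefer to run without the GUI, the same setup works in JMeter's non-GUI mode; a minimal sketch using the standard -n (non-GUI), -r (start the hosts listed in remote_hosts) and -l (results file) flags, where results.jtl is just an example name:

./jmeter.sh -n -r -Djava.rmi.server.hostname=127.0.0.1 -t test.jmx -l results.jtl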

Tuesday, November 24, 2015

Connecting to remote Tomcat JMX instance using jConsole

Tested with Java 8.

1. Add these options to your Java startup script on the Tomcat remote-host:

-Dcom.sun.management.jmxremote.port=1616
-Dcom.sun.management.jmxremote.rmi.port=1618
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
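
On a stock Tomcat install these options usually end up in bin/setenv.sh, appended to CATALINA_OPTS; the exact script is an assumption, adapt it to however you start your JVM. A minimal sketch:

# bin/setenv.sh (assumed location)
CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote.port=1616 \
  -Dcom.sun.management.jmxremote.rmi.port=1618 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false"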

2. Execute this on your computer.

  • Windows users:
    putty.exe -ssh user@remote-host -L 1616:remote-host:1616 -L 1618:remote-host:1618
  • Linux and Mac Users:
    ssh user@remote-host -L 1616:remote-host:1616 -L 1618:remote-host:1618
This enables jconsole to connect to the remote-host through the two SSH tunnels.

3. Open jconsole on your computer

jconsole localhost:1616
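
If the short host:port form gives you trouble, the same tunneled endpoint can also be written as a full JMX service URL (standard JSR-160 form):

jconsole service:jmx:rmi:///jndi/rmi://localhost:1616/jmxrmi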

4. Have fun!

P.S.: in step 2, the -L options tell ssh that ports 1616 and 1618 on the local (client) host are to be forwarded to the remote side.

Saturday, August 29, 2015

Kubernetes - Azure - remove annoying message 'enter your password for the ssh key'

Trying Kubernetes on Azure has been a very painful experience, especially because I was unable to get either of the examples presented in the documentation running.

I discovered they are both buggy; in this post I explain how to work around the bug in the first one:

https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/coreos/azure/README.md

https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/azure.md

After successfully executing create-kubernetes-cluster.js, I was unable to follow the guide because every time I tried to connect via SSH I was prompted with an annoying message: "Enter your password for the SSH key".



After a long search I figured out the problem.
The easiest way to avoid this popup is to strip the passphrase from the SSH key, rewriting it in place with:

openssl rsa -in ./output/kube_7c84ff0d2685e5_ssh.key -out ./output/kube_7c84ff0d2685e5_ssh.key
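
openssl will ask for the key's passphrase one last time and then rewrite the file without encryption; if you want to double-check, the key header should no longer contain an ENCRYPTED line:

head -n 3 ./output/kube_7c84ff0d2685e5_ssh.key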

Finally you can execute:

ssh -F  ./output/kube_7c84ff0d2685e5_ssh_conf kube-00

and continue with the guide.

Friday, January 30, 2015

Vagrant AWS - Chef::Exceptions::InvalidDataBagPath

During Chef execution on an AWS provisioning run, Vagrant returned the following error:


==> default: Chef::Exceptions::InvalidDataBagPath
==> default: ------------------------------------

==> default: Data bag path '/tmp/vagrant-chef/fdc6caf9a3118279699556aed8f850d7/data_bags' is invalid

To fix this error, just create the data_bags directory (if it doesn't exist already) and add the chef.data_bags_path configuration to your Vagrantfile:


  config.vm.provision :chef_solo do |chef|

    # point Chef Solo at the local data_bags directory
    chef.data_bags_path = "./data_bags"

  end
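
If the directory is missing, creating an empty one next to the Vagrantfile is enough to make the path valid:

$ mkdir -p data_bags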

Thursday, January 29, 2015

AWS Vagrant "No host IP was given to the Vagrant core NFS helper"


While provisioning an AWS environment with Vagrant:

vagrant up --provision aws 

This command returned

No host IP was given to the Vagrant core NFS helper. This is an internal error that should be reported as a bug.

It was quite hard to understand why Vagrant was trying to use NFS at all, since it should be disabled: on AWS, synced folders are copied using rsync.
At last I found the issue "Error: No host IP was given to the Vagrant core NFS helper" in 0.2.0 #5 (https://github.com/gosddc/vagrant-vcenter/issues/5),

where it is suggested to use:

config.nfs.functional = false

If you have a multi-environment configuration like mine, I suggest using:

  config.vm.provider :aws do |aws, override|
    ...
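    # tell Vagrant that NFS is not functional for this provider, so the core NFS helper is never invoked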
    override.nfs.functional = false
  end

Wednesday, January 14, 2015

How to create a Vagrant Box for VirtualBox

I had to create a development environment, and the target was an old CentOS version (6.2).

The following steps, in my humble opinion, are also applicable to other versions of CentOS and maybe, with the right precautions, even to different distributions.

I downloaded the minimal ISO (CentOS-6.2-x86_64-minimal.iso) and created a new VirtualBox machine.

  • Choose to have enough memory.
  • Choose a hard drive that is large enough.
  • Disable unnecessary devices like USB and audio.
  • Set up SSH port forwarding (see the VBoxManage sketch after this list).
  • Configure the default CD/DVD drive to mount the CentOS-6.2-x86_64-minimal.iso file.
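
If you prefer the command line over the GUI dialogs, the SSH port-forwarding rule can also be added with VBoxManage; a minimal sketch, assuming the VM is named vagrant-centos-6.2-x64 (the name used when packaging at the end) and the default NAT adapter forwarding host port 2222 to guest port 22:

VBoxManage modifyvm "vagrant-centos-6.2-x64" --natpf1 "ssh,tcp,,2222,,22"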


Now you can boot the virtual machine and follow the step-by-step installation.

Leave everything at the defaults and set the root password to "vagrant".

When the virtual machine is up and running, log in as root/vagrant.

Now you need to configure the eth0 network card:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

and add/change the following lines:

ONBOOT=yes
BOOTPROTO=dhcp

Now reboot and install sudo:

# yum install sudo

Then you need to add the default vagrant user. First create its sudoers drop-in:

# vi /etc/sudoers.d/vagrant

and put this single line in it (with no leading #, it is not a comment):

vagrant ALL=(ALL) NOPASSWD:ALL

Then create the group and the user:

# groupadd -g 900 vagrant
# adduser -m -g 900 -u 900 vagrant

Set its password to "vagrant":

# passwd vagrant 

Now you need to install openssh-server and configure the SSH authorized_keys in order to enable Vagrant remote configuration and provisioning:

# sudo yum install openssh-server
# sudo yum install wget

# mkdir -p /home/vagrant/.ssh 
# chmod 0700 /home/vagrant/.ssh 
# wget --no-check-certificate https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O /home/vagrant/.ssh/authorized_keys 
# chmod 0600 /home/vagrant/.ssh/authorized_keys 
# chown -R vagrant /home/vagrant/.ssh

Now we need to adjust a few small settings to make everything work:

# vi /etc/ssh/sshd_config

and uncomment the line with:

AuthorizedKeysFile      .ssh/authorized_keys 
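
After changing sshd_config, restart the daemon so the change is picked up and make sure it starts at boot (standard CentOS 6 init commands):

# service sshd restart
# chkconfig sshd on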

Edit the sudoers configuration:

# vi /etc/sudoers

and comment out the line with requiretty, so it reads:

# Defaults requiretty

We have almost finished; we need to install the VirtualBox Guest Additions. To do this, we first need to install a few required packages:

# yum install gcc make kernel-devel perl acpid

Now you can mount the Guest Additions CD from the VirtualBox menu:

VirtualBox Machine -> Devices -> Insert Guest Additions CD Image... (Host + D)

# sudo mkdir /media/cdrom
# sudo mount /dev/cdrom /media/cdrom/
# cd /media/cdrom/
# ./VBoxLinuxAdditions.run 

In my case, during the VBoxLinuxAdditions execution I got the following error:

Building the VirtualBox Guest Additions kernel modules
The headers for the current running kernel were not found. If the following
module compilation fails then this could be the reason.
The missing package can be probably installed with
yum install kernel-devel-2.6.32-220.el6.x86_64

Building the main Guest Additions module                   [FAILED]
(Look at /var/log/vboxadd-install.log to find out what went wrong)

You can avoid the problem by installing the required package manually:

# cd ~/
# wget ftp://ftp.pbone.net/mirror/ftp.scientificlinux.org/linux/scientific/6.0/x86_64/updates/security/kernel-devel-2.6.32-220.el6.x86_64.rpm

# yum install kernel-devel-2.6.32-220.el6.x86_64.rpm
# cd /media/cdrom/ 
# ./VBoxLinuxAdditions.run 

Verifying archive integrity... All good.
Uncompressing VirtualBox 4.3.20 Guest Additions for Linux............
VirtualBox Guest Additions installer
Removing installed version 4.3.20 of VirtualBox Guest Additions...
Copying additional installer modules ...
Installing additional modules ...
Removing existing VirtualBox non-DKMS kernel modules       [  OK  ]
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module                   [  OK  ]
Building the shared folder support module                  [  OK  ]
Building the OpenGL support module                         [  OK  ]
Doing non-kernel setup of the Guest Additions              [  OK  ]
Starting the VirtualBox Guest Additions                    [  OK  ]
Installing the Window System drivers

Could not find the X.Org or XFree86 Window System, skipping.

Now your VirtualBox machine is ready to be used as a Vagrant box. You can shut it down and package it with the following command:

$ vagrant package --base vagrant-centos-6.2-x64
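
vagrant package writes a package.box file in the current directory; a minimal sketch of adding it to Vagrant and booting a machine from it, where the box name centos-6.2-x64 is just an example:

$ vagrant box add centos-6.2-x64 package.box
$ vagrant init centos-6.2-x64
$ vagrant up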