Saturday, August 12, 2017

Using Tcpdump to dump and read network traffic

Another Quick FYI tip.

There are many network analyzer/reader utilities available on both the Linux and Windows platforms. There is of course Wireshark, the most popular GUI network protocol analyzer, but I prefer tcpdump as it is very easy to use.

Let's get started with some examples then. Say you want to capture all multicast data (from all interfaces) to a file; you run:

$> tcpdump -n "multicast" -vvv -w VideoStreamData.pcap

  • -w writes the captured traffic to the specified file path. 
  • -n avoids DNS lookups that convert host addresses to names. 

Or if you want to capture packets with a specific destination IP (shown as <dst-ip> below) and destination port 80 or 8080:

$> tcpdump -n "dst host <dst-ip> and (dst port 80 or dst port 8080)"

After you have captured network data in a pcap file with tcpdump, you can read the data packets as ASCII with the command:

$> tcpdump -A -r httpServerLogonMessages.pcap

This comes in handy for reading HTTP/web page traffic.

Or if you want to read the messages in both hex and ASCII along with the header data:

$> tcpdump -X -r MessengerCommunication_11July.pcap

(Note that lowercase -x prints the packet contents in hex only; uppercase -X prints both hex and ASCII.) This comes in handy when you want to check values in network byte order against their host byte order equivalents (or the other way round).
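As a quick illustration of what host vs. network byte order means for the values you see in such a hex dump, here is a minimal Python sketch using only the standard library (the sample bytes are made up):

```python
import struct

# The bytes of a 32-bit value as they would appear on the wire
# (network byte order is big endian).
wire = bytes([0x00, 0x00, 0x1f, 0x90])

# '!' interprets the bytes in network (big-endian) order, '<' in little-endian.
(as_big,) = struct.unpack('!I', wire)
(as_little,) = struct.unpack('<I', wire)

print(as_big)     # 8080        (0x00001f90)
print(as_little)  # 2417950720  (0x901f0000)

# Packing the value back for the wire round-trips to the same bytes.
assert struct.pack('!I', as_big) == wire
```

The same four bytes read as 8080 or 2,417,950,720 depending on the byte order you assume, which is exactly the kind of mix-up the hex view helps you spot.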

Read more about Big Endian to Little Endian conversion and vice-versa here

Sunday, August 06, 2017

Adding routes on a Windows machine

Just a small FYI article.

We have multiple P2P leased lines in our office, connecting our different offices within the city, apart from multiple internet connections.
When trying to access those systems, it's preferred to have them accessible over the leased-line network.

All of our networks merge onto a single LAN, so we need to add routes on our systems to tell them which traffic to direct through which router (that is, the specific router connected to the leased line of a given office).

For example, if the LAN segment of our office in OfficeA1 (shown as <network> mask <netmask> below) is accessible via the router that is the meeting point of one of the P2P links from here to OfficeA1 (shown as <gateway-ip>), run the following command from an elevated command prompt:

C:\>route add -p <network> mask <netmask> <gateway-ip>

Similarly, if another LAN segment is accessible via a different router IP:

C:\>route add -p <network> mask <netmask> <gateway-ip>
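What the OS does with these routes is a longest-prefix match against the destination address. As a rough sketch of that logic (the networks and gateway addresses below are purely illustrative, not our real ones), Python's standard ipaddress module can model it:

```python
import ipaddress

# Hypothetical route table: network -> gateway (illustrative addresses only).
routes = {
    ipaddress.ip_network('10.10.1.0/24'): '192.168.0.101',  # P2P link to OfficeA1
    ipaddress.ip_network('10.10.2.0/24'): '192.168.0.102',  # P2P link to OfficeB
    ipaddress.ip_network('0.0.0.0/0'):    '192.168.0.1',    # default: internet router
}

def next_hop(dest):
    """Pick the matching route with the longest prefix, like the OS does."""
    dest_ip = ipaddress.ip_address(dest)
    matches = [net for net in routes if dest_ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop('10.10.1.25'))  # 192.168.0.101 (goes over the leased line)
print(next_hop('8.8.8.8'))     # 192.168.0.1   (falls through to the default route)
```

Traffic for an office LAN matches the more specific /24 and takes the leased-line router; everything else falls through to the default route, which is why adding the /24 entries is all that's needed.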

Sunday, June 11, 2017

Troubleshooting Packet Drops in SolarFlare Onload 10G PCI Card

If you see lots of packet drops in your Onload-accelerated application even after going through the troubleshooting discussion we did over here, and you are still nowhere, you can investigate your application design, OS context scheduling, and how the application is reading/consuming the data packets. The potential reasons for packet drops could be that:
a) another task is interrupting the threads reading the traffic, or
b) you have run out of packet buffers because the socket receive buffers aren't being emptied (read()/recv() isn't called often enough).

It would be a good idea to monitor how many context switches the application/threads are experiencing, because if this suddenly increases when you have a problem, it indicates another thread is competing for processing time on that core.

You can check this using the following command:

# cat /proc/<pid>/status | grep ctxt_switches
voluntary_ctxt_switches:        58
nonvoluntary_ctxt_switches:     1
Replace “<pid>” with the process ID of your application. The “voluntary” count is the number of times the application blocked and another thread was allowed to run; the “nonvoluntary” count is the number of times the thread was stopped from running by the kernel so something else could run in preference.

You can check the individual threads for the process by monitoring the ‘status’ files in the underlying ‘task’ directory:

# grep ctxt_switches /proc/<pid>/task/*/status
/proc/<pid>/task/<task-n1>/status:voluntary_ctxt_switches:    630
/proc/<pid>/task/<task-n1>/status:nonvoluntary_ctxt_switches: 32
/proc/<pid>/task/<task-n2>/status:voluntary_ctxt_switches:    2
/proc/<pid>/task/<task-n2>/status:nonvoluntary_ctxt_switches: 0
/proc/<pid>/task/<task-n3>/status:voluntary_ctxt_switches:    1
/proc/<pid>/task/<task-n3>/status:nonvoluntary_ctxt_switches: 0
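If you want to watch these counters from a monitoring script rather than by hand, here is a minimal Python sketch (Linux only) that parses them out of /proc; the alerting policy is left to you:

```python
import os

def ctxt_switches(pid):
    """Parse the voluntary/nonvoluntary context-switch counters from
    /proc/<pid>/status (the same works for /proc/<pid>/task/<tid>/status)."""
    counters = {}
    with open('/proc/%d/status' % pid) as status:
        for line in status:
            if 'ctxt_switches' in line:
                key, _, value = line.partition(':')
                counters[key.strip()] = int(value)
    return counters

# Sample the current process; a real monitor would sample the traffic
# threads periodically and flag a sudden jump in the nonvoluntary count.
print(ctxt_switches(os.getpid()))
```

Sampling the per-task status files in a loop gives you the per-thread trend, which is what reveals a competing thread on the same core.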

Sunday, April 16, 2017

Using Docker Containers to build C++ Projects

With everyone jumping on the Docker bandwagon, why should you feel left behind? If you have your project hosted on Bitbucket, make sure you make use of the Pipelines feature to ensure that the project/product is always in a sane state.

Here is a sample bitbucket-pipelines.yml file to get you going with building C++ projects.

# This is a sample build configuration for C++ – Make.
# Check the Bitbucket Pipelines guides for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: gcc:6.1

pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - make
          - make test

Well, this is not going to be very useful if you use other libraries like Boost or Qt in your project, because the default image that Bitbucket uses only ships tools like gcc, python, maven, npm, java, etc. (read more about the image here). It does not pack other third-party libraries (like Boost) with it.

What do you do? You have the following options for specifying the image:
  1. the default image Bitbucket provides (which is not very useful; read about the image here), 
  2. your own image from Docker Hub (public/private), 
  3. an image hosted in a private registry (for detailed instructions read here). 
Well, for our little POC project, where we were testing the feasibility of the Pipelines feature, we decided to use the ilssim/cmake-boost-qt5 image.

CAUTION: Always check the Dockerfile of an image for security issues before using someone else's image. It's always better to build your own image, host it on Docker Hub, and use that.

Coming back to my example: our POC project referenced the Boost threading libraries, so here is my sample bitbucket-pipelines.yml file:

# This is a sample build configuration for C++ – Make.
# Check the Bitbucket Pipelines guides for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: ilssim/cmake-boost-qt5

pipelines:
  default:
    - step:
        script:
          - echo "This script runs on all branches that don't have any specific pipeline assigned in 'branches'."
    - step:
        script:
          - make
          - make test
    - step:
        script:
          - cd orchestrationService
          - cmake .
          - make

You can read more about branch workflows here.

Thursday, March 30, 2017

Block UDP Traffic via IPTables

While testing an application to verify all code flow paths, one of the scenarios demanded that the application handle a dropped UDP packet stream. We had a recorded UDP stream at hand which we could play at will.

To simulate UDP packet drops, we added a rule in iptables to drop/block all UDP packets destined for our UDP destination port 10222. The following command does it for you:

To block all UDP traffic destined for port 10222:
[root@paragpc ~]# iptables -A INPUT -p udp -i eth1 --dport 10222 -j DROP
  • -A adds/appends an iptables rule (to the INPUT chain here). 
  • -p specifies the protocol. 
  • -i = --in-interface eth1 (similarly, there is -o = --out-interface). 
  • -j specifies the jump target (DROP here). 
Refer to the man page of iptables for more details. 

However, don't forget to remove the rule once your test case is over.

To remove the rule dropping UDP traffic destined for port 10222 (-D is for deleting an iptables rule):

[root@paragpc ~]# iptables -D INPUT -p udp -i eth1 --dport 10222 -j DROP

If instead you wish to save the rules permanently, run the following command (source):

service iptables save
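On the application side, the effect of the DROP rule is simply that reads stop returning data. A tiny Python harness shows the behaviour the application must handle (the loopback address and 0.5-second timeout are just for illustration; it behaves the same with no sender at all, so it runs self-contained):

```python
import socket

# A UDP reader bound to the test port. With the iptables DROP rule in place
# on the receiving interface (or, as in this self-contained demo, with no
# sender at all), recvfrom() never returns data and the timeout fires instead.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('127.0.0.1', 10222))
sock.settimeout(0.5)

try:
    data, addr = sock.recvfrom(2048)
    print('received %d bytes from %s' % (len(data), addr))
except socket.timeout:
    print('no data within timeout -- handle the dropped stream here')
finally:
    sock.close()
```

A blocking read with a timeout (rather than a bare blocking recv) is what lets the application notice the drop and recover instead of hanging forever.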

Thursday, February 16, 2017

[Solution] Passwordless SSH connection not working after upgrading Git Bash on Windows

After I updated Git Bash to version 2.11.1 on Windows, my .ssh/config file was overwritten (wiped clean) along with the public/private key entries for passwordless SSH authentication.

Good thing I had a backup of my .ssh/config and key files with me.
Simply repopulating the .ssh/config file resolved the SSH hostnames, but it still asked me for my password. Then I added the following IdentityFile entries at the end of the .ssh/config file:

Host *
    ServerAliveInterval 300
    ServerAliveCountMax 2
    Compression yes
    CompressionLevel 9
    GSSAPIAuthentication no
    ForwardAgent yes  
    IdentityFile ~/.ssh/id_dsa
    IdentityFile ~/.ssh/vm_private_key

where id_dsa and vm_private_key were the private SSH keys for authentication.
Again, simply running ssh-add <path-to-key> did not work. It kept complaining:

parag@paragpc MINGW64 ~
$ ssh-add -l
Could not open a connection to your authentication agent.

So I ran the following command (source) :

parag@paragpc MINGW64 ~
$ eval $(ssh-agent -s); ssh-add /c/Users/parag/.ssh/id_dsa

Sunday, December 04, 2016

[Resolved] 'AmqpClient::AmqpLibraryException' in SimpleAmqpClient

I have an application using SimpleAmqpClient which was working fine in our dev environment. When I moved it to a staging environment, it started failing with the following exception:

terminate called after throwing an instance of 'AmqpClient::AmqpLibraryException'
 what(): a socket error occurred
Well, nothing had changed (don't we hear this all the time from devs: "we changed nothing in that RabbitMQ/AMQP module"), and the error looked cryptic with no details.
It turned out that a (new) login account had been created for connecting to the RabbitMQ server, and it hadn't been enabled to connect from a remote host. We turned that on via the configuration file.
And voilà, things started working :) . Read more about RabbitMQ configuration here.
